A Conversation with Ásgeir Gunnarsson about Power BI in the Enterprise

As I continue to explore and develop best practices for managing serious business-scale Power BI solutions, I’m having conversations with recognized community leaders. Last month I chatted with Ásgeir Gunnarsson on the SQL Train ride from Portland to PASS Summit in Seattle. Ásgeir is a data platform MVP and seasoned Business Intelligence expert from Reykjavik, Iceland who works as Chief Consultant for Datheos, a Microsoft-focused BI and Analytics consultancy in Copenhagen. He leads the Icelandic Power BI User Group and PASS Chapter.

He gave an inspiring presentation at Oregon SQL Saturday about Enterprise Power BI Development. You can view his presentation deck from the Schedule page here.

Ásgeir talked primarily about the development life cycle for projects centered around Power BI, along with data and object governance. As I've mentioned in my earlier posts on this topic, the development experience for BI projects in general is different from application development and database projects and you cannot use the same management tools – at least not in the same way. He promoted using OneDrive for Business to manage version control.

He shared several excellent resources, many of which I either use or have evaluated, to help manage Power BI projects. The ALM Toolkit is a useful tool for comparing objects in great detail between two PBIX files. Ásgeir also showed some efforts from community contributors to automate change-tracking and file-level source control (which really made the point that it's a difficult thing to do with Power BI). We know that Microsoft is working on an integrated release management solution for the Power BI service, which may reduce or replace the need for existing tools.

Guidance for publishing and managing the life cycle for Power BI solutions included deployment automation using OneDrive and PowerShell. Using multiple workspaces for development, testing and pre-production, deployment can be managed with the Power BI REST APIs and PowerShell scripts, which is detailed in this tutorial. These PowerShell examples demonstrate how to clone workspace content, publish files and rebind data sources.

Regarding governance and security, he made reference to the extensive Microsoft whitepaper: Planning a Power BI Enterprise Deployment. He stepped through diagrams that help simplify each of the important processes and tasks for developing, deploying and managing Power BI solutions.

If you need to manage Power BI solutions, I encourage you to review his presentation and you can connect with Ásgeir on LinkedIn and Twitter.

Setting up Power BI project Team Collaboration & Version Control

Team file sharing and version control may be managed through Power BI workspace integration with OneDrive and SharePoint team sites. In this post, I will show you how to get started and set up a new project.

In this demonstration, I'm going to show you how to set up a workspace and a team collaboration site so that you can share Power BI files among members of your team and manage version control. In another post, we'll discuss the nuances and differences in version control, build processes and DevOps between a Business Intelligence project and an application development project – and how those are different experiences. An important lesson is to learn how to work with this tool the way that it was designed to be used rather than to try to force Power BI to work with incompatible build management and version control tools. Power BI, along with companion technologies like Analysis Services and Reporting Services, can participate in an application lifecycle management (ALM) and continuous integration (CI) framework, but sometimes not in the way we might imagine. This post addresses a piece of that larger puzzle.

Get Started in the Office Portal

If your organization uses Office 365, you have all the tools that you need. The first thing to do is go to the Office Portal at office.com. Click on the App menu and choose "SharePoint". Here, we're going to create a new site. I'm going to choose the Team Site template and this is going to be for my Contoso Sales project. We'll give the site the name "Contoso Sales Project" and click "Next". The only other thing I really need to do here is add some users as owners of this site. You can add an Office 365 group or you can add users later on. I'll go ahead and add a couple of users and finish, and that creates the site in SharePoint Online. Here, you see my new site. What happens is that a new group is created for this site in Azure Active Directory and we don't need to do anything extra. You're going to see that here in the Power BI portal.

Create a New Workspace and Assign a OneDrive Group Alias

I'm going to create a new workspace for development. Depending on the formality of your project, you could have a DEV, a TEST and a Production workspace. You can create one or perhaps two workspaces for a project. I'll suffix the name to indicate that this is going to be my DEV workspace. In the Workspace Settings, I'm going to choose the Contoso Sales Project group that was automatically created for my site as my workspace OneDrive group. That gives every member of that group access to this shared space and gives me the ability to sync files with this workspace.

Add a Folder and Set Up Sync

Let's go back to SharePoint Online and view the team site. Since the site was just set up, you'll see that it shows up at the top of the most recent list. The next thing that I'm going to do is add a folder that I can sync with my desktop so I can share files with other members of my team. We'll go to the Documents library within the new site and here I'm going to add a new folder. I'll click New, then Folder, and we'll give the folder a name: this is going to be my Contoso Sales Project Files folder. We'll create that folder and then the next thing I'm going to do is configure that folder to sync with my desktop. Each member of your team can do the same thing, which will enable them to put copies of the files within this synced folder onto their desktops. You always want to use Power BI Desktop to work in a local file folder. So, there's my synced folder on my desktop. It's probably a good idea to add that to my Windows Quick Launch bar or add it to my Favorites so that I can get to it quickly.

I'll put a PBIX file into this folder and, going forward, this is where I'll do all of my Power BI development work. I have a file from an existing project that I'll use for simplicity's sake. I'm just going to add a .PBIX file and designate it as my dataset file. In another post, I'll talk about separating datasets from reports – which is something that I routinely do, especially in larger, more formal projects. I'll go ahead and just copy and paste a .PBIX file and then rename it. Eventually I'll have one dataset .PBIX file that will contain all of my data, and that's what this file is for. It currently contains a report with pages in it. When I make a change to that file, it immediately syncs – which you can see in the OneDrive app. If you just click on the OneDrive icon, you'll see that it will sync up. This is a fairly small file so it goes pretty quickly.

Publish a File from the Synchronized Folder

Back to the workspace: I'm on the Datasets page and I'm going to click "Get Data", which takes us to a page where we can see the Files tile. I'm going to click "Get" on that tile and that's going to show me that there's a new OneDrive tile with the name of the new team site. I'll click that and it takes me to the team site where I'll see the folder that I had created. I click that and now I see the .PBIX file.

When I choose "Connect", that actually imports the file as a new dataset in the service. Using this method to publish the dataset will keep that file synced up and any changes I make will automatically be deployed. There is no longer a need to deploy updated dataset or report files from Power BI Desktop. This file will remain synced up all of the time. I can set up a gateway, schedule refresh and do all of the other things that I would normally do with a Power BI file as if I had deployed it from Desktop. Using this method, it's going to remain synced up through this SharePoint Online folder, which is managed through the OneDrive for Business application.

Testing and Demonstrating Synchronization

I’m going to open this file in Power BI Desktop and just make a simple change and save the file. We should see that it automatically gets synched-up. I’ll do that a couple of times and then we can go take a look at the automatic version control that takes place through SharePoint Online. OneDrive For Business is actually SharePoint Online under the hood. When you create a team site, it essentially creates a OneDrive For Business folder. That’s how all of that magic happens.

Incidentally, while syncing the file – as you can see, I just got an alert and an invitation from SharePoint that welcomes me to the site. Each member of your group will automatically get an invitation like this with a link to the folder that I've shared with them. I've changed the folder view to large icons and you can see the little green check mark icon that shows that the file is synced with my desktop.

Make sure to always open the .PBIX file from the synced folder on your computer rather than from the SharePoint site through your browser. If you make a change to the file in Power BI Desktop, you'll see the changes sync within seconds to minutes, depending on the file size. It doesn't really matter what kind of change I make… I'll just go to the data model diagram view and make a change… I moved a table around (imagine that I'm adding a measure, adding a relationship or adding anything to the table that would constitute a change)… this flags the file as changed and causes me to be prompted to save it. Now we go back to File Explorer and notice the little sync icon, which I'm only going to see for a few seconds in this case. Because it's a small file, it's quickly synced up again.

Edit Local Files in the Synchronized Folder

After making changes, always close Power BI Desktop so the file isn't locked by the application. That releases the file so that it can be locked momentarily by the OneDrive process and synced with the online service. You can watch the progress here in the OneDrive app. If this were a large file, you'd see a little progress bar and it might take a couple of minutes to synchronize. All members of your team will see the new file here in the file system afterward.

Using Version Control and Team Collaboration Features

From File Explorer, choose the "View Online" menu option to go to the SharePoint library in your browser. You can use the ellipsis menu next to the file name to check out the file, which prevents changes by other team members while you have it locked. You can also use the "Version History" menu option like I'm doing here. You can see that every single change generates a new version of the file. If I have admin access – as I do here – I can choose a version and restore or revert back to that version. You have complete control here, so this gives us a really good version control story for Power BI and it gives us the ability to collaborate as members of a team.

Next Level Power BI – M Language Basics

This is a series of posts and exercises from a recent full-day workshop called Next Level Power BI. Check back for additional lessons and freely post your questions and feedback in the comments section after this post.

Power Query is a modern data acquisition and transformation technology and it is the "Get Data" component of Power BI and many other Microsoft tools. The query script language of Power Query is known as "M", or the Microsoft Data Mashup language. The menu options and selections in the Power Query Editor interface build this "M" script for you, but to use Power Query at an expert level, you must understand fundamental concepts of the language. This post will step you through some essential mechanics of the language to build the necessary fundamental skills.

What you learn here can be applied to the Power Query implementation of other Microsoft products, which currently include:

  • Power BI Desktop Query Editor
  • Power BI Data Flows
  • Excel “Get & Transform” on the Data ribbon
  • SQL Server Analysis Services (SSAS) Tabular projects in SSDT/Visual Studio, or Azure Analysis Services.
  • Azure Data Factory: Power Query task

By the time you read this, there are likely to be more implementations of Power Query.

Basic Syntax

A quick review of the characters that are used to define or encapsulate groups of objects will help you understand and recognize what you are seeing in the following code examples.

Parentheses, square brackets and set braces all have different meanings. For reference, here they are in the same order:

( ) – Parentheses are used to pass parameters to a function and to control mathematical order of operations.
[ ] – Square brackets encapsulate a record (a set of field names and values).
{ } – Set braces encapsulate a list of values.
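To make these concrete, here is a tiny hand-written query (my own illustration, not from the workshop material) that uses all three characters in one place:

let
    // ( ) passes parameters to a function
    LastTwo = Text.End("Power Query", 2),
    // [ ] defines a record of field name and value pairs
    MyRecord = [Tool = "Power Query", Version = 2],
    // { } defines a list of values
    MyList = { LastTwo, MyRecord[Tool] }
in
    MyList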

Most functions in M are part of an object class (such as Date, Table or Text) and the syntax is something like this:

  • ObjectClass.Function( )
  • ObjectClass.Function( Parameter )
  • ObjectClass.Function( Parameter1, Parameter2 )

Examples:

  • Date.AddYears ( <date value>, <number of years> )
  • Table.FromList ( <list object> )
  • Text.End ( <text string>, 2 ) – returns last two characters from right of string

A few older functions that were part of the initial release of Power Query are not prefixed with the object class name. For example the #date( ) function has three parameters: Year number, Month number and Day number.
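As a quick illustration (the dates and values here are mine, not from the workshop), compare one of those older functions with a class-prefixed function:

let
    // #date(year, month, day) – an older function with no object class prefix
    MyDate = #date(2019, 11, 5),
    // most functions are prefixed with their object class, such as Date.AddYears
    NextYear = Date.AddYears(MyDate, 1)
in
    NextYear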

Any word may be used as a variable. Common examples include step names like Source. Variables are often multiple words containing no spaces, such as ServerName. You are strongly encouraged to rename steps with short but meaningful phrases to make them self-documenting and easy to understand.

Are you with me so far?

The Mighty #shared Function

This little function is literally the key to the entire kingdom. It returns every Power Query object defined in your file, along with every available M object class and function in the entire language, complete with documentation and code examples. This is the best kept secret in Power Query.

Add a new blank query and type the following into the formula bar, and watch while magic things happen:

= #shared

The results are returned as a record of objects and functions. Convert that to a table and you can view the definitions of several hundred objects and functions. The M language contains nearly 900 functions and more are frequently added in new product versions.
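A minimal query that does exactly that (the step names are mine):

let
    Source = #shared,
    // #shared returns a record, so convert it to a two-column table of names and values
    SharedTable = Record.ToTable(Source)
in
    SharedTable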

Click the cell next to the link for any function to peek at the definition or click the link to explore related details.

Getting Started

The following examples are hand-written queries entered into the Advanced Editor in Power Query. None of these queries have an actual data source. Values are generated within the code itself but all of these examples can be applied to real queries based on any data source.

Each of the following queries adds a little more to the previous query to explore progressively more advanced objects and code techniques in the M language.

If you create a new blank query in Power Query, it will generate code like this:

let
Source = ""
in
Source

This is the fundamental pattern to get you started.

Values

Let’s modify the script to give the Source variable a text value of “This is a value” (creative, I know).

Use the Advanced Editor to view the generated M script.

Here’s just the script outside of the Advanced Editor:

//——————————————————-
//#1: Object Types: Value
//——————————————————-
let
Source = "This is a value"
in
Source

Use the Done button to close the Advanced Editor to see the result.

Unless they are explicitly data-typed, variable object and data types are automatically assigned, which you will see in the following examples.

The Source variable represents a value type object that is implicitly typed as text. You can see the resulting value in the query designer:

Records

In M, a record is encapsulated in square brackets. This script generates a single record with two fields.

//——————————————————-
//#2: Object Types: Record
//——————————————————-
let
Source = [Record1 = "Value 1", Record2 = "Value 2"]
in
Source

The result is a single record with two fields:

Lists

The List object in M is denoted using Set braces, sometimes called “squiggly brackets”. The following script creates a list of four text-type values:

//——————————————————-
//#3: Object Types: List of Values
//——————————————————-
let
Source =
{ "Value 1", "Value 2", "Value 3", "Value 4" }
in
Source

The list is shown below. Note the column header in the preview grid reminds us that this is a List object. Also note that the designer shows the List Tools ribbon with options to convert the List to a Table, because that’s what we normally do in Power Query to be able to consume List data.

Now, we can start to combine these concepts and techniques. The following query script creates two records, each containing two columns with corresponding values:

//——————————————————-
//#4: Object Types: List of Records
//——————————————————-
let
Source =
{
[Column1 = "Value 1", Column2 = "Value 2"],
[Column1 = "Value 3", Column2 = "Value 4"]
}
in
Source

The designer shows two records as links. This is because these are not consumable objects. To use them, you can either convert them to a table or expand one record by clicking the link to view the values.

To peek inside a record without adding a new query step, you can click in a cell but not on the link and the value will be displayed in the preview area at the bottom of the screen.

The first time you see a link presented in this way, you are likely to click on it to see what happens. No damage done, but this will generate a step that produces a single record and you will lose sight of the entire list. To go back, just delete the generated step in the Applied Steps list.

Tables

If you use the Convert to Table button on the toolbar, that will add the steps you see in this query.

The Table.FromList function is a good example of several functions that serve a similar purpose. As you look through the functions within the different object classes, you will notice several "From" and "To" functions that convert an object from one type to another. They are easy to identify and their purpose is easy to understand. This one should be obvious: as the FromList function in the Table class, it converts a List type object to a Table. As another example, the ExpandRecordColumn function in the Table class expands a column of records into separate columns.

//——————————————————-
//#5: Object Types: Table
//——————————————————-
let
Source =
Table.FromList(
{
[Column1 = "Value 1", Column2 = "Value 2"],
[Column1 = "Value 3", Column2 = "Value 4"]
},
Splitter.SplitByNothing(), null, null, ExtraValues.Error
),
#"Expanded Column1" = Table.ExpandRecordColumn(Source, "Column1", {"Column1", "Column2"}, {"Column1", "Column2"})
in
#"Expanded Column1"

These steps produce the results you see here. We began with a list of two records, each having two fields. After the conversion steps, we now have a Table object with two rows and two columns, complete with values and headers.

Transforming Each Row

When you use the Transform ribbon or menu options to perform almost any transformation on a column, you are in essence saying "I want to perform this action on the specified column in each row of the table." The key word in this statement is "each" and this is an important distinction in the following M code, which was generated by extracting the last character of the column named Column2.

An example of this generated query is shown in Query #6 below.

//——————————————————-
//#6: Object Types: Transform each row
//——————————————————-
let
Source =
Table.FromList(
{
[Column1 = "Value 1", Column2 = "Value 2"],
[Column1 = "Value 3", Column2 = "Value 4"]
},
Splitter.SplitByNothing(), null, null, ExtraValues.Error
),
#"Expanded Column1" = Table.ExpandRecordColumn(Source, "Column1", {"Column1", "Column2"}, {"Column1", "Column2"}),
#"Extracted Last Characters" = Table.AddColumn(#"Expanded Column1", "Right Char", each Text.End([Column2], 1), type text)
in
#"Extracted Last Characters"

Let’s isolate just that step and examine the M code in the last line of the query:

#"Extracted Last Characters" = Table.AddColumn(#"Expanded Column1", "Right Char", each Text.End([Column2], 1), type text)

The #"Extracted Last Characters" variable is assigned the outcome of the AddColumn function. We know that the return object is a table because functions in the Table object class always return a Table type object. The AddColumn function takes three required parameters (plus an optional column type, used here to type the new column as text):

  • A table type object (usually the name of the previous step variable)
  • Name of the new column as text
  • An expression that returns a value for the new column

The third argument in this case is an expression built on the Text.End function. You might guess that this is going to return a text type value. The End function, similar to the RIGHT function in Visual Basic, returns a specified number of characters from the end of a text string. But, since Column2 exists in every record of the table returned by the #"Expanded Column1" variable, how do we specify that we want to do this for each record, one at a time? …You got it. We use "each".

Note the Right Char column in the following results. It contains the right-most character from Column2:

Transforming Columns In-place

Rather than creating a new column containing a value extracted from a previously created column, you can also modify an existing column. This is done using the TransformColumns function of the Table class. Again, we’re looking at the last step of the following query that was generated by using the Extract menu on the Transform ribbon or menu:

//——————————————————-
//#7: Object Types: Transform each row col in place
//——————————————————-
let
Source =
Table.FromList(
{
[Column1 = "Value 1", Column2 = "Value 2"],
[Column1 = "Value 3", Column2 = "Value 4"]
},
Splitter.SplitByNothing(), null, null, ExtraValues.Error
),
#"Expanded Column1" = Table.ExpandRecordColumn(Source, "Column1", {"Column1", "Column2"}, {"Column1", "Column2"}),
#"Extracted Last Characters" = Table.TransformColumns(#"Expanded Column1", {{"Column2", each Text.End(_, 1), type text}})
in
#"Extracted Last Characters"

Here is the last step isolated:

#"Extracted Last Characters" = Table.TransformColumns(#"Expanded Column1", {{"Column2", each Text.End(_, 1), type text}})

Instead of the column name being passed to the Text.End function, we see an underscore. This is just a shorthand reference that means "use the value of the column currently being transformed".
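Under the covers, "each" simply introduces a function of one unnamed parameter and the underscore refers to that parameter. This small sketch (my own example) shows the two equivalent forms:

let
    // "each expression" is shorthand for a one-parameter function: (_) => expression
    ShortForm = each Text.End(_, 1),
    LongForm = (_) => Text.End(_, 1),
    // both calls return "2"
    Result = { ShortForm("Value 2"), LongForm("Value 2") }
in
    Result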

Creating a Table from Records

Now back to completely hand-written code… This one's easy.
By default, new columns are data-typed as "Any", which is essentially an unclassified data type, and leaving them that way is not a best practice. Columns should be strongly typed so that grouping, aggregation and DAX functions work correctly in the data model.

In the following query, we apply the TransformColumnTypes function from the Table class. The function requires a table and at least one column name and data type reference. You can pass any number of columns to this function, each encapsulated with set braces, and with another set of braces around the whole thing.
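Here is a sketch of that pattern, reusing the product records that appear in the final example of this post (the step names and column types are my assumptions, not taken from the original screenshot):

let
    Source = Table.FromRecords(
        {
            [Product ID = 123, Product Name = "Oranges"],
            [Product ID = 124, Product Name = "Lemons"],
            [Product ID = 125, Product Name = "Apples"]
        }
    ),
    // apply a data type to each column: a list of {column name, type} pairs inside outer braces
    #"Changed Type" = Table.TransformColumnTypes(
        Source,
        { {"Product ID", Int64.Type}, {"Product Name", type text} }
    )
in
    #"Changed Type"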

Here, you can see the results: one table with headers, two columns and three records. This all looks good.

Creating a Table from a Set of Records

Now, we’re going to start putting things together by using object type variables. I think you’ll agree that the code is much easier to read.
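A sketch of that variable-based version looks like this – it is essentially the final query below without the record extraction:

let
    // each record is assigned to its own variable
    Rec1 = [Product ID = 123, Product Name = "Oranges"],
    Rec2 = [Product ID = 124, Product Name = "Lemons"],
    Rec3 = [Product ID = 125, Product Name = "Apples"],
    // the records become a list, and the list becomes a table
    MySet = { Rec1, Rec2, Rec3 },
    MyTable = Table.FromRecords( MySet )
in
    MyTable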

And we see similar results as before but with a much cleaner approach and readable code.

Extracting a Record from a Table

As a final assignment, I need to get only one record, so I essentially need to unwind the objects that were used to assemble this table. I want Lemons, so I need to figure out the ordinal record number within the table's record collection.

Since there are three records, an ordinal (index) number is automatically assigned to each. Sometimes you just have to guess, if you don't already know, whether the first record starts at zero or one. It turns out that all these collections are zero-based, so if you want to identify the second record (for Lemons) in the table, its index is 1.

Here’s the syntax:

let
Rec1 = [Product ID = 123, Product Name = "Oranges"],
Rec2 = [Product ID = 124, Product Name = "Lemons"],
Rec3 = [Product ID = 125, Product Name = "Apples"],

MySet = { Rec1, Rec2, Rec3 },

MyTable = Table.FromRecords( MySet ),
MyLemons = MyTable{1}

in
MyLemons

So what's all the fuss… what does this actually do?

I’m glad you asked. Let’s take a look.

Isn’t she a beauty?

It’s the best looking lemon I’ve seen all day.

Now you've seen the mechanics of how M language functions can be used to perform very simple value manipulations and transformations. More importantly, you have learned about the relationship between important objects like Values, Records, Lists of Values and Lists of Records, oh – and let's not forget – Tables. That's the one that actually does the work in a data model.

I hope this has helped you to broaden your horizons with Power Query and the M language. Please comment with your feedback.

Chat with Donald Farmer at PASS Summit 2019

Enabling the modern data analyst by becoming shopkeepers rather than gatekeepers of information

I appreciated the opportunity to sit down and talk with one of my heroes in the industry. Donald Farmer has been a recognized leader in the Business Intelligence and Analytics community for the past two decades and his thought leadership is even more relevant today. He played a key role in shaping the Microsoft BI toolset as a Principal Program Manager from 2001, when the products we use today were in their infancy. He has also worked outside of Microsoft circles, which gives him a broad and unique industry perspective. As the Vice President of Innovation and Design at Qlik Technologies, he brought Qlik Sense to market. He remains a friend and advocate for the community. Donald holds multiple patents and has published multiple books. Today he is doing his own thing as the Principal of TreeHive Strategy.

The chat was 9:41 long and is worth watching every second. I have added captions to highlight a few key points which are also listed below. Please enjoy.

Key Points

Route Knowledge is a view of information at a micro level, like turn-by-turn directions. This might be important when you only need to answer a single question without a broad understanding of the entire landscape.
By contrast, Survey Knowledge is a view of information in the context of the bigger picture. Someone with Survey Knowledge knows "how the whole thing hangs together".

Serving the needs of the modern Business Data Analyst requires a change in attitude. We should become provisioners of information rather than protectors of information. In other words, become shopkeepers rather than gatekeepers.

Actionable data is not always detail data, and consuming a summary can be not only easier but more useful than getting lost in the details and the accuracy or precision of a number. Drilling to detail in context is a natural part of the exploration process. He talked about a pattern in the process of providing actionable business information, which includes these stages:

  1. Orientation
  2. Glimpsing
  3. Examination
  4. Acquisition
  5. Action

I appreciate Donald taking the time to both speak at PASS Summit and to chat with me for this blog series. Please watch my blog for additional interviews and insights from PASS Summit 2019.

Next Level Power BI – SVG Databars

Power BI table and matrix visuals can render SVG object elements in a column or measure to display in-line web content. You can build dynamic content like KPI indicators, databars and sparklines by altering SVG element content using DAX expressions. In this post, I will show you how to create databars that show negative and positive values with different color fills based on category values. Of course, Power BI has built-in databars in the conditional formatting options for numeric columns but you cannot alter the color and other attributes dynamically – at least not yet. For the time being, this technique provides more flexibility than the built-in databar feature.

Read This First

In this post from earlier this week, I introduced a simple technique to produce symbols and graphics using columns or a measure to render SVG images in a table visual. I recommend that you read that post to understand the simple implementation of this technique.

Dynamic Databars

Instead of a graphic symbol, the databar really just consists of a rectangle that has a variable start and end position. I want to change the fill color based on whether the value is negative or positive, and based on a category value in my data. You could apply any imaginable business logic to change the color or other attributes. I also want to layer formatted text over the top of the databar rectangle. Check out the example below showing a series of records. My simplified demo dataset causes the same value to repeat across the four Flintstones characters, but this DataBar measure could be applied to any set of data. Notice that negative values are red and the bar "starts" at the center, extending proportionally farther to the left of center for larger negative values. The same is true for positive values, which start in the center of the cell and extend to the right.

Before we examine the code to generate the dynamic databar, let’s look at the rendered SVG element for a couple of these cells. If I hover over the cell for ID 1 and “Barney”, the SVG definition is displayed in the tooltip.

One fun thing about writing this blog post and showing you fragments of HTML script is that it is a little challenging to copy and paste the script into this post without it being interpreted and rendered as a graphic. This is one reason I’ve elected to use screen capture images.

Examine the above script and you will see the "<rect" element opening tag followed by attributes that specify the x, y, width and height values. These attributes draw the rectangle starting at 79 pixels from the left-most edge of the viewBox (the left side of the cell), 80 pixels tall and 181 pixels wide. Keep in mind that the graphic gets rescaled with the table, so these are only relative measurements. The fill property of the style attribute fills the rectangle with the color Red.

There is also a "<text" element with similar attributes that places centered text (anchored to the middle of the viewBox) at the mid-point of the viewBox width, 300 pixels. For simplicity, there are a few attributes I'm not mentioning, but it's fairly simple to recognize them and figure out what's going on.

Hovering over the cell for ID 9 and “Barney” with the value 1495, the x and y axis values cause a DodgerBlue filled rectangle to start at 260 pixels and to be 236 pixels wide.

Show Me the DAX

The DAX code for the DataBar measure is below. The actual expression code is fairly simple but it took a little trial-and-error to work out the measurements and scaling logic. Rather than building the expression in-line, I've invested work up-front to put all the values into variables. First, we get the minimum value for all the selected ID rows (using the MINX iterator function with the ALLSELECTED filtering function) and store that in the MinValue variable. The MaxValue variable is similar, with an added condition to handle negative or positive values. This may all seem a bit verbose but it's really just math.

There is a disconnected table in my sample data model named Flintstone Characters with character names that I place onto the Columns axis of the matrix visual. In a real-world solution, this could be a field like product categories, geographic regions or anything else. The BarFill variable is set to a color value (which could be either a web color name or Hexadecimal value) based on the selected character name. When grouping a visual on a value, the SELECTEDVALUE function will return a single value. After resolving all the variable values, this measure emits the SVG markup similar to the examples shown previously.

DataBar =
VAR MinValue = MINX(ALLSELECTED('Values'[ID]), CALCULATE(SUM('Values'[Value])))
VAR MaxValue = MAXX(ALLSELECTED('Values'[ID]), CALCULATE(SUM('Values'[Value]))) + IF(MinValue < 0, ABS(MinValue), 0)
VAR BarValue = SUM('Values'[Value])
VAR FormattedValue = FORMAT( BarValue, "#,##0" )  // alternate formats: "#,##0.00", "#,##0.0 %"
VAR svgWidth = 600
VAR svgHeight = 200
VAR svgFontStyle = "normal 70px sans-serif"
VAR BarHeight = 80
VAR BarRange = ABS(MinValue) + MaxValue
VAR BarValueFactor = DIVIDE(svgWidth, BarRange)
VAR BarWidth = INT( ABS(BarValue) * BarValueFactor )
VAR BarZero = INT( DIVIDE(ABS(MinValue), MaxValue) * svgWidth )
VAR BarStart = IF(BarValue < 0, BarZero - BarWidth, BarZero)
VAR BarOpacity = "0.5"
VAR BarFill = IF( BarValue < 0, "Red",
    SWITCH( SELECTEDVALUE('Flintstone Characters'[Character]),
        "Fred", "DarkOrange",
        "Wilma", "Purple",
        "Barney", "DodgerBlue",
        "Betty", "DarkGreen"
    )
)
// The SVG strings below follow the <svg>, <rect> and <text> structure described above;
// the exact attribute values are illustrative.
VAR svg_start = "data:image/svg+xml;utf8,<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 " & svgWidth & " " & svgHeight & "'>"
VAR svg_end = "</svg>"
VAR svg_rect = "<rect x='" & BarStart & "' y='" & (svgHeight - BarHeight) / 2 & "' width='" & BarWidth & "' height='" & BarHeight & "' style='fill:" & BarFill & ";fill-opacity:" & BarOpacity & "' />"
VAR svg_text = "<text x='" & svgWidth / 2 & "' y='" & svgHeight / 2 & "' text-anchor='middle' style='font:" & svgFontStyle & "'>" & FormattedValue & "</text>"
// x = half of the width when text is centered
RETURN
svg_start & svg_rect & svg_text & svg_end

download this code

Like in the Symbols example in the earlier post, you must select the measure and set the Data Category in the Modeling ribbon to Image URL. After that, the measure can be dropped into a matrix visual grouped on the ID and Character fields, to produce the databar visuals I showed in the first example above.

It Takes a Village

A few experts in the community have created SVG sample projects with various techniques. David Eldersveld has blogged about generating SVG images in this series. When I started working out the code for the DataBar technique, I reached out to some friends in the MVP community and Darren Gosbell was kind enough to help out. In the process, Darren discovered a bug (or an undesired behavior) in Power BI that ignores the aspect ratio directive in the SVG element and draws a square image even when you don't want it to. He posted a suggestion to the Power BI Ideas forum and I encourage you to go vote to help get this fix prioritized by the Power BI product team. Until this is corrected, the technique is useful but will take up more vertical space than you might prefer.

Creating in-line Sparklines

On a related topic, Brian Grant has developed a method to produce SVG trend line charts, commonly known as “sparklines”, using a creative string replacement technique. You can find a tutorial for his solution here.

download sample PBIX file

Next Level Power BI – SVG Symbols

With so many different methods to visualize data – just within Power BI – this can be a daunting choice. Let’s see… there are a plethora of standard visuals that can be used in creative ways. A software developer can create a custom visual if they have the time and chops for that kind of thing. You can use a custom visual generator tool like Charticulator or the Chart designer from PowerBI.Tips. And, you can render unique visuals for statistical analysis using R or Python. The technique I’m going to demonstrate in this series doesn’t require a lot of work or special skills so it is relatively simple and easy to use.

In this first post in a series called "Next Level Power BI", I demonstrate how to create a simple graphical symbol as a Scalable Vector Graphic – just to get started. In a later post, we will expand the technique to create dynamic databars. The same technique can be used to create icons, sparklines and practically any imaginable graphic with a calculated column or measure.

I’ll start with a simple example. Let’s say that you need to display a set of symbols to correspond to a data condition of some kind. For simplicity, let’s just start with the graphics. Here are examples of three symbols displayed in a table visual on a Power BI report page:

Finding the SVG markup for these symbols can be as simple as a web search. You can also create your own using vector graphic editing tools like Adobe Illustrator. The markup for these three symbols is shown here in Power Query. In different Power BI solutions, I have stored and exported the SVG markup in Excel. It's just text, so you can store it any way you prefer. All three symbols are defined as a series of spatial points in the path element, but you can draw shapes, text and a variety of things. There are several good SVG references on the web but I found this collection from Jakob Jenkov to be particularly useful: http://tutorials.jenkov.com/svg/index.html

In this solution, I just used the Enter Data feature and then copied and pasted the SVG markup directly into the query – one row per image.
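A hand-written equivalent of that query might look like the sketch below. I'm using a #table literal instead of the compressed text that the Enter Data feature actually generates, and the image names and path data are placeholders rather than the markup from the original screenshots:

let
    // one row per image: an image name and its raw SVG markup
    Source = #table(
        type table [#"Image Name" = text, #"SVG Markup" = text],
        {
            { "Spreadsheet", "<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 32 32'><path d='...'/></svg>" },
            { "Chart", "<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 32 32'><path d='...'/></svg>" }
        }
    )
in
    Source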

We're not quite done, because Power BI requires the following header to be added to the beginning of each graphic definition:

data:image/svg+xml;utf8,<?xml version="1.0" encoding="iso-8859-1"?>

You can approach this a few different ways but I added a custom column named Definition and then removed the original column using this expression:
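As a sketch, assuming the pasted markup lives in a column named SVG Markup (a placeholder name on my part), the custom column expression would look something like this:

= "data:image/svg+xml;utf8,<?xml version=""1.0"" encoding=""iso-8859-1""?>" & [SVG Markup]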

After closing and applying the query, to render the SVG image in the Definition column within a report, the column must be categorized as an Image URL. In the report designer, select the column in the Field list and then choose the Data Category on the Modeling ribbon.

The column is now usable as an image in a table or matrix visual, which should appear like the first screen image above.

To display only a single image, you can create a measure like this one that returns only one value from the column:

Spreadsheet Symbol =
CALCULATE( MIN( 'Images'[Definition] ),
'Images'[Image Name] = "Spreadsheet"
)

The measure must also be categorized as an Image URL and then it can be used as an individual symbol or image in a report within a table visual.

That covers the bare bones basics of SVG image columns and measures. There is so much that you can do, expanding on this technique by injecting dynamic logic into the measure code. I’ll cover that in a subsequent post.


So, I wrote a bedtime book now?!

Is the 2010 first edition of my SQL Server Reporting Services Recipe Book so snooze-worthy that it is now being used as a bedtime story? Huh uh!

Au contraire, this book contains so much of what real life has to offer. So much insight and deep, deep learning opportunities that instruct, educate, train and prepare an industry professional to take their report designs to new heights, to inspire and to elevate the reader by using report design techniques never attempted before. All on a rock-solid foundation of tried-and-true report design best practice techniques that will totally rock your world and change the way you think about reporting! These are the sort of insights contained within the pages of this exciting book about SSRS report recipes.

So, what is this business about Paginated Report Bear wanting to read my world-changing book as a bedtime story? Really… well, that's not going to work!

Oh, Chris (Finlan), my friend… why oh why do you torment me like this?

Matthew, we need to talk about your Dad and his wayward little stuffed friend!