What I’m going to show you is not rocket science. This one gets filed under the category of “well, duh. That was easy …now that I see how to do it.”
Power View sets the maximum value for a chart series based on the maximum unfiltered axis value; pretty simple but not too flexible. Let’s say that I want to show the progress of project completion. If I add a percentage type measure to the Value of a column chart with no fields on the Axis, it will plot a single column “gauge” chart with the maximum value (in this case 65%) somewhere near the top of the chart. I suppose that’s what we asked it to do but there is no way to manually set the maximum range of the axis scale. Oh, what to do?
Like most beyond-the-basics capabilities in Power View, the answer is to enhance the semantic model – and this one’s easy. In the PowerPivot or SSAS model, create another measure, right next to the existing percentage calculation named “Project Pct Complete”. The objective is to get the scale of the gauge visual (our no-axis column chart) to add up to 100%, so we need a second measure that, when added to the first, always totals 100%. I’ll call that calculation “Remainder Pct Complete” and use the DAX expression:
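The expression itself was shown in a screenshot that isn’t reproduced here; based on the description that follows, a sketch of it would look like this (the measure name “Project Pct Complete” is taken from the text):

```dax
-- Sketch: the complement of the completion percentage,
-- so the two measures always sum to 100%
Remainder Pct Complete := 1 - [Project Pct Complete]
```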
By subtracting the % Complete from one (that is, 100%), the result will always be the remaining portion of 100%. I’ll format the new measure as a percentage.
Now I add both measures to the chart/gauge like so:
I haven’t quite got the legend to work the way I want it to. In this case, I’d probably just hide it and maybe show data labels for the two series values. That’s all up to you, of course. For extra credit, let’s add another progress gauge that shows the overall progress even when a slicer is used to filter the data. First, test this: I’ve added a Priority slicer, and selecting one item or a combination of items slices the project rows and changes the progress gauge value.
Back in the model, I add two more measures, starting with the existing expressions. You can just copy one of the measure cells and paste it into an empty cell; this must be done one at a time. The expression is enhanced using the CALCULATE and ALL functions, like this:
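The enhanced expression was shown in a screenshot; based on the explanation below, a sketch of it (measure and table names assumed from the text) would be:

```dax
-- Sketch: ignore slicer/filter context and always
-- calculate over all rows of the Projects table
Overall Pct Complete :=
CALCULATE ( [Project Pct Complete], ALL ( Projects ) )
```

The companion “remainder” measure for the overall gauge would follow the same pattern.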
This says, “Apply the Project Pct Complete measure calculation to whatever filter context is provided in the CALCULATE function.” In this case, we are saying, “ignore the natural row or filter context of the query and always apply the calculation to all rows in the Projects table.”
Now, when a slicer or filter is used on the report, the Overall gauge continues to show the same percentage while the rest of the report, including the Progress gauge shows only filtered values.
A client came to me this week with a problem that, although simple in concept, posed an interesting challenge that I hadn’t solved before using a tabular model and DAX. She had cost information for several hospitals in a PowerPivot model. Using an Excel PivotTable published to a SharePoint site, users were to select any combination of hospitals using a slicer and compare relative costs and other measure values for those facilities. After working on the problem, I have two working solutions. I don’t think one is necessarily better or worse than the other, but each has advantages. Thanks to Rob Collie and Marco Russo for their contributions as we’ve discussed potential solutions.
In production, the model has multiple tables and a lot of cost information but the essence of the problem is represented in this sample:
A fact table contains cost information and a HospitalID key, which is related to a lookup table of hospitals. Now the challenge is to allow users to select two different hospitals and return the name and related measure values for the selected hospitals so we can perform comparisons on the same row. Later, in production, users will select 3 and maybe up to 6 hospitals for comparison. My example only shows two. My first thought was to just add the hospitals to rows in the PivotTable before the measures but they have some specific statistical calculations to perform with a group of selected hospitals that can’t be done if these are rows in a PivotTable.
Solution #1: One slicer per selection
I added two copies of the Hospital table to the model without any relationships. These are “disconnected tables,” meaning that including them in queries has no effect on filtering data in other tables unless they are used explicitly in a calculation. They will be used to drive slicers and filters, and in special calculations.
In an Excel PivotTable or Power View report, a slicer is created based on each of these two tables.
Calculated measures are added to the fact table to calculate and filter the existing Total Cost measure. Of course, similar calculated measures could be created to calculate any other measure as well.
Selected 1 Total Cost:=CALCULATE([Total Cost], FILTER(Cost, [HospID]=MIN(SelectedHospital1[HospID])))
Selected 2 Total Cost:=CALCULATE([Total Cost], FILTER(Cost, [HospID]=MIN(SelectedHospital2[HospID])))
I used the MIN function only to make the expression syntactically correct, because the HospID field in the fact table can only be compared to a single value from the lookup table. Functionally, this doesn’t really do anything if the user chooses one value in the first slicer; if they happen to select more than one item, it will use the one with the smallest HospID value. This technique works fine with numeric values, but aggregate functions can’t be used with text-type fields. To return the selected hospital name in another calculation, I used the FIRSTNONBLANK function. Notice the required second argument, normally used to filter values in a field; in this case, I just used an expression that always evaluates to True:
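The name-returning measure was shown in a screenshot that isn’t reproduced here; a minimal sketch, assuming the disconnected table is named SelectedHospital1 with a Hospital name column, might look like:

```dax
-- Sketch: FIRSTNONBLANK returns the first (lowest) value in the column;
-- the second argument must be supplied, so an always-true expression is used
Selected 1 Hospital :=
FIRSTNONBLANK ( SelectedHospital1[Hospital], TRUE () )
```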
A simple calculation is used to calculate the difference between the first selected item cost and the second selected item cost:
Selected Diff Total Cost:=[Selected 1 Total Cost]-[Selected 2 Total Cost]
In an Excel PivotTable, I add a slicer for each of the two lookup tables and the calculated measures for the values. The final product is easy to use. We select any hospital from the first slicer and any hospital from the second slicer to compare costs between them:
Solution #2: One slicer to select multiple items
This solution is a little more elegant in my opinion. I need to give credit to Marco Russo for working out the DAX calculations for this. He has a gift for looking at a problem differently and finding a unique solution. Rather than using one table for each selected facility, one table is used to drive a multi-select slicer:
If this were a multidimensional model, the problem could be addressed in MDX by using the ITEM property for a set of members. For example, to return the second selected hospital in a calculated member expression, an expression like this could be used: [Selected Hospital].[Hospital Name].ITEM(1)
…but DAX doesn’t work that way. We think in terms of Rows and not Sets and there is no Items collection.
To get the first selected hospital, we can use the TOPN function to get the first item in the set of rows returned by this table. This expression calculates the total cost for that item:
The TOPN function needs 4 arguments: the first tells it how many rows to return (in our case only 1… the first row), the second is a table (the VALUES function turns a bunch of field values into a table), the third is the Order By expression, and the fourth argument specifies descending or ascending sort order.
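Putting that together, the first-item measure might be sketched like this (table and measure names are assumed from the earlier examples, since the original screenshot isn’t available):

```dax
-- Sketch: TOPN returns a one-row table containing the lowest
-- selected HospID, which CALCULATE then uses as a filter
Selected 1 Total Cost :=
CALCULATE (
    [Total Cost],
    TOPN ( 1, VALUES ( SelectedHospitals[HospID] ), SelectedHospitals[HospID], 1 )
)
```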
To return the second selected item…
Since DAX has no way to just go get a specific row by row number or ordinal position, now we have to get creative. Marco’s solution is to use two nested TOPN expressions. The inner expression gets the first two selected rows, sorted by HospID, using the expression: TOPN( 2, VALUES( SelectedHospitals[HospID]), SelectedHospitals[HospID], 1 ). The outer TOPN expression then returns the top row (the larger HospID) from that result, which is the second selected item:
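The full measure was shown in a screenshot; a reconstruction based on the description above (names assumed, and hedged since the original isn’t available) would be:

```dax
-- Sketch: the inner TOPN takes the two lowest selected HospIDs
-- (order argument 1 = ascending); the outer TOPN, with order
-- argument 0 (descending), keeps the larger of the two,
-- i.e. the second selected hospital
Selected 2 Total Cost :=
CALCULATE (
    [Total Cost],
    TOPN (
        1,
        TOPN ( 2, VALUES ( SelectedHospitals[HospID] ), SelectedHospitals[HospID], 1 ),
        SelectedHospitals[HospID],
        0
    )
)
```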
Power BI: a new suite of Business Intelligence tools
Over the past few months, teams at Microsoft have made several new Business Intelligence tools available for preview; some only privately and some to the public. The entire suite will soon be available for either public preview or release under the new name: “Power BI”. All of the components of Power BI are listed below, but the big news is a new hosted offering called “Power BI for Office 365” and “Power BI Sites”. The announcement was made at the Worldwide Partner Conference this week. Users can sign up to be notified when the new offerings reach general availability, apparently in the very near future. I’ve had an opportunity to work with early, pre-released versions, and it has been interesting to see the gaps being filled a little at a time. On the heels of the new suite, some of the names of existing products are also being changed. It’s hard to have a conversation about the collection of Microsoft’s “Power”/”Pivot”/”Point”-named tools and not get tongue-twisted, but these changes bring more consistency.
Bottom line: this is good news and a promising step forward – especially for smaller businesses. Larger, enterprise customers should know that this move is consistent with Microsoft’s “cloud first” philosophy, and these capabilities are being introduced through the Office 365/Azure platform, with connectivity required. Read the commentary on community leaders’ sites below. I have no doubt that there will be a lot of discussion on this in the weeks to come, with more announcements from Microsoft in the near future.
When Power View was released with SQL Server 2012 Enterprise and Business Intelligence Editions, it was available only when integrated with SharePoint 2010 Enterprise Edition. This is a good solution for enterprise customers, but it was complex and expensive for some to get started with. Power View was also offered only as a Silverlight application that wouldn’t work on many mobile devices and web browsers. For this reason, Power View has really been viewed as a “Microsoft only” tool, and only for big companies with deep pockets and very capable IT support groups. Even the new Power View add-in for Excel 2013 ProPlus Edition requires Silverlight, which is not a show-stopper for most folks but a hindrance for multi-platform and tablet users. This all changes with the new offering, as the Power View visualization tool in the hosted product comes in three new flavors: a native Windows 8 app (runs on desktop, Surface RT & Pro), native iOS (targeting the iPad) and HTML5 (works on practically any newer device). This means that when you open a Power View report on your Surface or iPad, it can run as an installed app with all the cool pinch-zoom and gestures you’ve come to expect on a tablet device. For now, this is good news for the cloud user, as no on-premises option is currently available. An interesting new addition will be the introduction of a semantic translation engine for natural language queries, initially for English.
Formerly known as “Data Explorer”, this add-in for Excel 2013 allows you to discover and integrate data into Excel. Think of it as intelligent, personal ETL with specialized tools to pivot, transform and cleanse data obtained from web-based HTML tables and data feeds.
This Excel 2013 ProPlus add-in, which was previously known as “GeoFlow”, uses advanced 3-D imaging to plot data points on a global rendering of Bing Maps. Each data point can be visualized as a column, stacked column or heat map point positioned using latitude & longitude, named map location or address just like you would in a Bing Maps search. You can plot literally thousands of points and then tour the map with the keyboard, mouse or touch gestures to zoom and navigate the globe. A tour can be created, recorded and then played back. Aside from the immediate cool factor of this imagery, this tool has many practical applications.
The big reveal is that “PowerPivot” shall now be known as “Power Pivot”; note the space, added so that the name is consistent with the other applications. We all know and love this tool, an add-in for Excel 2010 and Excel 2013 ProPlus (two different versions with some different features) that allows large volumes of related, multi-table data to be imported into an in-memory semantic model with sophisticated calculations. On a well-equipped computer, this means that a model can contain tens of millions of rows that get neatly compressed into memory and can be scanned, queried and aggregated very quickly. Power Pivot models (stored as an Excel .xlsx file) can be uploaded to SharePoint, where they become a server-managed resource. A Power Pivot model can also be promoted to a server-hosted SSAS Tabular model, where data is not only managed and queried on an enterprise server but also takes on many of the features and capabilities of a classic SSAS multidimensional database. Whether a Power Pivot model is published to a SharePoint library or promoted to a full-fledged SSAS Tabular model, the data can be queried by any client tool as if it were an Analysis Services cube.
For now, Power View in Excel 2013 ProPlus and Power View in SharePoint 2010 Enterprise and SharePoint 2013 Enterprise remain the same – the Silverlight-based drag-and-drop visual analytic tool. With the addition of SQL Server 2012 CU4, Power View in SharePoint can be used with SharePoint-published Power Pivot models, SSAS Tabular models and SSAS Multidimensional “cube” models. There has been no news yet about a non-Silverlight replacement for the on-premises version of Power View. The Microsoft teams and leadership have heard the requests and feedback, loud and clear, from the community, and we can only guess that more is in the works, but I make no forecast or assumptions about the eventual availability of an on-premises offering similar to Power BI for Office 365.
I’m very excited to see my first feature article published in SQL Server Pro Magazine titled Custom Programming to Enhance SSRS Reports; How to write a custom assembly and use it to build a dynamic report dataset. The article was posted online in April and featured in the July printed and electronic edition. SQL Server Pro (formerly known as “SQL Server Magazine” or “SQLMag”) is published by Penton Media and is the largest publication for the SQL Server community. Please read the article, download the code, work through the exercise and let me know if you have comments or questions.
I posted an early draft of this article on my blog last year, titled Using Custom Assemblies in Reports to Generate Query Logic (parts 1 and 2). The code was cleaned up and tech edited in the new article, which I recommend as the most reliable source (not that I write bad code, mind you, but it never hurts to have a formal tech review).