Demystifying the Power BI XMLA Endpoint

When the Power BI product team began promoting the new XMLA endpoint connectivity for Power BI, I thought it could be a game changer for IT-driven, enterprise-class BI solutions. Now that I have used it in a few real solutions and am actively working with clients, I’m sharing my experience about how it works and what you can do with it. The read-only endpoint has been generally available for Premium capacities and the read/write endpoint is currently in preview.

Before my long-winded introduction, I’ll get to the point:
Using the XMLA endpoint, the Power BI Premium service now combines the capabilities of SQL Server Analysis Services and Azure Analysis Services with the newer data modeling capabilities of Power BI. Data models published to the Power BI service now support version control, scripted builds, team application life cycle management, enterprise IT tooling and scripted object management.

…wait, what? How?

Let’s just start by reviewing some of the challenges that have existed in the Power BI platform prior to the availability of this capability:

  • SSAS/AAS enterprise features
    Buried deep within the Power BI cloud service is the SQL Server Analysis Services (SSAS) Tabular “VertiPaq” in-memory analytics database engine. The SSAS engine and on-prem product itself has many useful features not exposed in the Power BI implementation of data models and datasets. These “enterprise-class” features are numerous, including object-level scripting for deployments, source control, application life cycle and continuous integration/build management, data partitioning, perspectives, translations, currency conversion, KPI definitions, measure groups and calculation groups.
  • SSAS/AAS redundant services & costs
    At one point, Azure Analysis Services was a superset of Power BI’s data modeling features, but now many exclusive features are available in Power BI datasets in a Premium capacity workspace, so the choice between the two isn’t so clear. Power BI and AAS also have separate costs. As the Power BI service continues to evolve, many new and compelling features are available only in Power BI datasets and not in Analysis Services, such as hybrid mixed-mode data models, aggregations, incremental refresh policies, dataflows and AutoML.
  • Source control, Application life cycle, builds & Team data model development
    Power BI Desktop is a convenient tool for creating an entire Power BI solution from soup to nuts, but it isn’t optimized for IT-scale projects. A single PBIX file contains connections, queries, data model objects, measures and report pages. The PBIX file structure cannot be easily parsed, compared, branched or merged by established source control and build management tools. Analysis Services projects, on the other hand, support Microsoft development tools like Visual Studio, Azure DevOps and SQL Server Management Studio. Several mature third-party development tools like Tabular Editor, DAX Studio and the ALM Toolkit enhance or exceed the features of Microsoft’s development suite.

The XMLA endpoint in Power BI Premium bridges this divide by exposing the underlying Analysis Services database instances in the Power BI service. It really is as simple as that. Each workspace is in fact an SSAS instance and each Power BI dataset is an SSAS database. What you can do with traditional SSAS through management and development tools, TMSL or XMLA scripts, PowerShell or API automation, you can now do with Power BI.
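
To make that tangible, here is a minimal PowerShell sketch that treats a Premium workspace like any other Analysis Services server. It assumes the SqlServer PowerShell module is installed; the workspace address and the DMV query are illustrative placeholders, and authentication options vary, so check the current documentation for your scenario.

```powershell
# Minimal sketch: connect to a Premium workspace over XMLA and list its datasets.
# Assumptions: SqlServer module installed; workspace name below is a placeholder.
Import-Module SqlServer

# XMLA endpoint address copied from the workspace settings
$server = "powerbi://api.powerbi.com/v1.0/myorg/Enterprise Models"

# An Azure AD account with access to the workspace (non-MFA in this simple case)
$cred = Get-Credential

# Each dataset in the workspace shows up as an Analysis Services database
Invoke-ASCmd -Server $server -Credential $cred `
    -Query "SELECT [CATALOG_NAME] FROM `$SYSTEM.DBSCHEMA_CATALOGS"
```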

Use a Premium capacity workspace

If you have Premium capacity set up in your tenant, you are good to go. Several of our consulting clients do, but to experiment and test new features I use my own private tenant. In lieu of paying for Premium capacity, I can set up an embedded capacity in the Azure portal. An embedded capacity sized to A4 is equivalent to a P1 Premium capacity, but it can be paused when I don’t need to use it. The cost is about $8.50 per hour, so MAKE SURE TO PAUSE WHEN DONE.

Here’s the Power BI Embedded capacity I created in my Azure tenant, sized to A4 and currently paused. It takes about a minute to start or pause the embedded service.
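
If you take the same approach, starting and pausing the capacity can also be scripted instead of clicking through the portal. Here’s a small sketch assuming the Az.PowerBIEmbedded PowerShell module; the capacity and resource group names are placeholders.

```powershell
# Sketch: resume and pause a Power BI Embedded (A SKU) capacity from PowerShell.
# Assumptions: Az.PowerBIEmbedded module installed; names below are placeholders.
Import-Module Az.PowerBIEmbedded
Connect-AzAccount

# Start the capacity before a working session
Resume-AzPowerBIEmbeddedCapacity -Name "pbidevcapacity" -ResourceGroupName "PowerBI-RG"

# ... do the work ...

# Pause it again so the hourly charge stops
Suspend-AzPowerBIEmbeddedCapacity -Name "pbidevcapacity" -ResourceGroupName "PowerBI-RG"
```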

After starting the capacity, I jump over to the Power BI Admin Portal and click the capacity name to change its settings. Note that because my capacity was added as an embedded service, it shows up on the Power BI Embedded page, but the settings are the same as for a Premium capacity.

Enable the Read/Write endpoint

By default, the XMLA endpoint will be set to Read Only. Under the Workloads group on the Capacity settings page, switch XMLA endpoint to Read Write and then save the changes.

Now any workspace set to use Premium capacity can be accessed through an XMLA endpoint address. I’ve created a workspace in my tenant to test migrating an SSAS tabular database to a Power BI dataset. I select the workspace in the Power BI portal and, in the workspace settings, make sure that Dedicated capacity is switched on. I know that it is because of the diamond icon next to the workspace name. The Workspace Connection address is shown below it; I click the Copy button to get the address onto my clipboard.

Now I can use that address in any tool that knows how to connect to SSAS Tabular at compatibility level 1450 or above. Let’s try to connect using SQL Server Management Studio; I need a newer version of SSMS, 18.4 or higher.

Connect to Analysis Services and paste the XMLA endpoint address as the Server name. You need to use Azure Active Directory authentication. If your organization uses MFA, you can use that option, but I will choose regular AAD authentication.

…and voila! Connected.

I can run queries and scripts here in DAX, MDX, XMLA or TMSL. Most, but currently not all, capabilities are supported in the preview. In particular, if you have RLS roles, the role members must be dropped and then added back in the Power BI portal.
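
For example, the same TMSL refresh script you would run against SSAS or Azure AS can be sent to a published dataset through the endpoint. Here’s a sketch using the SqlServer module’s Invoke-ASCmd as before; the workspace and dataset names are placeholders.

```powershell
# Sketch: send a TMSL refresh command to a published Power BI dataset.
# Assumptions: SqlServer module; workspace and dataset names are placeholders.
$server = "powerbi://api.powerbi.com/v1.0/myorg/Enterprise Models"

# TMSL full refresh of the dataset (an Analysis Services database)
$tmsl = @"
{
  "refresh": {
    "type": "full",
    "objects": [ { "database": "Sales Analysis" } ]
  }
}
"@

Invoke-ASCmd -Server $server -Credential (Get-Credential) -Query $tmsl
```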

So far, I have scripted existing Power BI datasets, migrated them to Analysis Services projects in Visual Studio, and then deployed them to new datasets from Visual Studio. The lesson here is that migration is a one-way street from Desktop to Visual Studio: whether development starts in Power BI Desktop or Visual Studio, there is no going back to Desktop. Ongoing development must be in Visual Studio.

Definitions: “Report”, “Database” and “Dataset”

In the self-service Power BI world, the term “Report” has been used to mean at least two different things. With the queries and data model managed separately and deployed as a Power BI dataset, the term “Report” in this context means only the report pages with visuals (don’t get me started on “dashboards”). A data model originating from a Power BI Desktop file is published as a dataset. Now that we see these objects through the lens of Analysis Services, a Power BI dataset is a Database.

In migrated projects, continue to author and manage reports in Power BI Desktop, connected to the deployed dataset with an Analysis Services connection. You can switch the connection between an SSAS instance, an AAS instance or a published dataset by using the XMLA endpoint address. I usually use query parameters for the Server and Database so I can easily change these values.

There’s a lot more that I will need to cover in later posts, but I’ll mention a few things briefly.

Best practices: After working with the XMLA endpoint, the choices now seem pretty clear to me but there was some confusion until I got to that point. Best practices will continue to emerge. In light of this and other recent announcements, I can say that I have a much clearer vision for how to plan and manage solutions (and to delineate between self-service and enterprise BI projects) than I did a year ago. If you have questions, please post them in the comments and I’ll do my best to address them in future posts.

Combining & querying datasets: The ability to use the endpoint to query one data model from another model enables some very compelling composite model scenarios – but planning these solutions is important.

Large models & storage: The size and scale limitations are similar to those in AAS and are generally limited only by your chosen Premium capacity. Since models are compressed and typically include only the column data needed for analytic reporting, it is unusual to see datasets larger than a few gigabytes, but Premium will support model sizes up to 400 GB.

By default, datasets deployed using the endpoint are stored in the Azure data centers using single-file storage. This is fine for small models, but for larger partitioned models, large file storage improves development and deployment performance by managing objects as multiple files on the instance backend. There will eventually be a UI for this setting, but it is currently available only through the management API or via PowerShell.
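
For reference, this is roughly what that looks like in PowerShell, assuming the MicrosoftPowerBIMgmt module and the dataset update REST API. The IDs are placeholders and, because this is still in preview, verify the exact endpoint and property name against the current documentation.

```powershell
# Sketch: switch a dataset to large (multi-file) storage format via the REST API.
# Assumptions: MicrosoftPowerBIMgmt module; workspace and dataset IDs are placeholders.
Connect-PowerBIServiceAccount

$workspaceId = "00000000-0000-0000-0000-000000000000"
$datasetId   = "11111111-1111-1111-1111-111111111111"

# Request the PremiumFiles storage mode for the dataset
Invoke-PowerBIRestMethod -Method Patch `
    -Url "groups/$workspaceId/datasets/$datasetId" `
    -Body '{ "targetStorageMode": "PremiumFiles" }'
```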

COVID-19 Three Day Change Report

In my ongoing quest to present the coronavirus data from the CDC and WHO in useful ways, I have created another set of report pages. There are two primary pages that show all countries with the percentage of change in non-recovered cases over the past three days. Our previous reports provided a lot of information for deep analysis, but they didn’t make it easy to make quick comparisons and to see how a state or country was progressing. This latest report connects to the same dataset but lets you see how the numbers have changed in the past three days across countries and states.

Click the following screen image to view the publish-to-web report. Just right-click a country to drill-through and view state/province three-day changes. The CDC and WHO provide state & province case data only for the United States, Canada and Australia.

I’ll update the GitHub repository and the template app with these additions as well.

Non-recovered cases are the number of confirmed cases less the number of recovered cases. The “Confirmed Cases” metric used in the reports we see in most news programs and websites is the all-time accumulated total for a country or state. Attempts to report only the number of active cases have been inconsistent across geographies. This method of calculating the recovery rate seems most reliable at the country level, but we don’t yet have consistent recovery numbers at the state and county level in the unified data set.

Roadtrip, Climbing Case Numbers and Public Perception

Having this information at our fingertips makes a huge difference. My wife and I are currently on an unplanned road trip attending to some urgent family business. In light of the outbreak, we are driving about 3,500 miles rather than flying. Driving through Oregon, Idaho, Utah, Arizona and New Mexico, it is apparent that the situation is perceived very differently in different areas. For example, the Navajo reservation is observing a curfew and no one is allowed to be outside their homes under penalty of law. In New Mexico, absolutely everyone must wear a mask to go into a store. But in other places where we have traveled in the past two days, it is life as usual with little or no restrictions – even though the number of confirmed cases has risen by about fifteen percent in the past three days. Here in New Mexico, cases have increased by about 28%, so we will be wearing our masks and cautiously practicing social distancing.

Countries with Improved COVID-19 Recovery Rates

Is there any good news in the COVID-19 coronavirus data? Everyone is familiar with the tragic, growing number of cases and fatalities globally and in most countries with large populations. However, several countries have increasing recovery rates. You can click here to view the new Power BI report shown in the image below.

The experts keep talking about looking for “the peak”, when the number of people recovering from infection increases faster than new cases. We’re not there yet, but where is the recovery rate increasing? The “Recovery Rate (3 Day Change)” measure is the change in the Recovery Rate (the number of recovered cases divided by the number of confirmed cases) from three days prior to the most recent report date. This report ranks countries by this measure.

On the left side are the countries ranked highest for improved recovery rates over the past three days (pictured as of April 10). The line chart on the right shows the top 20 countries with their trend of recovery rates over the past 30 days.

Among these are countries with small populations or a small number of confirmed COVID cases, so even a small number of recovered cases yields a high ratio. To remove some of these outliers, the report page includes only countries with 100 or more cases. Some could have had a high infection rate that has since slowed, or there could be a lag in the available recovery data. This report uses the dataset from the main COVID-19 Daily Updates report but focuses only on recovery rates.

By the time you read this, the top 15 countries by improved recovery rates may change. Right now, these countries have the highest increase in recovery rates. Why is this and what can we learn from these results?

  • Djibouti
  • Algeria
  • South Africa
  • Guatemala
  • El Salvador
  • Austria
  • Australia
  • Germany
  • Cambodia
  • Azerbaijan
  • Brunei
  • Iraq
  • Venezuela
  • Iceland

I Need Your Questions about Power BI and Paginated Reports

For a new series of posts to include tutorials and “how to” demonstrations for Power BI, SQL Server Reporting Services (SSRS) and Paginated Reports, I need your help to know what questions are most compelling and top of mind.

Please scroll to the Comments section at the bottom of this post and help me create a list with your questions and suggested topics.

What questions do you have and what do you need to learn how to do?

Make suggestions and cast your votes on others’ ideas for video posts, demos and tutorials you would like to see. I will add these to a list and work through as many as I can. The following are some rough topic areas and ideas to get started:

Power BI Solution Questions:

  • Power Query techniques
  • Data modeling techniques
  • DAX calculation techniques
  • Visualization best practices
  • How to design a specific report feature
  • Of the many choices or approaches to address a business need, which is a recommended practice?

Paginated Reports & SSRS Questions:

  • How do Paginated Reports work in the Power BI cloud service?
  • How are Paginated Reports and SSRS licensed from Microsoft?
  • When should I use Power BI vs SSRS or Paginated Reports?
  • Navigating between different report types

New Coronavirus (COVID-19) Daily Updates Report

Updated again on April 7, 2020. See the video tutorial below.

After publishing the original daily COVID-19 cases report on March 14, Johns Hopkins University changed the file format. This required us to publish an updated dataset with a new report, which you can access here.

Since the original solution was posted, I have received a tremendous amount of feedback, suggestions for enhancements and corrections. I introduced the tool in the earlier post. Members of the Microsoft MVP program have been a lot of help, and several of my colleagues from Pragmatic Works have jumped in to add their insights and design support. We’re working on a Power BI app template that can be installed into a Power BI tenant. In the interim, you can access the publicly accessible report through this link.

Using the report, we can track the daily progression of confirmed cases, recovered cases and deaths by country. Where available, these metrics are also broken down by state or province and by US county.

The current project and future updates can be accessed using this GitHub repo.

There are now three different ways to access the COVID-19 Daily Updates Power BI dataset & report:

COVID-19 Coronavirus Daily Updates in Power BI

March 24 update – This post will be updated frequently for the next few days. Please watch for updates.

This is a Power BI report (link) I have developed and published with public access to the web, to be shared with anyone who needs access to this information. It contains daily updates from the Centers for Disease Control and Prevention (CDC) using data curated by the Johns Hopkins University Center for Systems Science & Engineering. To the best of our collective ability the data is accurate, but I cannot make any guarantees. Please validate with other sources before making any decisions with this information.

Additional enhancements and contributions are being made by Microsoft MVPs and community members:

After the initial version, my colleague Robin Abramson spent late evenings and a weekend helping me work through design details. I appreciate members of the Microsoft MVP community, Reza Rad and Miguel Escobar, stepping in to help with query updates to get scheduled data refresh working.

I’m very hopeful that this report will be a valuable resource. It’s been a labor of love and considerably more work than I envisioned, but I will continue to work on enhancements and corrections as I am able – based on feedback. I started working on this project to help a consulting client understand how the virus outbreak is affecting their customer order shipment delays and materials supply chain. That grew into an off-the-clock side project, demanding nights and weekends to get this far. Now, I hope we can use this information to proactively respond to this threat.

Please post comments here or contact me through Twitter if you have feedback, comments or questions.

link to published report

The CDC and WHO began collecting COVID-19 case information from various sources on January 22, with the latest counts of confirmed cases, recovered cases and deaths recorded by country, state or province. Johns Hopkins University collects this data every day and stores the files in a publicly accessible GitHub repository. On March 1st, they began geocoding the location for each case, where available, with latitude and longitude. Location information is sparse but available frequently enough to observe trending.

Pete Gil at Pragmatic Works initially discovered this data source from another report published at worldometers.info. He scraped their web page and created an attractive Power BI report with the latest daily numbers. Springboarding from that project, I went back to the source files and created this new data model with daily snapshots and cumulative updates.

Watch for updates (where I’ll explore the design and more details), but this is a quick tour of the initial set of report pages based on my published data model:

The first page provides some background information about data sources, credits and a report page menu:

Use the bookmark buttons to navigate to each page. You can also use the page number navigation buttons below the report.

The three measures displayed at the top of this and other pages show the latest counts as of the highest selected date in the date range. Use the range slicer to limit the time-series charts and to set the “as of” date for the latest measures (Confirmed, Recovered and Deaths).

Right-click the line/area or stacked column charts to drill-through to details for a specific date.

The Global Cases page displays the aggregate case counts by country and for known locations. You can switch between the three measures using the radio button slicer. This changes every value on this page to use the selected measure.

On every page, you can narrow the view of countries using the Country Region drop-down list slicer. Use this slicer to deselect countries with a high number of cases so you can view and compare other countries. Hold Ctrl to select and deselect multiple items from the slicer item list.

The country shape map definition was created by David Eldersveld. I’ve made some modifications to accommodate the country names provided by the CDC.

I have created three separate pages with Country/State & Province maps. Only a limited number of shape map files are available in Power BI so I have selected the US, Canada and Australia for now.

Either use drillthrough or navigate to the Detail Matrix page. The matrix shows the progression of the selected measure over time within a region. Expand the geography hierarchy to view details by states or provinces for a country. The date range slicer in the top-right can be used to control the range of dates displayed as columns. Within the scope of the displayed data, the largest values are displayed with graduating shades of red.

To narrow the comparison, use the Country Region slicer to filter by country and change the scope of the conditionally colored cells. This allows you to remove irrelevant regions and focus on those of interest.

The novel COVID-19 coronavirus outbreak is a serious matter that is affecting our world in ways that we are only beginning to understand. If we can use this data to better understand what is happening, maybe we can use this information to mitigate the effects of this global event.

What questions do you need to answer and how do you need to use this information?

How can we look at it differently to provide better insight?

How do you need to correlate the state of cases with other data to make decisions and forecast outcomes?

Correlating Corona Virus Outbreaks with Business Data using the ESRI Map in Power BI

At the time of this post, the world is dealing with a serious health and economic crisis. The COVID-19 coronavirus is impacting the lives of people around the world and, in turn, it is affecting world markets and industries in many different ways. For example, I am working with a consulting client whose material shipping and supply chain are being impacted by the outbreak, and they need to quickly respond by making order changes and logistics choices. Forecasting and planning analysts must make adjustments to help the company prepare for these impactful changes.

This demonstration shows you how to create a multi-layer map to correlate current outbreak case locations with your own data, using Power BI and the ESRI map visual. I’m using sample data for demonstration but this is the same technique I am using for our client. In the real data set, correlation is impactful where shipping orders are being delayed and cancelled in areas most affected. For more, visit my blog at SqlServerBiBlog.com.

The technique used in this map report is relatively easy to implement because both data sources are separate feeds to the map service. There are different ways to correlate map data from two different sources. In our solution, we are also integrating the CDC data into the data model, which will allow us to perform comparison calculations. Using AI and machine learning, we may be able to perform predictions and associations.