Demystifying the Power BI XMLA Endpoint

When the Power BI product team began promoting the new XMLA endpoint connectivity for Power BI, I thought it could be a game changer for IT-driven, enterprise-class BI solutions. Now that I have used it in a few real solutions and am actively working with clients, I’m sharing my experience about how it works and what you can do with it. The read-only endpoint is generally available for Premium capacities, and the read/write endpoint is currently in preview.

Before launching into a long-winded introduction, I’ll get to the point:
Using the XMLA endpoint, the Power BI Premium service now includes the capabilities of SQL Server Analysis Services and Azure Analysis Services combined with the newer data modeling capabilities of Power BI. Data models published to the Power BI service now support version control, scripted builds, team application life cycle management, enterprise IT tooling and scripted object management.

…wait, what? How?

Let’s start by reviewing some of the challenges that existed in the Power BI platform before this capability became available:

  • SSAS/AAS enterprise features
    Buried deep within the Power BI cloud service is the SQL Server Analysis Services (SSAS) Tabular “VertiPaq” in-memory analytics database engine. The SSAS engine and on-premises product has many useful features that are not exposed in the Power BI implementation of data models and datasets. These “enterprise-class” features are numerous, including object-level scripting for deployments, source control, application life cycle management, continuous integration and build management, data partitioning, perspectives, translations, currency conversion, KPI definitions, measure groups and calculation groups.
  • SSAS/AAS redundant services & costs
    At one point, Azure Analysis Services offered a superset of data modeling features, but many exclusive features are now available in Power BI datasets in a Premium capacity workspace, so the choice between the two is no longer clear. Power BI and AAS also carry separate costs. As the Power BI service continues to evolve, many new and compelling features are available only in Power BI datasets and not in Analysis Services, such as hybrid mixed-mode data models, aggregations, incremental refresh policies, dataflows and AutoML.
  • Source control, application life cycle, builds & team data model development
    Power BI Desktop is a convenient tool for creating an entire Power BI solution from soup to nuts, but it isn’t optimized for IT-scale projects. A single PBIX file contains connections, queries, data model objects, measures and report pages. The PBIX file structure cannot be easily parsed, compared, branched or merged by established source control and build management tools. Analysis Services projects, on the other hand, are supported by Microsoft development tools like Visual Studio, Azure DevOps and SQL Server Management Studio. Several mature third-party development tools, like Tabular Editor, DAX Studio and the ALM Toolkit, enhance or exceed the features of Microsoft’s development suite.

The XMLA endpoint in Power BI Premium bridges this divide by exposing the underlying Analysis Services database instances in the Power BI service. It really is as simple as that. Each workspace is in fact an SSAS instance, and each Power BI dataset is an SSAS database. Whatever you can do with traditional SSAS through management and development tools, TMSL or XMLA scripts, PowerShell or API automation, you can now do with Power BI.
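
To make this concrete, here is a minimal sketch using PowerShell and the AMO client libraries that ship with the SqlServer module (assuming those libraries load with the module on your machine). It treats a Premium workspace exactly like an Analysis Services server; the workspace name is a hypothetical placeholder, and you may be prompted for Azure Active Directory credentials when connecting:

    # Requires the SqlServer PowerShell module, which includes the AMO client libraries
    Import-Module SqlServer

    # The workspace connection address behaves like an Analysis Services server name
    $server = New-Object Microsoft.AnalysisServices.Tabular.Server
    $server.Connect("powerbi://api.powerbi.com/v1.0/myorg/Sales Analysis")

    # Each published dataset in the workspace appears as a tabular database
    $server.Databases | Select-Object Name, CompatibilityLevel, EstimatedSize
    $server.Disconnect()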

Use a Premium capacity workspace

If you have Premium capacity set up in your tenant, you are good to go. Several of our consulting clients do, but to experiment and test new features, I use my own private tenant. In lieu of paying for Premium capacity, I can set up an embedded capacity in the Azure portal. An embedded capacity sized at A4 is equivalent to a P1 Premium capacity, but it can be paused when I don’t need to use it. The cost is about $8.50 per hour, so MAKE SURE TO PAUSE WHEN DONE.

Here’s the Power BI Embedded capacity I created in my Azure tenant, sized to A4 and currently paused. It takes about a minute to start or pause the embedded service.
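
If you prefer to script the start and pause steps rather than clicking through the Azure portal, the Az.PowerBIEmbedded PowerShell module can do both. A minimal sketch, using hypothetical capacity and resource group names:

    # Requires the Az.PowerBIEmbedded module and an authenticated Azure session
    Connect-AzAccount

    # Resume (start) the embedded capacity before a working session
    Resume-AzPowerBIEmbeddedCapacity -Name "pbidevcapacity" -ResourceGroupName "pbi-dev-rg"

    # Suspend (pause) the capacity when finished, so the hourly charge stops
    Suspend-AzPowerBIEmbeddedCapacity -Name "pbidevcapacity" -ResourceGroupName "pbi-dev-rg"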

After starting the capacity, I jump over to the Power BI Admin Portal and click the capacity name to change settings. Note that because my capacity was added as an embedded service, it shows up in the Power BI Embedded page but the settings are the same for a Premium capacity.

Enable the Read/Write endpoint

By default, the XMLA endpoint will be set to Read Only. Under the Workloads group on the Capacity settings page, switch XMLA endpoint to Read Write and then save the changes.

Now any workspace assigned to Premium capacity can be accessed with an XMLA endpoint address. I’ve created a test workspace in my tenant to test migrating an SSAS tabular database to a Power BI dataset. I select the workspace in the Power BI portal and, in the workspace settings, make sure that Dedicated capacity is switched on. I know that it is because of the diamond icon next to the workspace name. The Workspace Connection address is shown below; I click the Copy button to copy the address to the clipboard.
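
The copied address follows a predictable pattern: the powerbi:// protocol, the service URL, the tenant segment (myorg) and the workspace name. For illustration, with a hypothetical workspace name:

    # XMLA endpoint address pattern: powerbi://<service>/v1.0/<tenant>/<workspace name>
    $workspaceConnection = "powerbi://api.powerbi.com/v1.0/myorg/Sales Analysis"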

Now I can use that address in any tool that knows how to connect to SSAS Tabular at compatibility level 1450 or above. Let’s try connecting with SQL Server Management Studio; this requires a newer version of SSMS, 18.4 or higher.

Connect to SQL Server Analysis Services and paste the XMLA endpoint address as the Server name. You must use Azure Active Directory authentication. If your organization uses MFA, you can choose that option; I will choose regular AAD authentication.

…and voilà! Connected.

I can run queries and scripts here in DAX, MDX, XMLA or TMSL. Most, but currently not all, capabilities are supported in the preview. In particular, if you have RLS roles, the role members must be dropped and then added back in the Power BI portal.
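
For example, the same TMSL refresh script you would run in an SSMS XMLA query window against SSAS also works against a published dataset. Here is a sketch that sends one from PowerShell using the SqlServer module’s Invoke-ASCmd cmdlet; the workspace and dataset names are hypothetical:

    Import-Module SqlServer

    # A TMSL refresh command, identical to one you would run in an SSMS XMLA query window
    $tmsl = '{ "refresh": { "type": "full", "objects": [ { "database": "Sales Dataset" } ] } }'

    # Execute the script against the workspace's XMLA endpoint
    Invoke-ASCmd -Server "powerbi://api.powerbi.com/v1.0/myorg/Sales Analysis" -Query $tmsl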

So far, I have scripted existing Power BI datasets, migrated them to Analysis Services projects in Visual Studio, and then deployed them as new datasets from Visual Studio. The lesson here is that migration is a one-way street from Desktop to Visual Studio: whether development starts in Power BI Desktop or in Visual Studio, once a model is in Visual Studio there is no going back to Desktop. Ongoing development must continue in Visual Studio.

Definitions: “Report”, “Database” and “Dataset”

In the self-service Power BI world, the term “Report” has been used to mean at least two different things. With the queries and data model managed separately and deployed as a Power BI dataset, the term “Report” in this context means only the report pages with visuals (don’t get me started about “dashboards”). A data model originating from a Power BI Desktop file is published as a dataset. Now that we see these objects through the lens of Analysis Services, a Power BI dataset is a Database.

In migrated projects, continue to author and manage reports in Power BI Desktop, connected to the deployed dataset through an Analysis Services connection. You can actually switch the connection between an SSAS instance, an AAS instance or a published dataset using the XMLA endpoint address. I usually use query parameters for the Server and Database so I can easily change these values.
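
All three flavors of Analysis Services accept the same style of server address, which is what makes the connection switch possible. For illustration, the three forms look like this (all values are hypothetical):

    # On-premises SSAS instance, Azure Analysis Services server, and Power BI workspace
    $ssasServer = "localhost\TABULAR"
    $aasServer  = "asazure://westus.asazure.windows.net/contosoas"
    $pbiServer  = "powerbi://api.powerbi.com/v1.0/myorg/Sales Analysis"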

There’s a lot more that I will need to cover in later posts, but I’ll mention a few things briefly.

Best practices: After working with the XMLA endpoint, the choices now seem fairly clear to me, but there was some confusion before I reached that point. Best practices will continue to emerge. In light of this and other recent announcements, I have a much clearer vision for how to plan and manage solutions (and to delineate between self-service and enterprise BI projects) than I did a year ago. If you have questions, please post them in the comments and I’ll do my best to address them in future posts.

Combining & querying datasets: The ability to use the endpoint to query one data model from another model enables some very compelling composite model scenarios, but planning these solutions is important.

Large models & storage: The size and scale limits are similar to those in AAS and are generally bounded only by your chosen Premium capacity. Since models are compressed and typically include only the column data needed for analytic reporting, it is unusual to see datasets larger than a few gigabytes, but Premium will support model sizes up to 400 GB.

By default, datasets deployed using the endpoint are stored in the Azure data centers using single file storage. This is fine for small models, but for larger partitioned models, large file storage will improve development and deployment performance by managing objects in multiple files on the service back end. There will eventually be a UI for this setting, but it is currently available only through the management API or PowerShell.
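
A minimal sketch of the PowerShell route, assuming the MicrosoftPowerBIMgmt module; the workspace ID is a placeholder, and the -DefaultStorageFormat parameter reflects my understanding of the preview, so verify it against the current documentation:

    # Requires the MicrosoftPowerBIMgmt module
    Connect-PowerBIServiceAccount

    # Switch the workspace default to large file storage (the ID below is a placeholder)
    Set-PowerBIWorkspace -Scope Organization -Id "00000000-0000-0000-0000-000000000000" -DefaultStorageFormat Large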

25 thoughts on “Demystifying the Power BI XMLA Endpoint”

  1. Hi Paul,

    Thank you for the excellent blog, which addresses most of my queries. I too have an issue: a model developed in VS and deployed to a Power BI Premium workspace is not displaying default aggregate functions like sum, average, min, max, count, etc., whereas a model deployed to an Embedded workspace with the same tables has the aggregations appearing.

    Is it a difference in workspace features between Premium and Embedded that is causing this? If so, how would we address the issue? Please advise what is preventing the aggregate functions from displaying in the model deployed to the Power BI service.

  2. Paul, first of all, this is a really nice and detailed article. Thanks for it!
    Did you get a chance to implement RLS, object-level security or translations on the PBI model? I heard RLS, OLS and translations are not yet supported in PBI models. Any thoughts?

  3. FYI, if anyone is getting the following error when deploying a model to the workspace via Visual Studio:

    “Cannot deploy metadata. Reason: An error occurred while connecting to the server”.

    I discovered that because my AAD account is MFA-enabled, this will not work. I tried on the same workstation/config using a non-MFA-enabled account and the push to the workspace succeeded via Visual Studio.

    A workaround would be to open the model in TabularEditor and deploy to the PBI workspace from it.

    • Update: I contacted Microsoft about this and they say MFA should be supported via VS and listed out things to check; however, after going through their checklist it still doesn’t work for my MFA-enabled account. Maybe it’s something with my account, but TabularEditor is able to deploy using my account. I entered a support ticket with them and will update this comment once it is resolved.

      https://github.com/MicrosoftDocs/powerbi-docs/issues/2379

  4. Pingback: I Am Done Using Visual Studio! | Paul Turley's SQL Server BI Blog

  5. To migrate a PBI dataset to a VS project, couldn’t you connect to the service using SSMS, script out the database to an XMLA file, connect to your SSAS instance and run the XMLA script to create the database in SSAS, then use the Import from Server feature in VS to create a new project?

  6. Paul – Could you provide a few more details on how to import a PBI dataset into Visual Studio? I tried using the VS 2019 Import Tabular Model from Server option, but it comes up with an error – importing from a Power BI endpoint is not supported.

    I’m able to connect using Tabular Editor & SSMS perfectly fine, and I am also able to deploy a tabular model onto the PBI service from VS.

    I see that you’ve scripted out the PBI database and somehow imported that into Visual Studio?

    Thanks

  7. Great blog Paul! A few queries:
    1. After deploying an AAS model from Visual Studio to a Premium workspace, have you tried to schedule AAS partitions (or dataset partitions)? I can connect to the Premium workspace via SSMS and refresh manually, but I am not able to schedule the refresh via Runbooks.
    2. For large IT models, Visual Studio is a great tool for data modeling. But Power BI Desktop provides an out-of-the-box incremental refresh feature, and in all of Christian Wade’s demos he prefers Desktop for modeling, not Visual Studio. Soon, with ALM Toolkit and Tabular Editor added to Desktop, should we prefer moving data modeling to Power BI Desktop?

    Thanks!

    • Sunny, there’s a lot of “it depends” to this question and the choices are not perfectly clear, with pros and cons either way. If the features of Power BI Desktop meet all of the requirements, it is a more streamlined and convenient tool to use. Incremental Refresh is an effective way to partition tables automatically but you cannot script or manually process any of those partitions. Using SSDT will give you that kind of flexibility but you lose the more modern feel of Power BI Desktop after making that transition. I also prefer to use Desktop but SSDT gives me more control for larger, formal IT data model projects.
      You can use PowerShell to automate partition processing and that script will run from Azure Data Factory.
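      As a minimal sketch (the workspace, database, table and partition names below are hypothetical), the SqlServer module’s Invoke-ProcessPartition cmdlet handles this:

          Import-Module SqlServer

          # Process one partition in a published dataset, just as you would against SSAS
          Invoke-ProcessPartition -Server "powerbi://api.powerbi.com/v1.0/myorg/Sales Analysis" `
              -DatabaseName "Sales Dataset" -TableName "Sales" `
              -PartitionName "Sales 2019" -RefreshType "Full"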

  8. Great article Paul, thanks! I am trying to find a way to develop composite models in Visual Studio 2019, with the 1500 compatibility level set, and the plan is to deploy to the XMLA endpoint in the Power BI service…
    I am battling to see how this works; is it not supported by VS? It seems that the entire model must be either Import or DirectQuery – a mix of the two does not seem to be available?
    Additionally, the “modern get data experience” seems to have a severely reduced set of connectors to choose from.

    Your input would be much appreciated!

  9. Paul, thanks for a great article. I was interested in your observation “The ability to use the endpoint to query one data model from another enables some very compelling composite model scenarios”

    Does this mean that the DAX in one cube could query data in another cube?

    • John, yes. A table in one data model (a published PBIX file) can be populated by a DAX query that imports data from another published dataset. The query results are cached, so it is not a live connection to the second data model. I am doing this in a project for a client and it is working nicely. Later this year we are likely to have live composite models, but this is a good option when you just need to cache results or copy a table from another data model.

  10. Hi Paul – Excellent blog.
    I have used the XMLA endpoint to connect Profiler and trace queries, but the important takeaway from this blog is using XMLA write to deploy an SSAS project onto a PBI workspace and to import the dataset back into VS.

    Questions:

    1) I have used the XMLA endpoint within PBI Desktop to combine a local data source like CSV with data from a PBI dataset, using the XMLA endpoint to connect to the dataset. I’m guessing this is the current workaround for the composite models feature being released in Sept 2020? Do you have any more info about the composite model feature and how it will be different from this?

    2) Do you recommend using Visual Studio to begin creating an SSAS model as a starting point, to then be deployed to a PBI workspace? An incompatible feature list would be good to have; for example, the roles that you mentioned need to be recreated?

    3) For a dataset having 50 dataflows with 50 entities (1 entity per dataflow), with DirectQuery or a live connection, will the usual processing options like Process Data + Process Recalc work with the XMLA endpoint?

    4) I suppose you don’t plan on using release pipelines for deployments if VS is being used?

    Thanks

    • Rakesh, these are great questions. I’ll comment on them in the same order:
      1) The XMLA endpoint, like any connection to Analysis Services, supports a live connection only when it is the only connection in the model. Currently, in a composite model (when other query connections exist), only import mode is supported. This means that DAX or MDX queries issued to SSAS, AAS or a published dataset via the XMLA endpoint will create a copy of that data in the new model. The product teams are working on adding support for DirectQuery connections to SSAS, which is mentioned in the Power BI product roadmap: https://docs.microsoft.com/en-us/power-platform-release-plan/2020wave1/business-intelligence/planned-features

      2) With the ability to migrate from a .PBIX to .BIM (Desktop to Visual Studio), I don’t think it is necessary to start with Visual Studio. In many ways, Power BI Desktop is an easier tool and I prefer to start there. In a formal IT project, I would probably start with Visual Studio. I agree that a compatibility list would be great to have.

      3) I have to admit that I’m not certain about cross-compatibility with dataflows. As far as I know, using dataflows doesn’t diminish the ability to partition and process a model but I’m not certain and actually exploring this question for a client at the moment. I’ll be interested in your experience. Disregarding the dataflows question, you have all the processing options when using the XMLA endpoint.

      4) Release pipelines manage the lifecycle of workspaces and workspace apps that contain reports and dashboards in addition to datasets. Using the XMLA endpoint to manage dataset development & deployment could be used either WITH or INSTEAD OF datasets managed in a deployment pipeline. This is a compelling question and I think it will take some time to sort out the best practice.
