Demystifying the Power BI XMLA Endpoint

When the Power BI product team began promoting the new XMLA endpoint connectivity for Power BI, I thought it could be a game changer for IT-driven, enterprise-class BI solutions. Now that I have used it in a few real solutions and am actively working with clients, I’m sharing my experience of how it works and what you can do with it. The read-only endpoint is generally available for Premium capacities and the read/write endpoint is currently in preview.

Before my long-winded introduction, I’ll get to the point:
Using the XMLA endpoint, the Power BI Premium service now combines the capabilities of SQL Server Analysis Services and Azure Analysis Services with the newer data modeling capabilities of Power BI. Data models published to the Power BI service now support version control, scripted builds, team application life cycle management, enterprise IT tooling and scripted object management.

…wait, what? How?

Let’s just start by reviewing some of the challenges that have existed in the Power BI platform prior to the availability of this capability:

  • SSAS/AAS enterprise features
    Buried deep within the Power BI cloud service is the SQL Server Analysis Services (SSAS) Tabular model “Vertipaq” in-memory analytics database engine. The SSAS engine and on-prem product have many useful features not exposed in the Power BI implementation of data models and datasets. These “enterprise class” features are numerous, including object-level scripting for deployments, source control, application life cycle and continuous integration/build management, data partitioning, perspectives, translations, currency conversion, KPI definitions, measure groups and calculation groups.
  • SSAS/AAS redundant services & costs
    At one point, Azure Analysis Services was a superset of data modeling features, but now many exclusive features are available in Power BI datasets in a Premium capacity workspace, so the choice between them is no longer clear-cut. Power BI and AAS also have separate costs. As the Power BI service continues to evolve, many new and compelling features are available only in Power BI datasets and not in Analysis Services, such as hybrid mixed-mode data models, aggregations, incremental refresh policies, dataflows and AutoML.
  • Source control, Application life cycle, builds & Team data model development
    Power BI Desktop is a convenient tool for creating an entire Power BI solution from soup to nuts but isn’t optimized for IT-scale projects. A single PBIX file contains connections, queries, data model objects, measures and report pages. The PBIX file structure cannot be easily parsed, compared, branched or merged by established source control and build management tools. Analysis Services projects, on the other hand, support Microsoft development tools like Visual Studio, Azure DevOps and SQL Server Management Studio. Several mature third-party development tools like Tabular Editor, DAX Studio and the ALM Toolkit enhance or exceed the features of Microsoft’s development suite.

The XMLA endpoint in Power BI Premium bridges this divide by exposing the underlying Analysis Services database instances in the Power BI service. It really is as simple as that. Each workspace is in fact an SSAS instance and each Power BI dataset is an SSAS database. Anything you can do with traditional SSAS using management and development tools, TMSL or XMLA scripts, PowerShell or API automation, you can do with Power BI.
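To make that concrete, here is a minimal sketch using the Invoke-ASCmd cmdlet from the SqlServer PowerShell module to run a standard SSAS DMV query against a workspace; the workspace name is hypothetical:

```powershell
# Requires the SqlServer module: Install-Module SqlServer
Import-Module SqlServer

# The XMLA endpoint address of a Premium workspace
# ("Sales Analytics" is a hypothetical workspace name).
$server = "powerbi://api.powerbi.com/v1.0/myorg/Sales Analytics"

# Each dataset in the workspace shows up as a database on this "server",
# so a standard Analysis Services DMV query lists them.
Invoke-ASCmd -Server $server -Query "SELECT [CATALOG_NAME] FROM `$SYSTEM.DBSCHEMA_CATALOGS"
```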

Use a Premium capacity workspace

If you have Premium capacity set up in your tenant, you are good to go. Several of our consulting clients do, but to experiment and test new features, I use my own private tenant. In lieu of paying for Premium capacity, I can set up an embedded capacity in the Azure portal. An embedded capacity sized to A4 is equivalent to a P1 Premium capacity, but it can be paused when I don’t need to use it. The cost is about $8.50 per hour, so MAKE SURE TO PAUSE IT WHEN DONE.

Here’s the Power BI Embedded capacity I created in my Azure tenant, sized to A4 and currently paused. It takes about a minute to start or pause the embedded service.
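If you prefer to script this, the capacity can also be paused and resumed with the Az PowerShell cmdlets. A minimal sketch, assuming hypothetical capacity and resource group names:

```powershell
# Requires the Az.PowerBIEmbedded module: Install-Module Az.PowerBIEmbedded
Connect-AzAccount

# Resume the embedded capacity before a work session...
Resume-AzPowerBIEmbeddedCapacity -Name "pbidevcapacity" -ResourceGroupName "pbi-dev-rg"

# ...and pause it when done so the hourly charge stops.
Suspend-AzPowerBIEmbeddedCapacity -Name "pbidevcapacity" -ResourceGroupName "pbi-dev-rg"
```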

After starting the capacity, I jump over to the Power BI Admin Portal and click the capacity name to change settings. Note that because my capacity was added as an embedded service, it shows up in the Power BI Embedded page but the settings are the same for a Premium capacity.

Enable the Read/Write endpoint

By default, the XMLA endpoint is set to Read Only. Under the Workloads group on the Capacity settings page, switch the XMLA Endpoint setting to Read Write and then save the changes.

Now any workspace assigned to Premium capacity can be accessed with an XMLA endpoint address. I’ve created a workspace in my tenant to test migrating an SSAS tabular database to a Power BI dataset. I select the workspace in the Power BI portal and, in the workspace settings, make sure that Dedicated capacity is switched on; the diamond icon next to the workspace name confirms that it is. The Workspace Connection address appears below that setting. Click the Copy button to put the address on the clipboard.
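The address follows a predictable pattern: the powerbi:// protocol, the service URL and the workspace name. For example, with a hypothetical workspace name:

```
powerbi://api.powerbi.com/v1.0/myorg/Dataset Migration Test
```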

Now, I can use that address in any tool that knows how to connect to SSAS tabular at compatibility level 1450 or above. Let’s try to connect using SQL Server Management Studio. I need a newer version of SSMS, 18.4 or higher.

Connect to SQL Server Analysis Services and paste the XMLA endpoint address as the Server name. You need to use Azure Active Directory authentication. If your organization uses MFA, you can use that option, but I will choose regular AAD authentication.

…and voilà! Connected.

I can run queries and scripts here in DAX, MDX, XMLA or TMSL. Most, but currently not all, capabilities are supported in the preview. In particular, if you have RLS roles, the role members must be dropped and then added back in the Power BI portal.
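For example, a TMSL script like this sketch, pasted into an XMLA query window in SSMS, runs a full refresh of a dataset just as it would against an on-prem tabular instance (the database name is hypothetical):

```json
{
  "refresh": {
    "type": "full",
    "objects": [
      { "database": "Sales Dataset" }
    ]
  }
}
```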

So far, I have scripted existing Power BI datasets, migrated them to Analysis Services projects in Visual Studio, and then deployed to a new dataset from Visual Studio. The lesson here is that migration is a one-way street: whether development starts in Power BI Desktop or Visual Studio, once a model has been migrated to Visual Studio there is no going back to Desktop. Ongoing development must be in Visual Studio.

Definitions: “Report”, “Database” and “Dataset”

In the self-service Power BI world, the term “Report” has been used to mean at least two different things. With the queries and data model managed separately and deployed as a Power BI dataset, the term “Report” in this context means only report pages with visuals (don’t get me started about “dashboards”). A data model originating from a Power BI Desktop file is published as a dataset. Now that we see these objects through the lens of Analysis Services, a Power BI dataset is a Database.

In migrated projects, continue to author and manage reports in Desktop connected to the deployed dataset as an Analysis Services connection. You can actually switch the connection between an SSAS instance, an AAS instance or a published dataset using the XMLA endpoint address. I usually use query parameters for the Server and Database to easily change these values.

There’s a lot more that I will need to cover in later posts, but I’ll mention a few things briefly.

Best practices: After working with the XMLA endpoint, the choices now seem pretty clear to me, but there was some confusion until I reached that point. Best practices will continue to emerge. In light of this and other recent announcements, I can say that I have a much clearer vision for how to plan and manage solutions (and how to delineate between self-service and enterprise BI projects) than I did a year ago. If you have questions, please post them in the comments and I’ll do my best to address them in future posts.

Combining & querying datasets: The ability to use the endpoint to query one data model from another model enables some very compelling composite model scenarios – but planning these solutions is important.

Large models & storage: Size and scale limits are similar to those in AAS and are generally determined by your chosen Premium capacity. Since models are compressed and typically include only the column data needed for analytic reporting, it is unusual to see datasets larger than a few gigabytes, but Premium will support model sizes up to 400 GB.

By default, datasets deployed using the endpoint are stored in the Azure data centers using single-file storage. This is fine for small models, but for larger partitioned models, using large-file storage will improve development and deployment performance by managing objects in multiple files on the backend. There will eventually be a UI for this setting, but it is currently available only through the management API or via PowerShell.
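As a hedged sketch of the API route (check the current docs for the exact property name), the storage format can be switched with the Power BI management module by patching the dataset’s targetStorageMode property; the dataset ID below is a placeholder:

```powershell
# Requires the MicrosoftPowerBIMgmt module: Install-Module MicrosoftPowerBIMgmt
Connect-PowerBIServiceAccount

# Placeholder ID; look it up with Get-PowerBIDataset.
$datasetId = "11111111-1111-1111-1111-111111111111"

# Switch the dataset to large-model (premium files) storage.
Invoke-PowerBIRestMethod -Method Patch `
    -Url "datasets/$datasetId" `
    -Body '{ "targetStorageMode": "PremiumFiles" }'
```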

Paul Turley

Microsoft Data Platform MVP, Principal Consultant for 3Cloud Solutions Specializing in Business Intelligence, SQL Server solutions, Power BI, Analysis Services & Reporting Services.

48 thoughts on “Demystifying the Power BI XMLA Endpoint”

  1. Hi Paul,

    Could you please advise why we get the error below in the Power BI service? The error popped up when the tabular model was deployed to the service; we tried creating a report on the dataset and were also unable to see the model view (the list of attributes under the tables) in the service. Even though the dataset has not been refreshed even once through the service, it immediately throws the error message below.

    NOTE: Currently all our workspaces are under Premium Capacity.

    Error: “Underlying ErrorCapacity operation failed with error code InternalError”

    Regards,
    Haritha

    1. I’m not certain about the question, since Power BI and Google BigQuery are two separate services from different vendors. I’m sure there is a way to use BigQuery data in a Power BI data model, but I don’t know what the use case would be for that; maybe a DirectQuery connection? Sorry, I don’t know for sure.

  2. Hi Paul,

    Could you please advise how to enable and test “Perspectives” defined in a tabular model built in Visual Studio when it is published to the Power BI service. I’ve researched a few blogs and understand that one way of testing perspectives is through “Analyze in Excel”, which connects to the Power BI dataset. I’m more interested to know if we can test them directly in the Power BI service.

    I’m aware of the options for models built directly in Power BI Desktop, but not for tabular models.

    1. Hello, Haritha
      After perspectives are added to a tabular model, they will work in any client tool when you use a live SSAS connection. In Power BI Desktop you can use the XMLA endpoint address to connect to the dataset as an SSAS database. If you connect to the model from Desktop as a Power BI dataset, perspectives are not used. The reason you are able to choose and switch perspectives in Excel is that Excel views a model as if it were a multidimensional cube, and perspectives are part of the legacy SSAS multidimensional architecture.

      1. Thank you so much for the response, Paul. It’s good to know about the connectivity of tabular models through the service now. Could you please advise whether we need to set the XMLA endpoint property to “Read/Write” for perspectives to switch accordingly when chosen while creating a report in the Power BI service. I believe the only link for talking to the tabular model when querying the dataset in the service is through the XMLA endpoint?

  3. Hi Paul,
    we’ve been moving all our on-premises SSAS tabular models to PBI using XMLA endpoints, and so far it has been a success; thank you for the information provided. We would like to expose the data via web services; do you know if that is possible using these endpoints? Thank you

  4. I’m having trouble using Invoke-AsCmd with service principal credentials against the PBI XMLA endpoint. Is it something related to my org’s setup, or is there something to be aware of regarding the XMLA endpoint and service principal authentication?

  5. I have been successful connecting Tableau just as a test. My company uses a lot of Tibco Spotfire; any chance that visualization tool could connect? I’ve tried, but it seems like it’s expecting a cube vs. a tabular model. Maybe I need a different driver?

    1. Hi Colin, I have not yet used the XMLA endpoint to connect with Tableau. I do know that Tableau uses a legacy SSAS connector that expects a multidimensional model (cube), but this should work with a tabular model. Excel is the same way… it sees a Power BI dataset as a cube. To troubleshoot, try connecting your Tableau file to a local instance of the model in Power BI Desktop (you can use DAX Studio to get the localhost address and port). I’m curious to know if there is a compatibility limitation of some kind.

      1. Hi, yes I have no problem connecting Tableau, but I want to connect Spotfire and it comes back with the error “External error: The server sent an unrecognizable response.” when trying to add the data.

      2. I was able to connect to the local model; I wonder if it’s MFA that’s the problem 🙁

  6. Hi Paul, that was an excellent read. I am trying all those new possibilities (since the XMLA endpoint went read/write) on my Power BI dataset with Tabular Editor, so now I have created perspectives and I would like to use them in a report. I thought I would connect to this dataset by creating a report with an Analysis Services connector pointing to my dataset, and I would expect a live connection. In Desktop that works fine (and with perspectives too!), but once I publish it to the Power BI service (Premium capacity) it gives me warnings about not refreshing, a not-well-configured gateway (?) and credentials. Am I doing something that is not supported, or should I continue troubleshooting and trying to fix the errors I get? Kind regards, Hanneke

  7. Hi Paul,

    Does the XMLA feature allow combining datasets together? Or do we have any other options to combine datasets and view/create reports in Power BI with a combined dataset?

  8. Hi Paul,

    Thank you for the excellent blog, which addresses most of my queries. I too have an issue: a model developed in VS and deployed to a Power BI Premium workspace is not displaying default aggregate functions like sum, average, min, max, count, etc., whereas a model deployed to an Embedded workspace with the same tables has the aggregations appearing.

    Is a difference in workspace features between Premium and Embedded making the difference? If so, how would we address the issue? Please advise what is causing the aggregate functions to be missing from the model deployed to the Power BI service.

    1. Hi Haritha, I need more information to understand your situation. Where do you not see the aggregate functions? In Power BI Desktop when creating a report? In SSDT?

      1. Hi Paul,

        Thank you so much for responding to my query.

        The model.bim file developed in VS as a tabular model project is published to a DevOps workspace in the Power BI service, on which we do the reporting. I’m unable to find the default aggregate functions there, whereas a model built completely in Power BI Desktop has all the features visible in the service for reporting.

        Also, we have a big model which is currently in the development stage, and we are validating the model once published to the workspace in a service that has both Premium and Embedded capacities. Surprisingly, with a test model built from a minimal set of tables and published to an Embedded workspace, I was able to see the default aggregate functions appearing, but the same was not working with the model published to the Premium workspace.

        Are there any unsupported features between workspace types, like Premium vs. Embedded, that might explain this? If not, please advise on the root cause of the issue.

      2. After you published the model to the workspace, where and how are you looking for the default aggregation? If we’re talking about the same thing, I think you are referring to the “Summarize By” property for a numeric column, which should be available in a data model regardless of how it is created or where it is deployed.
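        For reference, here is a minimal sketch of how that property appears on a column definition in a Visual Studio model.bim file (the column name is hypothetical):

```json
{
  "name": "SalesAmount",
  "dataType": "double",
  "sourceColumn": "SalesAmount",
  "summarizeBy": "sum"
}
```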

      3. I’m unable to see the default functions when I right-click on any numeric column of a table in the model while creating a report in the service.

        It does not show me the list of functions that appear by default for calculating numeric/non-numeric columns. But for the limited set of tables in the test model, I am able to see the aggregate functions appearing on every column.

        The model where I see the issue has 39 tables, and we have yet to add more tables to it.

        Also, the “Summarize By” option was set to Default for the numeric fields in the model.bim file; setting the property of every column to just one function would limit the use of other required functions, which is not the expected behavior.

        Please advise if we are missing something.

      4. Hi Paul,

        Please advise on the issue related to the missing default aggregate functions while creating a report in the service on a tabular model built in VS.

  9. Paul, first of all, this is a really nice and detailed article… thanks for it!
    Did you get a chance to implement RLS, object-level security or translations on the PBI model? I heard RLS, OLS and translations are not yet supported in PBI models… any thoughts?

  10. FYI, if anyone is getting the following error when deploying a model to the workspace via Visual Studio:

    “Cannot deploy metadata. Reason: An error occurred while connecting to the server”.

    I discovered that because my AAD account is MFA-enabled, this will not work. I tried on the same workstation/config using a non-MFA-enabled account and the push to the workspace succeeded via Visual Studio.

    A workaround would be to open the model in Tabular Editor and deploy to the PBI workspace from it.

    1. Update: I contacted Microsoft about this and they say MFA should be supported via VS, and they listed things to check; however, after going through their checklist it still doesn’t work for my MFA-enabled account. Maybe it’s something with my account, but Tabular Editor is able to deploy using it. I entered a support ticket with them. I will update this comment in the future once it is resolved.

      https://github.com/MicrosoftDocs/powerbi-docs/issues/2379

      1. That’s unfortunate, and surprising, because Verigence has been around for a long time and has made great products. Thanks for sharing this.

  11. To migrate a Power BI service dataset to a VS project, couldn’t you connect to the service using SSMS, script out the database to an XMLA file, connect to your SSAS instance and run the XMLA script to create the database in SSAS, then use the Import from Server feature in VS to create a new project?

  12. Paul – Could you provide a few more details on how to import a PBI dataset into Visual Studio? I tried using VS 2019’s Import Tabular Model from Server, but this option comes up with an error – “Importing from Power BI End point is not supported.”

    I’m able to connect using Tabular Editor & SSMS perfectly fine, and I’m also able to deploy a tabular model onto the PBI service from VS.

    I see that you’ve scripted out the PBI database and somehow imported that into Visual Studio?

    Thanks

  13. Great blog Paul! A few queries:
    1. After deploying an AAS model from Visual Studio to a Premium workspace, have you tried to schedule AAS partitions (or dataset partitions)? I can connect to the Premium workspace via SSMS and refresh manually, but I am not able to schedule via Runbooks.
    2. For large IT models, Visual Studio is a great tool for data modeling. But Power BI Desktop provides an out-of-the-box incremental refresh feature, and in all of Christian Wade’s demos he prefers Desktop for modeling, not Visual Studio. Soon, with the ALM Toolkit and Tabular Editor added to Desktop, should we prefer moving data modeling to Power BI Desktop?

    Thanks!

    1. Sunny, there’s a lot of “it depends” to this question and the choices are not perfectly clear, with pros and cons either way. If the features of Power BI Desktop meet all of the requirements, it is a more streamlined and convenient tool to use. Incremental Refresh is an effective way to partition tables automatically but you cannot script or manually process any of those partitions. Using SSDT will give you that kind of flexibility but you lose the more modern feel of Power BI Desktop after making that transition. I also prefer to use Desktop but SSDT gives me more control for larger, formal IT data model projects.
      You can use PowerShell to automate partition processing and that script will run from Azure Data Factory.
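      Here is a minimal sketch of that approach, sending a TMSL refresh command for specific partitions through Invoke-ASCmd (the workspace, dataset, table and partition names are hypothetical):

```powershell
# Requires the SqlServer module: Install-Module SqlServer
Import-Module SqlServer

$server = "powerbi://api.powerbi.com/v1.0/myorg/Sales Analytics"

# TMSL refresh command targeting two partitions; listing multiple
# objects in one command lets the engine process them in parallel.
$tmsl = @'
{
  "refresh": {
    "type": "dataOnly",
    "objects": [
      { "database": "Sales Dataset", "table": "Orders", "partition": "Orders 2019" },
      { "database": "Sales Dataset", "table": "Orders", "partition": "Orders 2020" }
    ]
  }
}
'@

Invoke-ASCmd -Server $server -Query $tmsl
```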

      1. Paul, any chance you have a working example of processing partitions via PowerShell? The comment author (Sunny) reports his attempt using Runbooks did not work. I assume he took an example that processed an Azure SSAS model. Should such an example work?

      2. I have confirmed that the Azure Analysis Services PowerShell cmdlets do work for processing Power BI datasets. At first the commands were not working for me either, but you have to be careful with the parameters; I was using some parameters that are for multidimensional models only. RTFM. Once I figured this out, it started working. I think the best approach is to use Invoke-ASCmd and pass it an XMLA script to process your desired partitions. This way you can process partitions in parallel. Maybe it’s possible to process in parallel using the Invoke-ProcessPartition cmdlet, but I don’t know how. Here are some links that helped me out.

        https://docs.microsoft.com/en-us/azure/analysis-services/analysis-services-powershell

        https://randombi.com/2020/04/27/migrating-from-azure-analysis-services-to-power-bi-premium/

        https://www.kasperonbi.com/processing-partitions-tables-in-parallel/

      3. Travis, I have seen this as well. I think the product team came to the conclusion that there was no need to duplicate every operation with a cmdlet when each is already supported by an XMLA/TMSL command. Thanks for sharing this. I will quote your comment when I write the post about deployment and processing.

  14. Great article Paul, thanks! I am trying to find a way to develop composite models in Visual Studio 2019, with the 1500 compatibility level set, and the plan is to deploy to the XMLA endpoint in the Power BI service…
    1. I am battling to see how this works; is it not supported by VS? It seems that the entire model must be either Import or DirectQuery – a mix of the two does not seem to be available?
    2. Additionally, the “modern get data experience” seems to have a severely reduced set of connectors to choose from.

    Your input would be much appreciated!

    1. Even though I am now using “Workspace database” mode in VS2019, I am unable to create a mix of Import and DirectQuery tables (like you can in Power BI Desktop); am I missing something here?

      Additionally, I cannot simply start a “Blank Query” in Get Data (Power Query), and if I modify an existing table query, or duplicate and modify it with my native SQL query, it simply seems to get ignored and will not be added to the model.

      Any advice on these, Paul?

    2. Hi Duncan, unfortunately hybrid models, dual-mode tables, aggregations and composite models are all features that are exclusive to Power BI Desktop and aren’t supported by the SSDT designer. This wasn’t clear to me at first, but I have clarification from the product team and this is the reality right now. Eventually, we should have support for these new features in third-party tools like Tabular Editor, but likely not in SSDT.

  15. Paul, thanks for a great article. I was interested in your observation “The ability to use the endpoint to query one data model from another enables some very compelling composite model scenarios”

    Does this mean that the DAX in one cube could query data in another cube?

    1. John, yes. A query in one data model (a published PBIX file) can run a DAX query that imports data from another published dataset. The query results are cached, so it is not a live connection to the second data model. I am doing this in a project for a client and it is working nicely. Later this year we are likely to have live composite models, but this is a good option when you just need to cache results or copy a table from another data model.

  16. Hi Paul – Excellent blog.
    I have used the XMLA endpoint to connect Profiler and trace queries, but the important takeaway from this blog is using XMLA write to deploy an SSAS project onto a PBI workspace and to import the dataset back into VS.

    Questions:

    1) I have used the XMLA endpoint within PBI Desktop to combine a local data source like a CSV with data from a PBI dataset, using the XMLA endpoint to connect to the dataset. I’m guessing this is the current workaround for the composite models feature being released in September 2020? Do you have any more info about the composite model feature and how it will be different from this?

    2) Do you recommend using Visual Studio to begin creating the SSAS model as a starting point, to then be deployed to a PBI workspace? An incompatible-feature list would be good to have… for example, the roles that you mentioned: they need to be recreated?

    3) For a dataset having 50 dataflows with 50 entities (1 entity per dataflow) with DirectQuery or a live connection, will the usual processing options like process data + process recalc work with the XMLA endpoint?

    4) I suppose you don’t plan on using release pipelines for deployments if VS is being used?

    Thanks

    1. Rakesh, these are great questions. I’ll comment on them in the same order:
      1) Using the XMLA endpoint, like any connection to Analysis Services, supports a live connection only when it is the only connection in the model. Currently, in a composite model (when other query connections exist), only import mode is supported. This means that DAX or MDX queries issued to SSAS, AAS or a published dataset via the XMLA endpoint will create a copy of that data in the new model. The product teams are working on adding support for DirectQuery connections to SSAS, which is mentioned in the Power BI product roadmap: https://docs.microsoft.com/en-us/power-platform-release-plan/2020wave1/business-intelligence/planned-features

      2) With the ability to migrate from a .PBIX to .BIM (Desktop to Visual Studio), I don’t think it is necessary to start with Visual Studio. In many ways, Power BI Desktop is an easier tool and I prefer to start there. In a formal IT project, I would probably start with Visual Studio. I agree that a compatibility list would be great to have.

      3) I have to admit that I’m not certain about cross-compatibility with dataflows. As far as I know, using dataflows doesn’t diminish the ability to partition and process a model but I’m not certain and actually exploring this question for a client at the moment. I’ll be interested in your experience. Disregarding the dataflows question, you have all the processing options when using the XMLA endpoint.

      4) Release pipelines manage the life cycle of workspaces and workspace apps that contain reports and dashboards in addition to datasets. The XMLA endpoint for dataset development & deployment could be used either WITH or INSTEAD OF datasets managed in a deployment pipeline. This is a compelling question and I think it will take some time to sort out the best practice.
