SSAS Tabular Models: The Good, the Bad, the Ugly & the Beautiful

After working on a couple of serious tabular model projects, I’ve been talking with other BI practitioners about their thoughts on tabular model technology, and I’m finding little consensus among these industry experts about which option to recommend for new BI projects.  At SolidQ, I work with some of the most experienced Business Intelligence experts in the industry, and at conferences and in discussion groups there has been a lot of debate about the choice between Analysis Services multidimensional models and the newer tabular semantic model technology introduced in SQL Server 2012.  There are pros and cons on either side, but I think there are some important factors to consider.  Among the most experienced Analysis Services consultants, some are primarily using and recommending tabular for most new projects, while others argue that tabular offers little significant value over the tried-and-true multidimensional cubes and MDX script solutions.

As is typical for a newer product, the usability experience in the model designer isn’t perfect.  In some ways it’s simply different and may take some getting used to, but in other ways there is clearly room for improvement.  The question now is: do the advantages of tabular over multidimensional outweigh the somewhat rocky design experience?  I’m not taking sides in this comparison but merely offering an objective assessment of the state of the tabular model option in SQL Server 2012 Analysis Services, Service Pack 1.  I expect this product to go through changes, so for future reference, this information applies to the released product at the time of this posting in June of 2013.

 

The Good

  • The core VertiPaq (or xVelocity) query and aggregation engine is stable & reliable.  Originally developed about five years ago and released with PowerPivot for Excel and SharePoint in SQL Server 2008 R2 over three years ago, this technology has proven to be ready for serious use.

  • Under most conditions, for analytic reporting, data scanned and aggregated from an in-memory data structure performs faster than other conventional options, including relational and multidimensional storage.

  • Tabular can be less complex than multidimensional (OLAP) SSAS.  The core design and usage concepts are easier both for those who design models and for those who use them for analysis and reporting.

  • Tabular models can be easier & faster to implement because the model structure is simpler and there may be fewer steps in the design process.

  • DAX, the core calculation expression language for tabular models, is fairly easy to learn.  Fundamental DAX expression concepts can be easier to understand than the equivalent MDX used in multidimensional modeling and calculations; the short sketch below illustrates the point.
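
    As a simple illustration, here are two DAX measure definitions of the kind a new tabular developer writes on day one.  This is only a sketch; the 'Internet Sales' and 'Date' table and column names are hypothetical and would come from your own model.

      -- Base measure: aggregates a column over whatever filter context the report applies
      Total Sales := SUM ( 'Internet Sales'[SalesAmount] )

      -- CALCULATE overrides the filter context, here restricting the base measure to one year;
      -- the equivalent multidimensional solution would typically be a scoped MDX calculation
      Sales 2012 := CALCULATE ( [Total Sales], 'Date'[CalendarYear] = 2012 )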

 

The Bad 

  • Feature for feature, tabular can be hard to justify when compared to multidimensional.  Traditional SSAS cubes still have more capabilities than tabular models, and for someone who already has OLAP skills and background, tabular design is a new learning curve.

  • PowerPivot is a version 2 product.  As such, it’s simple and fairly reliable in the majority of design scenarios.  In some, mostly rare, cases, its error handling and debugging capabilities aren’t as mature and robust as those of more tenured technologies like SSAS multidimensional or relational.

  • Tabular SSAS is a version 1 product.  As such, it has a list of well-known design annoyances and features that could (and should) improve in the next product go-round.

  • The recommended design patterns & best practices, both from Microsoft development and support teams and from members of the expert practitioner community,  are still emerging.                         

  • One model.bim file = one developer working at a time.  In a departure from the long-time SSAS project development environment, where every object (such as data sources, dimensions and cubes) was defined in an isolated file managed by the project, SSAS tabular manages all of the database objects in a single model definition file.  The model designer treats the model as more of a black box than a solution comprised of independent objects.  However, the fact is that most of the same objects we work with in multidimensional/cube projects are all defined as XML elements in this file.  It can be opened and properties manipulated, and there are a growing number of third-party tools to provide enhancements.  Regardless, it is one big project file that would need to be checked out in source code management as a single project asset.

  • The tabular SSAS support community is thriving but still small.  A core group of trusted professionals from all over the world are the loudest voices right now.  They promote tabular solutions and provide active and collective support for one another.

  • The DAX expression editor in PowerPivot & the Visual Studio-based SSDT designer is quirky.  You have to know its strengths and weaknesses and be willing to use it in that context.  It attempts to assist with code completion but, in the end, it doubles up brackets and duplicates, rather than replaces, the old code it suggests.  No doubt the experience will get better as enhancements are developed and released, but we must live with a product today that is useful and reliable a lot of the time yet leaves plenty of opportunity for improvement.

  • The entire tabular model must fit in memory.  There’s no caching, swapping or distributed processing option for large models.  This means that very large models need very large hardware, and there is currently no scale-out option to distribute workloads or provide failover.  Got a lot of data?  Get a lot of memory.

 

The Ugly

  • After you get into serious data with 30, 40 or 50 tables and some complexity to your model, the version 1.0 SSDT tabular model designer can be cumbersome and error-prone.  I’ve seen it stop responding and throw errors when there were no errors.  I will say that it’s helpful and reliable most of the time, but on occasion, when it falls down, I save and close my work, shut down Visual Studio altogether and then fire it back up.

  • My biggest peeve about the SSDT model designer is that all design work is performed while connected to the workspace database.  This means that every table and column rename, and every single property setting change, is written to the model workspace database on the local SSAS tabular server, and only after the change has been committed is control given back to the model designer.

  • Some key SSAS features are not currently supported.  These include things like parent-child hierarchies, many-to-many relationships, cube actions and display folders.  Actually, this is not entirely true; actions and display folders can be added using after-market tools (like BIDS Helper, available in the CodePlex library) and by hand-writing the XMLA script, but they are not currently supported through the SSDT model designer.  There is simply a category of features that didn’t find their way into this first version of the product.  There are work-arounds and methods to enable these capabilities (the parent-child sketch below is one example), but they’re not supported, or at least not to the same degree as they are in multidimensional SSAS.
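
    To give a sense of what these work-arounds look like, the usual pattern for flattening a parent-child hierarchy in a tabular model is a set of DAX calculated columns that are then assembled into an ordinary user hierarchy.  This is only a sketch; the Employee table and its column names are hypothetical.

      -- Calculated column: builds a delimited key path from the top ancestor down to this row
      HierarchyPath = PATH ( Employee[EmployeeKey], Employee[ParentEmployeeKey] )

      -- Calculated column: resolves the member name at the first (top) level of the path;
      -- repeat with positions 2, 3, ... for as many levels as the hierarchy is expected to have
      Level1 = LOOKUPVALUE (
          Employee[FullName],
          Employee[EmployeeKey], PATHITEM ( Employee[HierarchyPath], 1, INTEGER )
      )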

 

The Beautiful

  • There is no doubt that in-memory, tabular model technology is the promise of the future.  It just makes sense.  Several vendors have come to the same conclusion and are developing products following this paradigm.  Oracle just made a big announcement about adding in-memory column store to their future 12C product. 

  • Data residing and processed in memory is faster to access than data residing on disk.  This is what VertiPaq does; whether implemented as PowerPivot, an SSAS tabular model or a SQL Server column store, it works efficiently and elegantly without the complexities and overhead of indexes, partitions, file groups and other techniques typically used to optimize on-disk data stores.

  • Even though tabular is fairly new, many useful & valuable features are supported today and work well.

  • PowerPivot models upgrade seamlessly to tabular models.  This provides a path for business users and IT professionals to author models in familiar tools (Excel or Visual Studio) and then promote them to a server-hosted environment.

  • Tabular models are managed and stored by SQL Server Analysis Services!  Although some components of the tabular engine and the designer are new and still have wrinkles to be ironed out, the core product is based on the solid and time-tested foundation of SSAS.  This means that many of the features not implemented now will be available in future builds.

  • Client applications that support SSAS multidimensional will also support tabular.  In fact, any application built to work with SSAS cubes will natively work with PowerPivot and tabular as if it were a cube.  This is because SSAS tabular uses the same data provider, which understands both MDX & DAX queries; the brief example below shows the same request expressed both ways.
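
    For example, a tabular model can be queried with DAX from a newer client or with MDX from a cube-aware client such as an Excel PivotTable, and the server answers both.  This is only a sketch; the measure, table and model names are hypothetical.

      -- A DAX query, as a DAX-speaking client would send it
      EVALUATE
      SUMMARIZE (
          'Internet Sales',
          'Date'[CalendarYear],
          "Total Sales", [Total Sales]
      )
      ORDER BY 'Date'[CalendarYear]

      -- Roughly the same result as an MDX client would request it:
      -- SELECT [Measures].[Total Sales] ON COLUMNS,
      --        [Date].[CalendarYear].MEMBERS ON ROWS
      -- FROM [Model]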

Paul Turley

Microsoft Data Platform MVP, Principal Consultant for 3Cloud Solutions Specializing in Business Intelligence, SQL Server solutions, Power BI, Analysis Services & Reporting Services.

10 thoughts on “SSAS Tabular Models: The Good, the Bad, the Ugly & the Beautiful”

  1. Here it is, the year 2022, and Tabular is still lagging behind Multi-dimensional. 
    For complex, really well-designed dimension and fact (measure) structures, Multi-dimensional is still superior. Sure, quick and dirty stuff may be faster in Tabular, but once they (the business) start to add requests, you’ll be looking for DAX equivalents of capabilities that are already present in Multi-dimensional.  And if you have a well-designed Multi-dimensional model, it will be faster.

    Chris Webb, a Microsoft staffer for Analysis Services, posted a question asking what is preventing users from switching to Tabular.
    If you are looking for a true comparison, read his blog:

    https://blog.crossjoin.co.uk/2020/07/12/migration-from-analysis-services-multidimensional-your-feedback-needed/

  2. In one word, if you try to implement any serious model in Excel PowerPivot (I’m talking about a dimension with 200,000 entities and a fact table that has about 300-500 million rows), just FORGET IT. If you try to develop any complex measures, Excel will give you enough grief to send you straight away to a laughing academy. Excel’s PowerPivot is BUGGY, to say the least. BUGGY. BUGGY. BUGGY. Like hell. And I’m talking about the latest version – 2016 (64-bit, of course). I’m saying this not to deter but to warn. I did try to create a model with some pretty sophisticated and complex measures for a rather big business… Excel gave me nothing but grief 🙁 Errors all over the place, even in places where you could logically prove there should be no error at all. I even found errors that later were confirmed by Microsoft to have been real bugs. So, if you have any serious work to do with tabular cubes/pivot tables, stay away from Excel. I’m now trying to implement the model in SSAS 2016. I just strongly hope it’ll be more robust… One more thing – this time about DAX. DAX is simple but it’s not easy. Let it sink in: simple but NOT EASY. When you try to work with it and create complex measures, you’ll feel the heat. DAX is nothing like any programming language you might have learned in the past. Don’t believe me? Try it yourself and create measures that behave well in all possible configurations of a pivot table… Hint: Learn what context transition is and how it works WELL.

  3. Thanks for your article, Paul. Are these concepts still the same today, or do you think the current version resolves some of the ugly and bad things you mention?

    1. Good question and it’s probably time for me to post an update. I do most of my new semantic model development using Tabular today. The newer version shipped with SQL Server 2016 is far superior to the previous version. The development experience has improved and the product is just generally better. One of the most significant barriers, the lack of many-to-many support, has also been addressed. There are still some features exclusive to multidimensional but I believe that Tabular is the best direction for most new projects.

  4. Great job Paul – We’ve been using Tabular and PowerPivot heavily for the past year and a half and it has been great.

    However, we now have a strict requirement to implement column level security and it is forcing us to switch to multidimensional. For us, this is a problem because our customers have come to expect the fast implementation times that can be achieved using Tabular.

    Column level security is not possible in tabular without splitting columns into different tables and then filtering rows of the table containing the columns you want to hide (this does not hide the column itself, only its content). This method is a hack and does not scale.

    Jen – I would add that our “show stopper” for Direct Query mode was that Power View is not supported as it only speaks DAX/MDX and direct query uses SQL. This has only affected one of our projects where low data latency was very important.

  5. Paul – Great post. The sizing mentioned for the tabular model designer is 30-50 tables. Are there also issues with the database size, or just with that one specific design tool? I’ve also seen other posts where developers complain about general query performance over a certain size, and I was just wondering if this occurred consistently at a certain size, or if it was related to size as a percentage of memory, or if it was just a random bug. Thanks in advance!

  6. Excellent summary Paul – no doubts in my mind that the awesome SSAS engineering team is working hard to improve it. I’d love your two cents on SSAS Tabular Direct Query with Columnstore Index if you have worked with it. I have heard mixed experiences; usually groups do not use Direct Query after testing, but the concept seems promising longer term. It would also be nicer if additional data sources were added to it.
