BISM–The BI Semantic Model… First Impressions

The public pre-release (CTP3) of SQL Server Denali has been available for about a month now, and I have been putting much of my energy into working with the new BI Semantic Model, “BISM”.  In this short post, I’ll give my first impressions of this new technology, and I plan to follow up with a more in-depth review of its capabilities, performance and design experience.

The Microsoft product teams and marketing groups are still defining all the names and terminology for these new widgets, so you’re going to hear new words, and some of the old ones are likely to take on new meaning.  The first change is that what they had been referring to as “BISM” in earlier pre-release versions is now being referred to as the “Tabular Model”.  I’ve heard from some reliable sources that the term “BISM” will be used to refer to all of the semantic data technologies, including OLAP cubes and PowerPivot.  It will be interesting to see how that plays out.  Regardless of what we call it, the new BISM or Tabular Model capabilities in SQL Server Denali are well-suited to large-scale, enterprise-class solutions.  Models are based on the same VertiPaq compression and storage technology employed in PowerPivot to cram lots of tabular data into an Excel workbook.  It will also be used to store structured data in the new “Apollo” column store in SQL Server Denali.

After loading a model with sample data, I loaded up some production data from a large retail company.  They had been experiencing query performance problems with distinct count measures in SSAS cubes; their MDX queries running on production servers were taking several minutes, and in some cases queries wouldn’t run at all.  I loaded 115 million fact rows, 12 million products and 1,700 store locations, and then duplicated one of their conditional distinct count calculations using DAX functions.  It took a little more than an hour to load all the data into the model.  The query time in Excel was about 5 to 10 seconds for each query operation.  The model used about 4 GB of RAM once loaded, but loading or “importing” the model (the rough equivalent of processing a cube) would take up to 8 or 9 GB of RAM.  Running on a development server with 16 GB of RAM, with both the relational engine and the VertiPaq SSAS instance on the same hardware, I had to throttle SQL Server to use no more than 6 GB of RAM; otherwise, the model import would run out of memory and fail.  After finding this balance, I found the model easy to manage and pretty snappy to query.
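For readers curious what a conditional distinct count looks like in DAX, here is a minimal sketch using CALCULATE and DISTINCTCOUNT. The table and column names (Sales, ProductKey, SalesAmount) and the filter threshold are hypothetical placeholders, not the client’s actual schema or business rule:

```dax
-- Hypothetical measure: count distinct products, but only over fact rows
-- that satisfy a condition. CALCULATE applies the filter to the Sales
-- table before DISTINCTCOUNT evaluates.
Qualifying Product Count :=
CALCULATE (
    DISTINCTCOUNT ( Sales[ProductKey] ),
    Sales[SalesAmount] > 100
)
```

Because the measure evaluates against VertiPaq’s in-memory column store rather than pre-aggregated cube partitions, this style of calculation avoids the distinct-count scan behavior that was hurting the MDX queries.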

Great Performance

The next experiment was a set of bank loan data.  I only loaded about 500 thousand fact rows but this was a very wide table with over 200 columns.  I also loaded about 20 dimension/lookup tables and defined star schema relationships in the model designer.  A couple of these tables had a few thousand rows and the others were very small.  The data load, running in a VM on my laptop with 12 GB of RAM, took 2-3 minutes and the query times were a second or two.  The memory usage was negligible. I was quite pleased with my experience.

Designer Usability Issues

I found some usability issues with the model designer in CTP3.  All of the tables in my model had columns that needed to be renamed or hidden.  As a best practice, I would create views to hide or alias table columns, but I just wanted to see how much of this I could do in the model designer.  For the small tables, this was no problem at all.  In the fact table with 200+ columns, making these changes was very slow and tedious.  If I selected a range of columns and chose the context menu option to hide them, it could take up to a few minutes.  I could only select multiple columns in a range using the Shift key and could not use Ctrl to add individual columns to the selection.  In the graphical relationship modeler, moving a table can take several seconds.  When any changes are made, all of the relationship lines are immediately redrawn, which takes time and a lot of CPU and video horsepower.  In many cases, the screen would be repositioned after moving an object, making this a frustrating user experience.  I’ve posted two bugs to the Microsoft Connect site.  Please add your vote!

Paul Turley

Microsoft Data Platform MVP, Principal Consultant for 3Cloud Solutions Specializing in Business Intelligence, SQL Server solutions, Power BI, Analysis Services & Reporting Services.

