Semantic Models for Humans and Robots: Enabling Copilot for Self-service Reporting

How do Copilot and agentic AI change the way business users will interact with organizational data? Is the platform ready, and are we ready to embrace it? AI agents and assistants are part of our daily lives. LLM-driven agents like Alexa, Siri, Gemini and Grok provide entertainment, perform requested tasks and answer questions with varying degrees of accuracy and reliability. Sometimes they misunderstand and provide out-of-left-field responses to simple questions. I might ask, “Show me the best practices for my model,” and get: “Of course! Here’s a detailed guide on runway walking techniques and how to maintain a confident posture during a fashion show.”

This series of posts will focus on how to prepare Microsoft Fabric and Power BI semantic models to support effective self-service reporting and analysis using Copilot.

Using Tableau with Microsoft Fabric & RLS

I’m a big fan of Power BI but that doesn’t mean that I’m not open to using different reporting and analytical tools in an enterprise modern data platform. People love Tableau with its long history as a leading visual dashboard and report design tool. Is Tableau an effective dashboarding and analytic reporting tool for a solution built using Microsoft Fabric? As I investigated this question for a new consulting client, my goal was to put aside any product prejudice I might have and approach the question with as little bias as possible.
The purpose of this article is not to compare Tableau to Power BI, nor to expound on the strengths or perceived weaknesses of either product, but to share my experience and learnings about using Tableau with Microsoft Fabric and Power BI semantic models. I will demonstrate how to use Tableau with a Direct Lake semantic model and with a large Import mode semantic model, and how effectively Tableau works with semantic model-based row-level security (RLS).

Preparing Power BI and Fabric for AI & Copilot

It is no secret that the AI revolution has started. Nearly every modern business application has an agent, a copilot…

Setting up a GitHub Repo and Power BI Project

Both Azure DevOps and GitHub are supported Git hosts for Power BI and Fabric workspace integration. I will demonstrate using…

Delivering Enterprise and Self-service Power BI with Microsoft Fabric

The term “architecture” is more commonly used in the realm of data engineering and data warehouse project work, but the concept applies to BI and analytic reporting projects of all sizes.

Like the architecture of a building, a complete Business Intelligence architecture contains the foundation and structure of your solution. Using the building analogy, a data platform can take on many forms, like a single-story cottage, a sprawling university campus or a towering skyscraper. For the data platform, the foundation is the selection of source data that are shaped, cleansed and transformed for reporting and analysis.

Comparing query design efficiencies in Power BI, with text files, lakehouse & database tables

I wanted to share the results of a few experiments I recently conducted with one of my favorite sets of sample data. This will take at least two blog posts to cover, but I will summarize them here:

Compare data load & transformations with CSV files vs a Fabric lakehouse using the SQL Server connector:
Loading 20 million fact rows from CSV files vs a Fabric lakehouse, using Power Query.
Same comparison with deployed Power BI model.

Comparing Fabric data transformation options & performance:
Loading & transforming the same data with Power BI Desktop, Fabric Gen2 dataflows, Fabric pipelines and Spark notebooks.

Comparing semantic model performance in Fabric and Power BI:
Report & semantic model performance comparing the same data in Import mode, DirectQuery and Direct Lake.
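Much of the CSV-versus-lakehouse gap comes down to parse cost: every cell in a text file must be tokenized and type-converted at load time, while columnar formats like Delta-parquet store already-typed values. As a rough, stdlib-only illustration of that principle (synthetic data and a simple packed-binary file standing in for the 20-million-row fact table and parquet; not the actual experiment from these posts):

```python
import csv
import os
import struct
import tempfile
import time

N = 200_000  # small synthetic stand-in for a large fact table

tmp = tempfile.mkdtemp()
csv_path = os.path.join(tmp, "facts.csv")
bin_path = os.path.join(tmp, "facts.bin")

# Write the same (key, amount) facts as text and as packed binary.
with open(csv_path, "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["key", "amount"])
    for i in range(N):
        w.writerow([i, i * 0.25])

with open(bin_path, "wb") as f:
    for i in range(N):
        f.write(struct.pack("<id", i, i * 0.25))

# Text load: every cell is tokenized and type-converted.
t0 = time.perf_counter()
with open(csv_path, newline="") as f:
    r = csv.reader(f)
    next(r)  # skip header
    csv_total = sum(float(amount) for _key, amount in r)
csv_secs = time.perf_counter() - t0

# Binary load: fixed-width typed records, no string parsing.
rec = struct.Struct("<id")
t0 = time.perf_counter()
with open(bin_path, "rb") as f:
    data = f.read()
bin_total = sum(amount for _key, amount in rec.iter_unpack(data))
bin_secs = time.perf_counter() - t0

print(f"CSV    load: {csv_secs:.3f}s  total={csv_total:,.2f}")
print(f"binary load: {bin_secs:.3f}s  total={bin_total:,.2f}")
```

Both paths produce the same total; the binary path typically loads noticeably faster because it skips tokenizing and converting text, which is the same effect, at a much larger scale, behind loading from a lakehouse instead of CSV files.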

Continuous Delivery & Version Control for Power BI

For the business intelligence professional, DevOps can be a confusing topic because it intersects with many disciplines and philosophies. I’m hopeful that a bit of reflection on my own experience over the years provides some valuable insight. I have seen DevOps and version control implemented on dozens and perhaps hundreds of projects, with different degrees of sophistication and control.

If you work in a software development environment where DevOps is practiced as part of your team’s development culture, you know that DevOps truly is a holistic methodology whose practices can be very extensive and regimented. Software development tends to be a linear process, whereas data and BI projects are more iterative in nature. Frankly, DevOps purists can be downright militant about enforcing all the rules, which many BI specialists tend to skirt in an effort to move quickly.

If you are a business intelligence analyst or developer and suddenly find yourself working on a team where DevOps is practiced, you will likely find the process to be more strict and less flexible than typical ad hoc BI development. The key is to strike the right balance for your organization’s needs.

There are certainly flavors of DevOps that seem over-engineered and counter-productive. DevOps practices exist because they address critical needs in a software development environment, but you should find the right balance for your organization’s project needs. Be mindful that the very purpose of business intelligence is to deliver reporting and insights directly to business users, which entails quick iterations through the entire process, from requirement refinement to report delivery, with several steps in between.

Direct Lake memory: hotness, popularity & column eviction

I just read that the Miss Universe contestant from Panama was evicted from the Miss Universe pageant. I don’t know…

Doing Power BI the Right Way in 2025

I work with hundreds of consulting clients who go through the same cycles, having the same experiences, facing the same challenges, many making the same mistakes, and many learning some of the same lessons. The purpose of this series is to share those lessons with you.

Universal Best Practices

If we sum up Power BI best practices at the highest level, it is that all projects, regardless of scale, should adhere to the same general set of guidelines, categorically. Your Power BI project, no matter how small or large, should address every category in the following list. The specific design patterns and practices will vary significantly depending on scale and purpose. The following guideline categories will frame this and related future posts in this series:

Microsoft Fabric Project Advice: Getting into the Thick of It

Advice and lessons learned from working with consulting clients for nearly a year since Fabric was released: how to approach architecting solutions, choose from among Fabric assets and tools, get your organization ready for Fabric adoption, and develop, manage and deploy successful Fabric solutions.

Microsoft Fabric Roadmap & Feature Status Report

I created this report to summarize the release status for all Fabric features documented in the Microsoft Fabric Roadmap. The information is collected and updated frequently from the Fabric Roadmap hosted by Microsoft, and displayed in a convenient, single-page Power BI report. Each feature area on the report has links to the detailed documentation on the official roadmap site. For convenience, I’ve shortened the report path to TinyURL.com/FabricRoadmapReport.

Power BI Direct Lake and DirectQuery in the Age of Fabric

I just returned from the Microsoft Fabric Community Conference in Las Vegas. Over 4,000 attendees saw a lot of demos showing how to effortlessly build a modern data platform with petabytes of data in OneLake, and then ask Copilot to generate beautiful Power BI reports from semantic models that magically appear from data in a Fabric lakehouse. Is Direct Lake the silver-bullet solution that will finally deliver incredibly fast analytic reporting over huge volumes of data in any form, in real time? Will Direct Lake models replace Import mode models and solve the dreaded DirectQuery performance problems of the past? The answer is no, but Direct Lake can break some barriers. This post is a continuation of my previous post titled “Moving from Power BI to Microsoft Fabric”.

Direct Lake is a new semantic model storage mode introduced in Microsoft Fabric, available to enterprise customers using Power BI Premium and Fabric capacities. It is an extension of the Analysis Services VertiPaq in-memory analytic engine that reads data directly from the Delta-parquet storage files in a Fabric lakehouse or warehouse.
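Conceptually, Direct Lake loads columns into memory on demand the first time a query touches them, and cold columns can be paged out under memory pressure (the "column eviction" behavior mentioned elsewhere in this series). The sketch below is a deliberately simplified Python illustration of that idea; the names `ColumnCache`, `DELTA_STORE` and `budget`, and the LRU policy, are my own stand-ins, not VertiPaq's actual heuristics:

```python
from collections import OrderedDict

# Stand-in for columns persisted in Delta-parquet files on OneLake.
DELTA_STORE = {
    "SalesAmount": list(range(1000)),
    "OrderDate":   list(range(1000)),
    "CustomerKey": list(range(1000)),
}

class ColumnCache:
    """Toy model of on-demand column loading with LRU eviction."""

    def __init__(self, budget: int):
        self.budget = budget            # max resident columns ("memory")
        self.resident = OrderedDict()   # column name -> in-memory values

    def get(self, name: str) -> list:
        if name in self.resident:
            self.resident.move_to_end(name)   # touched again: now "hot"
            return self.resident[name]
        # Cache miss: bring the column from storage into memory.
        values = DELTA_STORE[name]
        self.resident[name] = values
        if len(self.resident) > self.budget:
            # Over budget: evict the least-recently-used ("coldest") column.
            self.resident.popitem(last=False)
        return values

cache = ColumnCache(budget=2)
cache.get("SalesAmount")     # loaded on first touch
cache.get("OrderDate")       # loaded on first touch
cache.get("SalesAmount")     # cache hit; becomes hottest
cache.get("CustomerKey")     # loaded; OrderDate (coldest) is evicted
print(list(cache.resident))  # ['SalesAmount', 'CustomerKey']
```

The practical takeaway the toy model captures: first-touch queries pay a loading cost, frequently queried columns stay warm, and rarely used columns on a busy capacity may be re-loaded the next time a report needs them.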

Moving from Power BI to Microsoft Fabric

Fabric is here, but what does that mean if you are using Power BI? What do you need to know, and what, if anything, will you need to change if you are a Power BI report designer, developer or BI solution architect? What parts of Fabric should you use now, and how do you plan for the near-term future? As I write this in March of 2024, I’m at the Microsoft MVP Summit on the Microsoft campus in Redmond, Washington, learning about what the product teams will be working on over the next year or so. Fabric is center stage in every conversation and session. To say that Fabric has moved my cheese would be a gross understatement. I’ve been working with data and reporting solutions for about 30 years and have seen many products come and go. Everything I knew about working with databases, data warehouses, and transforming and reporting on data has changed recently, but that doesn’t mean everyone using Power BI must stop what they are doing and adapt to these changes. The core product is unchanged. Power BI still works as it always has.

The introduction of Microsoft Fabric in various preview releases over the past two years has immersed me in the world of Spark, Python, Delta-parquet storage, lakehouses and medallion data warehouse architectures. These technologies, significantly different from the SQL Server suite of products I’ve known and loved for the past twenty years, represent a major shift in direction, forming the backbone of OneLake, Microsoft’s universal integrated data platform that hosts all the components comprising Fabric. Microsoft built all of Fabric on top of the existing Power BI service, so all of the data workloads live inside familiar workspaces, accessible through the Power BI web-based portal (now called the Fabric portal).

CI/CD & DevOps for Power BI… Are We There Yet?

In my view, projects and teams of different sizes have different needs. I have described DevOps maturity as a pyramid, where most projects don’t require a sophisticated DevOps implementation and the most complex solutions do. DevOps maturity is a progression, but only for projects of a certain scale. One of the following options might simply be the best fit for a particular project.
Unless you are throwing together a simple Power BI report that you don’t plan to maintain or extend, even the most basic managed project should start with a PBIX file or Power BI Project folder stored in a shared, cloud-backed storage location.
DevOps isn’t a requirement for all projects, but version control and shared file storage definitely are.

Power BI for the Enterprise Precon: Atlanta Feb 9th

Please come to the Atlanta SQL Saturday BI Edition in February, and join me for a full-day preconference event: Designing and Developing Enterprise-scale Solutions with Power BI