We’re on our way home from Lisbon, Portugal, after speaking at SQL Saturday #267. Having had the pleasure of attending and speaking at quite a few SQL Saturday events over the years, I’ve never seen a group of event organizers work so tirelessly or with as much passion and dedication. Thanks to Niko Neugebauer, Vitor Pombeiro, Paulo Matos, Quilson Antunes, Murilo Miranda, André Batista and Paulo Borges for the late nights and long hours you all invested to make this a very special SQL Saturday. The event was a great success, as was the special day of sightseeing for all the speakers the day afterward. After recruiting an impressive list of speakers from all over the globe, these volunteers went well beyond the call of duty to chauffeur speakers from the airport to hotels, to the event venues, and around the city. It was quite a treat. Thank you!