Incremental Refresh in Power BI, Part 3: Best Practices for Large Semantic Models


In the two previous posts of the Incremental Refresh in Power BI series, we learned what incremental refresh is, how to implement it, and the best practices for safely publishing semantic model changes to Microsoft Fabric (aka Power BI Service). This post focuses on a couple more best practices for implementing incremental refresh on large semantic models in Power BI.

Note

Since Microsoft first announced Microsoft Fabric in May 2023, Power BI has been a part of Microsoft Fabric. Hence, we use the term Microsoft Fabric throughout this post to refer to Power BI or the Power BI Service.

The Problem

Implementing incremental refresh in Power BI is usually straightforward if we carefully follow the implementation steps. However, in some real-world scenarios, following the implementation steps is not enough. In different parts of my latest book, Expert Data Modeling with Power BI, 2nd Edition, I emphasise that understanding business requirements is the key to every development project, and data modelling is no different. Let me explain this further in the context of incremental data refresh implementation.

Let’s say we followed all the required implementation steps as well as the deployment best practices, and everything runs pretty well in our development environment; the first data refresh takes longer, as we expected, all the partitions are created, and everything looks fine. So, we deploy the solution to the production environment and refresh the semantic model. Our production data source holds substantially more data than the development data source, so the data refresh takes far too long. We wait a couple of hours and then leave it to run overnight. The next day we find out that the first refresh failed. Some of the possibilities that lead the first data refresh to fail are Timeout, Out of resources, or Out of memory errors. This can happen regardless of your licensing plan, even on Power BI Premium capacities.

Another issue usually surfaces during development. Many development teams try to keep their development data source’s size as close as possible to their production data source. And… NO, I am NOT suggesting using the production data source for development, though you may be tempted to do so. You set one month’s worth of data using the RangeStart and RangeEnd parameters, just to find out that the data source actually has hundreds of millions of rows in a month. Now, your PBIX file is so large that you cannot even save it on your local machine.

This post provides some best practices. Some of the practices it covers require implementation; to keep this post at an optimal length, I will save the implementations for future posts. With that in mind, let’s begin.

Best Practices

So far, we have scratched the surface of some common challenges we may face if we do not pay attention to the requirements and the size of the data being loaded into the data model. The good news is that this post explores a couple of good practices that guarantee a smoother and more controlled implementation, avoiding data refresh issues as much as possible. Indeed, there might still be cases where we follow all the best practices and still face challenges.

Note

While incremental refresh is available on Power BI Pro semantic models, the restrictions on parallelism and the lack of an XMLA endpoint might be a deal breaker in many scenarios. So many of the techniques and best practices discussed in this post require a premium semantic model backed by either Premium Per User (PPU), a Power BI capacity (P/A/EM), or a Fabric capacity.

The next few sections explain some best practices to mitigate the risks of facing difficult challenges down the road.

Practice 1: Investigate the data source in terms of its complexity and size

This one sounds easy, but it is not. It is necessary to know what kind of beast we are dealing with. If you have access to the pre-production or production data source, it is good to know how much data will be loaded into the semantic model. Let’s say the source table contains 400 million rows of data for the past 2 years. Quick math suggests that, on average, we will have more than 16 million rows per month. While these are just hypothetical numbers, you may have even larger data sources. So having an estimate of the data source’s size and growth is always helpful for taking the next steps more thoroughly.
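To make that estimate concrete, here is a minimal Python sketch of the same back-of-the-envelope calculation. All figures, including the growth rate, are hypothetical assumptions; substitute the numbers you get from your own data source.

```python
# Back-of-the-envelope sizing for a table that is a candidate for incremental refresh.
# All figures are hypothetical; replace them with numbers from your own data source.

total_rows = 400_000_000     # rows currently in the source table
history_months = 24          # the table covers the past 2 years
monthly_growth_rate = 0.02   # assumed 2% month-over-month growth

rows_per_month = total_rows / history_months
rows_per_day = rows_per_month / 30

print(f"Average rows per month: {rows_per_month:,.0f}")  # ~16.7 million
print(f"Average rows per day:   {rows_per_day:,.0f}")    # ~0.6 million

# Rough projection of the monthly volume a year from now, assuming steady growth.
projected_monthly_rows = rows_per_month * (1 + monthly_growth_rate) ** 12
print(f"Projected rows per month in 12 months: {projected_monthly_rows:,.0f}")
```

Even a rough projection like this helps when deciding how small the development date range should be (the next practice) and how much the partitions may grow over time.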

Practice 2: Keep the date range between the RangeStart and RangeEnd small

Continuing from the previous practice, if we deal with fairly large data sources, then waiting for millions of rows to be loaded into the data model at development time doesn’t make much sense. So, depending on the numbers you get from the previous point, select a date range that is small enough to let you easily continue with your development without needing to wait a long time for the data to load into the model with every single change in the Power Query layer. Remember, the date range selected between RangeStart and RangeEnd does NOT affect the creation of the partitions on Microsoft Fabric after publishing. So there won’t be any issues if you choose the values of RangeStart and RangeEnd to be on the same day, or even at the exact same time. One important point to remember is that we cannot change the values of the RangeStart and RangeEnd parameters after publishing the model to Microsoft Fabric.

Continue reading “Incremental Refresh in Power BI, Part 3: Best Practices for Large Semantic Models”

Microsoft Fabric: Automating Fabric Capacity Scaling with Azure Logic Apps


In a previous post, I explained how to manage the capacity costs of a Fabric F capacity (under the Pay-As-You-Go pricing model) by using Logic Apps to suspend and resume it.

A customer who read my previous blog asked me, “Can we use a similar method to scale up and down before and after specific workloads?”. This blog post answers exactly that.

I want to make some important points clear before we dig deeper into the solution:

  • The method described in this post works with Fabric F SKUs under the Pay-As-You-Go pricing model.
  • If you have a Power BI Premium capacity, this method is not valid for your case. But you might be interested in the autoscale option for Power BI Premium capacities.
  • Depending on your current workload, scaling down may not work due to resource unavailability.
  • Depending on your workload, this method may take a while to complete.
  • You need to be either a Capacity Admin or a Fabric Admin to successfully implement this method.
  • This method works based on user authentication; however, you may want to use a Service Principal or Managed Identity, which requires more effort but could be a more desirable method in many scenarios.
  • This post explains a very basic scenario; you’re welcome to adapt it to your specific needs.
  • You can consider this post a continuation of the previous post. So if you are unsure that you correctly understand what this blog is trying to explain, I suggest you first read my previous post, where I explain the Logic Apps implementation in more detail.

The Problem

I have a Fabric F capacity and I want to scale it up to a higher tier during the peak time from 8 AM to 12 PM local time, then scale it back down to its original tier.

The Solution

There are many ways to do this, including using the Azure Resource Manager APIs, managing Azure resources with PowerShell, or using the Azure Resource Manager connector, which is available in Azure Logic Apps, Power Automate Premium, and Power Apps Premium. This post explores the use of the Azure Resource Manager connector in Azure Logic Apps. With that, let’s begin.
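Before building the workflow, it may help to see roughly what the underlying Azure Resource Manager call looks like when made directly. The following Python sketch is an illustration only, not the Logic Apps implementation this post builds; the resource path follows the Microsoft.Fabric/capacities provider, while the api-version, identifiers, SKU names, and token handling are placeholder assumptions you would need to verify against the current Azure REST API reference.

```python
import requests

# Hypothetical identifiers; replace with your own values.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
CAPACITY_NAME = "<fabric-capacity-name>"
API_VERSION = "2022-07-01-preview"  # assumed; verify against the Azure REST API reference
ACCESS_TOKEN = "<bearer-token>"     # an Azure AD token for https://management.azure.com/

CAPACITY_URL = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Fabric"
    f"/capacities/{CAPACITY_NAME}"
)
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "application/json"}


def scale_capacity(sku_name: str) -> None:
    """Patch the capacity resource with a new SKU, e.g. 'F2', 'F4', 'F8'."""
    body = {"sku": {"name": sku_name, "tier": "Fabric"}}
    response = requests.patch(
        CAPACITY_URL, params={"api-version": API_VERSION}, headers=HEADERS, json=body
    )
    response.raise_for_status()
    print(f"Scale request to {sku_name} accepted: HTTP {response.status_code}")


# Scale up before the peak window and back down afterwards.
scale_capacity("F8")  # e.g. just before 8 AM
scale_capacity("F2")  # e.g. just after 12 PM
```

The Logic Apps workflow described in the rest of this post achieves the same effect with the Azure Resource Manager connector on a schedule, without hosting any code ourselves.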

  1. On Azure Portal, search for Logic apps
  2. Select the Logic Apps service
Select Azure Logic Apps on the Azure Portal
  3. Click the Add button
  4. Pick a Subscription from the list
  5. Pick a Resource Group from the list or create a new one
  6. Enter the Logic App name
  7. Select the Region from the list
  8. Select No if you do not need to enable log analytics
  9. Select Consumption as the Plan type
  10. Click the Review + create button
Create new Logic Apps service on Azure Portal
  11. Click the Create button
Confirm creating new Logic Apps service
Continue reading “Microsoft Fabric: Automating Fabric Capacity Scaling with Azure Logic Apps”

Microsoft Fabric: Overcome Reaching the Maximum Number of Fabric Trial Capacities


If you are evaluating Microsoft Fabric and do not currently own a Premium capacity, chances are you’re using Microsoft Fabric Trial capacities. All Power BI users within an organisation, or specific security groups given the rights, can opt into Fabric Trial capacities. Therefore, you may already have several Fabric Trial capacities in your tenant. Your Fabric administrators can control exactly who can opt into the Fabric Trial capacities from the Fabric Admin Portal, under the Help and support settings section, by enabling the Users can try Microsoft Fabric paid features setting, as shown in the following image:

Enable Users can try Microsoft Fabric paid features for specific security groups via Fabric Admin Portal

The authorised users can then opt into Fabric Trial by following this process:

  1. Click the Account Manager on the top right corner of the page
  2. Click the Start trial button
  3. Click the Start trial button again
  4. Provide the required details
  5. Click the Extend my free trial button

The following image shows the preceding steps:

Start Fabric Free Trial

As you see, opting into Fabric Trial is simple, unless it isn’t!

There are cases where authorised users cannot start their Fabric Trial because their tenant has already exceeded the limit of available trial capacities. In that case, the users get the following message:

Continue reading “Microsoft Fabric: Overcome Reaching the Maximum Number of Fabric Trial Capacities”

Microsoft Fabric: Capacity Cost Management Part 2, Automate Pause/Resume Capacity with Azure Logic Apps


In the previous blog post, I explained Microsoft Fabric capacities, shedding light on diverse capacity options and how they influence data projects. We delved into Capacity Units (CUs), pricing nuances, and practical cost control methods, including manually scaling and pausing Fabric capacity. Now, we’re taking the next step in our Microsoft Fabric journey by exploring the possibility of automating the pause and resume process. In this blog post, we’ll unlock the secrets to seamlessly managing your Fabric Capacity with automation that helps us save time and resources while optimising the usage of data and analytics workloads.

Right off the bat, this is a rather long blog, so I have added a bonus section at the end for those who read it from beginning to end. With that, let’s dive in!

The Problem

As we learned in the previous blog post, one way to manage our Fabric capacity costs is to pause the capacity when it is not in use and resume it when needed. While this can help with cost management, it is a manual process and therefore prone to human error, which makes it impractical in the long run.

The Solution

A more practical solution is to automate a daily process that pauses and resumes our Fabric capacity. This can be done by calling the Azure Management APIs. Depending on our expertise, there are several ways to achieve the goal, such as running the APIs via PowerShell (scheduling the runs separately), running the APIs via Cloud Shell, creating a flow in Power Automate, or creating the workflow in Azure Logic Apps. I prefer the latter, so that is the method this blog post explains.
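For orientation, the sketch below shows what those Azure Management API calls could look like if made directly from Python: read the capacity’s state, then call the suspend or resume action. This is only an illustration of the calls the Logic Apps workflow will make on our behalf; the api-version, the state values, and all identifiers are placeholder assumptions to verify against the Azure REST API reference.

```python
import requests

# Hypothetical identifiers; replace with your own values.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
CAPACITY_NAME = "<fabric-capacity-name>"
API_VERSION = "2022-07-01-preview"  # assumed; verify against the Azure REST API reference
ACCESS_TOKEN = "<bearer-token>"     # an Azure AD token for https://management.azure.com/

BASE_URL = (
    f"https://management.azure.com/subscriptions/{SUBSCRIPTION_ID}"
    f"/resourceGroups/{RESOURCE_GROUP}/providers/Microsoft.Fabric"
    f"/capacities/{CAPACITY_NAME}"
)
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}


def get_capacity_state() -> str:
    """Read the capacity resource and return its state (assumed values: 'Active', 'Paused')."""
    resp = requests.get(BASE_URL, params={"api-version": API_VERSION}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json().get("properties", {}).get("state", "Unknown")


def set_capacity(action: str) -> None:
    """POST the 'suspend' or 'resume' action on the capacity."""
    resp = requests.post(
        f"{BASE_URL}/{action}", params={"api-version": API_VERSION}, headers=HEADERS
    )
    resp.raise_for_status()


# Resume the capacity if it is paused, pause it if it is running.
if get_capacity_state() == "Paused":
    set_capacity("resume")
else:
    set_capacity("suspend")
```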

Automating Pause and Resume Fabric Capacity with Azure Logic Apps

Here is the scenario: we are going to create an Azure Logic Apps workflow that automatically does the following:

  • Check the time of the day
  • If it is between 8 am and 4 pm:
    • Check the status of the Fabric capacity
    • If the capacity is paused, then resume it; otherwise do nothing
  • If it is after 4 pm or before 8 am:
    • Check the status of the Fabric capacity
    • If the capacity is resumed, then pause it; otherwise do nothing

Follow these steps to implement the scenario in Azure Logic Apps:

  1. Log in to the Azure Portal and search for “Logic App”
  2. Click the Logic App service
Finding Logic Apps on Azure Portal

This takes us to the Logic App service. If you already have existing Logic Apps workflows, they will appear here.

Continue reading “Microsoft Fabric: Capacity Cost Management Part 2, Automate Pause/Resume Capacity with Azure Logic Apps”