Power BI Desktop Versions Demystified: Part 1, Power BI Desktop and Power BI Desktop RS

If you are a Power BI power user, you may have wondered: how many versions of Power BI Desktop are there? The quick answer is: it depends!

Depending on your organisation's preferences, data governance requirements, and the platforms you intend to use for report deployment, you may use either Power BI Desktop, the "standard" version, or Power BI Desktop RS (Report Server). Power BI Desktop has variations tailored to meet specific needs, such as cloud-based analytics or on-premises reporting. While many users only ever encounter the standard version, there is another important variant for specialised scenarios.

Power BI Desktop comes in two primary versions:

  1. Power BI Desktop:
    This is the standard version most users rely on. It's the go-to tool for transforming data, creating semantic models, and building interactive reports. This version is designed to integrate seamlessly with the Power BI Service hosted on Microsoft Fabric, enabling cloud-based sharing, collaboration, and advanced features such as Direct Lake, AI-driven insights, and more. Regular updates ensure that this version includes the latest features and innovations, such as new Power Query and DAX functions, enhanced visuals, and cutting-edge integrations.
  2. Power BI Desktop RS (Report Server):
    This is a specialised version of Power BI Desktop designed to work exclusively with Power BI Report Server, a locally hosted reporting platform. It is tailored for organisations that prefer to keep their data and reports on-premises for regulatory, security, or strategic reasons, avoiding reliance on cloud services like the Power BI Service on Microsoft Fabric. Although the two versions look nearly identical in functionality, they serve distinct purposes. Power BI Desktop RS is specifically aligned with the capabilities of Power BI Report Server, supporting features available up to the latest release cycle of the server. For instance, Power BI Desktop RS updates are less frequent (typically released every few months, in line with Power BI Report Server's update schedule), making it slightly behind the standard version in terms of cutting-edge features. However, it ensures stability and compatibility for on-premises deployments.
Continue reading “Power BI Desktop Versions Demystified: Part 1, Power BI Desktop and Power BI Desktop RS”

Microsoft Fabric: Troubleshooting Query Parameters in Published Semantic Models

Power Query is a powerful tool within the Microsoft Fabric environment, enabling users to manage data sources and transform data efficiently. However, a common issue you may face is that after publishing the Semantic Model, the Power Query parameters either do not appear or are greyed out, making them non-editable. In this post and its accompanying YouTube video, I’ll walk you through the steps to diagnose and fix these problems, ensuring that your parameters work as expected in your published semantic models.

Why Do Power Query Parameters Become Unavailable?

There are a few reasons why your Power Query parameters might not appear or be editable after you’ve published your report to Microsoft Fabric. These issues generally relate to either the way the parameters are set up within Power Query in Power BI Desktop or how they interact with the data sources.

Common Cause and Fix

1. Parameter Data Type in Power Query

One of the most common reasons your parameters might be greyed out or non-editable is the parameters' data types defined in Power Query within Power BI Desktop. If a parameter's type is Any, it either won't show up in the published semantic model or appears as read-only (greyed out). The fix is easy:
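
For illustration only, here is a minimal sketch of how a parameter looks in Power Query's Advanced Editor; the parameter name and value are hypothetical. Declaring a concrete type such as Text (rather than Any) in the Type metadata is what keeps the parameter visible and editable after publishing:

// Hypothetical parameter "ServerName" as it appears in the Advanced Editor.
// Type = "Any" makes the parameter unavailable or read-only in the published semantic model;
// a concrete type such as "Text", "Number" or "DateTime" keeps it editable.
"myserver.database.windows.net" meta
[
    IsParameterQuery = true,
    Type = "Text",
    IsParameterQueryRequired = true
]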

Continue reading “Microsoft Fabric: Troubleshooting Query Parameters in Published Semantic Models”

Microsoft Fabric: Capacity Cost Management Part 3, Pause Capacity During Christmas with Azure Logic Apps

In the first blog post of this series, I explained that we can pause and resume a Microsoft Fabric capacity from the Azure portal. In the second blog and its accompanying YouTube video, I showed you how to automate the Pause and Resume actions in Azure Logic Apps so the capacity starts at 8:00 AM and stops at 4:00 PM. While I have already mentioned this in those posts, it is worth repeating that these methods only make sense for PAYG (Pay-As-You-Go) capacities, NOT for Reservation capacities. While the method works fine, you may need more fine-tuning.

As the holiday season approaches, managing operational costs becomes crucial for businesses leveraging Microsoft Fabric capacities. The challenge is to maintain efficiency while reducing unnecessary expenses, especially during Christmas, when business operations might slow down or pause entirely.

In this post and video, I will extend the discussions from my previous blog and demonstrate how to optimise your Azure Logic Apps to manage Microsoft Fabric capacity during the Christmas holidays.

Extending the Logic Apps Workflow

Existing Setup Recap

In earlier discussions, we explored using Azure Logic Apps to keep the Fabric capacity running from 8:00 AM to 4:00 PM on regular business days and to pause it afterwards. This setup ensures that we are not incurring costs when the capacity isn't needed, particularly from 4:00 PM to 8:00 AM the next morning and throughout the weekends. I encourage you to check out my previous post for more information. This is how the existing solution looks in Azure Logic Apps:

Automating Microsoft Fabric Capacity with Azure LogicApps

Incorporating Holiday Schedules

The key to extending this setup for the Christmas period lies in integrating specific holiday schedules into your existing workflows using the Workflow Definition Language, the expression language used in Azure Logic Apps and Microsoft Flow. The following expression determines whether the current date (in New Zealand Standard Time) falls within the period from December 25th of the current year to January 2nd of the next year:

and(
    greaterOrEquals(
        int(
            formatDateTime(
                convertFromUtc(
                    utcNow(),
                    'New Zealand Standard Time'
                ),
                'yyyyMMdd'
            )
        ),
        int(
            concat(
                formatDateTime(
                    utcNow(),
                    'yyyy'
                ),
                '1225'
            )
        )
    ),
    lessOrEquals(
        int(
            formatDateTime(
                convertFromUtc(
                    utcNow(),
                    'New Zealand Standard Time'
                ),
                'yyyyMMdd'
            )
        ),
        int(
            concat(
                add(
                    int(
                        formatDateTime(
                            utcNow(),
                            'yyyy'
                        )
                    ),
                    1
                ),
                '0102'
            )
        )
    )
)

The following section explains how the expression works.

Continue reading “Microsoft Fabric: Capacity Cost Management Part 3, Pause Capacity During Christmas with Azure Logic Apps”

Incremental Refresh in Power BI, Part 3: Best Practices for Large Semantic Models

In the two previous posts of the Incremental Refresh in Power BI series, we learned what incremental refresh is, how to implement it, and the best practices for safely publishing semantic model changes to Microsoft Fabric (aka Power BI Service). This post focuses on a couple more best practices for implementing incremental refresh on large semantic models in Power BI.

Note

Since Microsoft first announced Microsoft Fabric in May 2023, Power BI has been a part of Microsoft Fabric. Hence, we use the term Microsoft Fabric throughout this post to refer to Power BI or the Power BI Service.

The Problem

Implementing incremental refresh in Power BI is usually straightforward if we carefully follow the implementation steps. However, in some real-world scenarios, following the implementation steps is not enough. In different parts of my latest book, Expert Data Modeling with Power BI, 2nd Edition, I emphasise that understanding business requirements is the key to every development project, and data modelling is no different. Let me explain this further in the context of implementing incremental data refresh.

Let's say we followed all the required implementation steps as well as the deployment best practices, and everything runs well in our development environment: the first data refresh takes longer, as we expected, all the partitions are created, and everything looks fine. So we deploy the solution to the production environment and refresh the semantic model. Our production data source holds substantially more data than the development data source, so the data refresh takes far longer. We wait a couple of hours and leave it to run overnight. The next day we find out that the first refresh failed. Typical reasons for the first data refresh failing are timeout, out-of-resources, or out-of-memory errors. This can happen regardless of your licensing plan, even on Power BI Premium capacities.

Another issue usually happens during development. Many development teams try to keep their development data source's size as close as possible to their production data source. And… NO, I am NOT suggesting using the production data source for development, though you may be tempted to do so. You set one month's worth of data using the RangeStart and RangeEnd parameters, just to find out that the data source actually has hundreds of millions of rows in a month. Now the PBIX file is so large that you cannot even save it on your local machine.

This post provides some best practices. Some of the practices it focuses on require implementation work; to keep this post at an optimal length, I will save those implementations for future posts. With that in mind, let's begin.

Best Practices

So far, we have scratched the surface of some common challenges that we may face if we do not pay attention to the requirements and the size of the data being loaded into the data model. The good news is that this post explores a couple of good practices that guarantee a smoother and more controlled implementation, avoiding data refresh issues as much as possible. Indeed, there might still be cases where we follow all the best practices and still face challenges.

Note

While incremental refresh is available on Power BI Pro semantic models, the restrictions on parallelism and the lack of an XMLA endpoint might be a deal breaker in many scenarios. Therefore, many of the techniques and best practices discussed in this post require a premium semantic model backed by either Premium Per User (PPU), a Power BI capacity (P/A/EM), or a Fabric capacity.

The next few sections explain some best practices to mitigate the risks of facing difficult challenges down the road.

Practice 1: Investigate the data source in terms of its complexity and size

This one sounds easy, but it is not. It is necessary to know what kind of beast we are dealing with. If you have access to the pre-production or production data source, it is good to know how much data will be loaded into the semantic model. Let's say the source table contains 400 million rows of data for the past two years. Quick maths suggests that, on average, we will have more than 16 million rows per month. While these are just hypothetical numbers, you may have even larger data sources. So having some estimation of the data source's size and growth is always helpful for taking the next steps more thoroughly.
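
One way to get those numbers is to run a quick profiling query against the source before building the model. The following is a rough Power Query sketch, assuming a hypothetical SQL Server source ("my-server.database.windows.net", "SalesDB") and a hypothetical dbo.FactSales table with a TransactionDate column; swap in your own names and, for very large tables, prefer running the equivalent GROUP BY directly on the source:

let
    // Hypothetical source; replace the server, database, schema and table names with your own
    Source = Sql.Database("my-server.database.windows.net", "SalesDB"),
    FactSales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    // Tag each row with its calendar month (e.g. "202405")
    AddMonth = Table.AddColumn(FactSales, "YearMonth", each Date.ToText(Date.From([TransactionDate]), "yyyyMM"), type text),
    // Count rows per month to estimate average monthly volume and growth
    RowsPerMonth = Table.Group(AddMonth, {"YearMonth"}, {{"RowCount", each Table.RowCount(_), Int64.Type}})
in
    RowsPerMonth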

Practice 2: Keep the date range between the RangeStart and RangeEnd small

Continuing from the previous practice, if we deal with fairly large data sources, then waiting for millions of rows to be loaded into the data model at development time doesn't make much sense. So, depending on the numbers you get from the previous point, select a date range that is small enough to let you continue with your development without having to wait a long time for the data to load into the model with every single change in the Power Query layer. Remember, the date range selected between RangeStart and RangeEnd does NOT affect the creation of the partitions on Microsoft Fabric after publishing. So there wouldn't be any issues if you chose the values of RangeStart and RangeEnd to be on the same day or even at the exact same time. One important point to remember is that we cannot change the values of the RangeStart and RangeEnd parameters after publishing the model to Microsoft Fabric.
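
To illustrate, here is a minimal Power Query sketch of the standard incremental refresh filter pattern with a deliberately narrow development window; the source, table, and column names (dbo.FactSales, OrderDateTime) are hypothetical, and the parameter values shown span a single day:

// RangeStart and RangeEnd are DateTime parameters, defined for example (in the Advanced Editor) as:
//   #datetime(2024, 1, 1, 0, 0, 0) meta [IsParameterQuery = true, Type = "DateTime", IsParameterQueryRequired = true]
//   #datetime(2024, 1, 2, 0, 0, 0) meta [IsParameterQuery = true, Type = "DateTime", IsParameterQueryRequired = true]
let
    Source = Sql.Database("my-server.database.windows.net", "SalesDB"),   // hypothetical source
    FactSales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],
    // Standard incremental refresh filter: keep rows where RangeStart <= OrderDateTime < RangeEnd
    FilteredRows = Table.SelectRows(FactSales, each [OrderDateTime] >= RangeStart and [OrderDateTime] < RangeEnd)
in
    FilteredRows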

Continue reading “Incremental Refresh in Power BI, Part 3: Best Practices for Large Semantic Models”