Slowly Changing Dimension (SCD) in Power BI, Part 2, Implementing SCD 1

I explained what SCD means in a Business Intelligence solution in my previous post. We also discussed that while we do not expect to handle SCD2 in a Power BI implementation, we can handle scenarios similar to SCD1. In this post, I explain how to do so.

Scenario

We have a retail company selling products. The company releases the list of products in Excel format, including list price and dealer price, every year. The product list is released on the first day of July when the financial year starts. We have to implement a Power BI solution that keeps the latest product data to analyse the sales transactions. The following image shows the Product list for 2013:

Products List 2013

So each year, we receive an Excel file similar to the one shown above. The files are stored on a SharePoint Online site.

Scenario Explained

As the previous post explains, an SCD1 always keeps the current data by updating the old data with the new data. So an ETL process reads the data from the source, identifies the existing data in the destination table, inserts the new rows to the destination, updates the existing rows, and deletes the removed rows.

Here is why our scenario is similar to SCD1, with one exception:

  • We do not actually update the data in the Excel files and do not create an ETL process to read the data from the Excel files, identify the changes and apply the changes to an intermediary Excel file
  • We must read the data from the source Excel files, keep the latest data while filtering out the old ones and load the data into the data model.

As you see, while we are taking a very different implementation approach, the results are very similar, with one exception: we do not delete any rows.

Implementation

Here is what we should do to achieve the goal:

  • We get the data in Power Query Editor using the SharePoint Folder connector
  • We combine the files
  • We use the ProductNumber column to identify the duplicated products
  • We use the Reporting Date column to identify the latest dates
  • We only keep the latest rows, as shown in the sketch after this list
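
To make the last two steps more concrete, here is a minimal Power Query (M) sketch of the keep-the-latest-rows logic. It assumes the combined files land in a query named CombinedProducts (a hypothetical name) exposing the ProductNumber and Reporting Date columns; Table.Buffer preserves the sort order so that Table.Distinct keeps the newest row per product:

```
let
    Source = CombinedProducts,
    // Sort so the most recent Reporting Date appears first for every product
    SortedRows = Table.Sort(Source, {{"Reporting Date", Order.Descending}}),
    // Buffer the table to lock in the sort order before removing duplicates
    BufferedRows = Table.Buffer(SortedRows),
    // Table.Distinct keeps the first row per ProductNumber, i.e. the latest one
    LatestRows = Table.Distinct(BufferedRows, {"ProductNumber"})
in
    LatestRows
```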

Getting Data from SharePoint Online Folder

As we get the data from multiple files stored on SharePoint Online, we have to use the SharePoint Folder connector. Follow these steps:

  1. Log in to SharePoint Online, navigate to the site holding the Product list Excel files and copy the site URL from the browser
Getting SharePoint Online Site URL
  2. From Get Data in Power BI Desktop, select the SharePoint Folder connector
  3. Click Connect
Connecting to SharePoint Online Folder from Power BI
  4. Paste the Site URL copied in step 1
  5. Click OK
Connecting to SharePoint Online Folder from Power BI using the SharePoint Folder connector
  6. Click Transform Data
Transforming data in Power Query Editor
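
At this point, Power Query generates a query similar to the following minimal sketch. The site URL below is a hypothetical placeholder (use the URL copied in step 1), and the filter assumes the yearly product lists are the only .xlsx files in the library:

```
let
    // Connect to the SharePoint Online site (replace with the site URL from step 1)
    Source = SharePoint.Files("https://contoso.sharepoint.com/sites/Retail", [ApiVersion = 15]),
    // Keep only the yearly product list workbooks before combining them
    ExcelFilesOnly = Table.SelectRows(Source, each Text.EndsWith([Name], ".xlsx"))
in
    ExcelFilesOnly
```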
Continue reading “Slowly Changing Dimension (SCD) in Power BI, Part 2, Implementing SCD 1”

Power BI Thin Reports, Real-world Challenges

I previously explained in a blog post what thin reports are and why we should care about them. I also explained Report Level Measures in another blog post. In this post, I try to raise some real-world challenges we face when developing thin reports. I also provide a solution to those challenges.

Report Level Measure Related Challenges

Creating and using Report Level Measures is relatively easy, but there are some challenges that we face from time to time, such as:

  • Distinguishing Report Level Measures from Dataset Level Measures
  • Report Level Measure dependencies

Distinguishing Report Level Measures from Dataset Level Measures

One of the challenges that Power BI Developers face is creating many report level measures. Unfortunately, Power BI Desktop currently uses the same iconography for both types of measures, making it hard to distinguish the actual measures created within the dataset from the report level measures. It gets even more challenging if we need to write technical documentation for an existing thin report. We have to open the PBIX file of the thin report in the Power BI Desktop and click every single measure. If the expression bar appears, the selected measure is a report level measure; otherwise, it is a dataset level measure.

So unless we use third-party tools, which I explain in this post, we must go through the manual process.

Report Level Measure dependencies

Another pain point related to the previous challenge is finding the dependencies between the report level measures. It is crucial to be aware of the interdependencies when doing impact analysis. We need to understand how a change in a report level measure impacts other report level measures. Again, Power BI Desktop does not currently have any options supporting that, so we have to click every measure and read through the DAX expressions to identify the dependencies or use the third-party tools to save development time.

Dataset and Thin Reports Dependency Challenges

The other challenges, which are even more difficult to overcome, relate to the interdependencies between datasets and thin reports. Power BI Service provides a lineage view that shows the dependencies between a dataset and its connected thin reports. But the challenges can get more complex to overcome manually. The following are some real-world examples of more complex situations:

  • What if we need to analyse the impact of changes in a dataset measure on all report level measures of the connected thin reports?
  • How do we analyse the impact of changes in a dataset measure on all connected thin reports, including the visuals, filters, etc…?
  • What if we need to tune the performance and we want to find a list of all unused tables or unused fields?

As you can see, the situation can get pretty complex, so manual operations are virtually impossible.

But there is a third-party tool we can use that provides heaps of capabilities with a couple of clicks.

Continue reading “Thin Reports, Real-world Challenges”

Business Intelligence Components and How They Relate to Power BI

When I decided to write this blog post, I thought it would be a good idea to learn a bit about the history of Business Intelligence. I searched on the internet, and I found this page on Wikipedia. The term Business Intelligence as we know it today was coined by an IBM computer science researcher, Hans Peter Luhn, in 1958, who wrote a paper in the IBM Systems journal titled A Business Intelligence System as a specific process in data science. In the Objectives and principles section of his paper, Luhn defines the business as “a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera.” and an intelligence system as “the communication facility serving the conduct of a business (in the broad sense)”. Then he refers to Webster’s dictionary’s definition of the word Intelligence as “the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal”.

It is fascinating to see how a fantastic idea from the past shaped a future that helps us have a better life. Isn’t that precisely what we do in our daily BI processes, just as Luhn described a Business Intelligence System for the first time? How cool is that?

When we talk about the term BI today, we refer to a specific and scientific set of processes that transform raw data into valuable and understandable information for various business sectors (such as sales, inventory, law, etc…). These processes help businesses make data-driven decisions based on the facts hidden in their data.

Like everything else, BI processes have improved a lot over time. In this post, I will try to make some sensible links between today’s BI components and Power BI.

Generic Components of Business Intelligence Solutions

Generally speaking, a BI solution contains various components and tools that may vary in different solutions depending on the business requirements, data culture and the organisation’s maturity in analytics. But the processes are very similar to the following:

  • We usually have multiple source systems with different technologies containing the raw data, such as SQL Server, Excel, JSON, Parquet files etc…
  • We integrate the raw data into a central repository to reduce the risk of making any interruptions to the source systems by constantly connecting to them. We usually load the data from the data sources into the central repository.
  • We transform the data to optimise it for reporting and analytical purposes, and we load it into another storage. We aim to keep the historical data in this storage.
  • We pre-aggregate the data into certain levels based on the business requirements and load the data into another storage. We usually do not keep the whole historical data in this storage; instead, we only keep the data required to be analysed or reported.
  • We create reports and dashboards to turn the data into useful information

With the above processes in mind, a BI solution consists of the following components:

  • Data Sources
  • Staging
  • Data Warehouse/Data Mart(s)
  • Extract, Transform and Load (ETL)
  • Semantic Layer
  • Data Visualisation

Data Sources

One of the main goals of running a BI project is to enable organisations to make data-driven decisions. An organisation might have multiple departments using various tools to collect the relevant data every day, such as sales, inventory, marketing, finance, health and safety etc.

The data generated by the business tools are stored somewhere using different technologies. A sales system might store the data in an Oracle database, while the finance system stores the data in a SQL Server database in the cloud. The finance team also generate some data stored in Excel files.

The data generated by different systems are the source for a BI solution.

Staging

We usually have multiple data sources contributing to the data analysis in real-world scenarios. To be able to analyse all the data sources, we require a mechanism to load the data into a central repository. The main reason for that is that the business tools need to constantly store data in their underlying storage, so frequent connections to the source systems can put our production systems at risk of becoming unresponsive or performing poorly. The central repository where we store the data from various data sources is called Staging. We usually store the data in the staging area with no or minor changes compared to the data in the data sources. Therefore, the quality of the data stored in staging is usually low and requires cleansing in the subsequent phases of the data journey. In many BI solutions, we use Staging as a temporary environment, so we delete the Staging data regularly after it is successfully transferred to the next stage, the data warehouse or data marts.

If we want to indicate the data quality with colours, it is fair to say the data quality in staging is Bronze.

Data Warehouse/Data Mart(s)

As mentioned before, the data in the staging area is not in its best shape and format; it is generated disparately by multiple data sources. So, analysing the data and creating reports on top of the data in staging would be challenging, time-consuming and expensive. We therefore need to find the links between the data sources, cleanse, reshape and transform the data, and make it more optimised for data analysis and reporting activities. We store the current and historical data in a data warehouse, so it is pretty normal to have hundreds of millions or even billions of rows of data over a long period. Depending on the overall architecture, the data warehouse might contain encapsulated business-specific data in a data mart or a collection of data marts. In data warehousing, we use different modelling approaches such as Star Schema. As mentioned earlier, one of the primary purposes of having a data warehouse is to keep the history of the data. This is a massive benefit of having a data warehouse, but it comes with a cost: as the volume of the data in the data warehouse grows, analysing it becomes more expensive. The data quality in the data warehouse or data marts is Silver.

Extract, Transform and Load (ETL)

In the previous sections, we mentioned that we integrate the data from the data sources in the staging area, then we cleanse, reshape and transform the data and load it into a data warehouse. To do so, we follow a process called Extract, Transform and Load or, in short, ETL. As you can imagine, the ETL processes are usually pretty complex and expensive, but they are an essential part of every BI solution.

Continue reading “Business Intelligence Components and How They Relate to Power BI”

Optimising OData Refresh Performance in Power Query for Power BI and Excel

OData has been adopted by many software solutions and has been around for many years. Most solutions use OData to serve their transactional processes. But as we know, Power BI is an analytical solution that can fetch hundreds of thousands (or millions) of rows of data into a single table. So, obviously, OData is not optimised for that kind of purpose. One of the biggest challenges many Power BI developers face when working with OData connections is performance issues. The performance depends on numerous factors such as the size of the tables in the backend database that the OData connection is serving, peak read data volume over periods of time, the throttling mechanism used to control over-utilisation of resources, etc…

So, generally speaking, we do not expect to get blazing-fast data refresh performance over OData connections; that’s why, in many cases, using OData connections for analytical tools such as Power BI is discouraged. So, what are the solutions or alternatives if we do not use OData connections in Power BI? Well, the best solution is to migrate the data into an intermediary repository, such as an Azure SQL Database, Azure Data Lake Store or even a simple Azure Storage Account, then connect from Power BI to that repository. We must decide on the intermediary repository depending on the business requirements, technology preferences, costs, desired data latency, future support requirements, expertise, etc…

But what if we do not have any other options for now, and we have to use an OData connection in Power BI without blowing out the size and costs of the project by moving the data to an intermediary space? And, let’s face it, many organisations dislike the idea of using an intermediary space for various reasons. The simplest one is that they simply cannot afford the associated costs of intermediary storage, or they do not have the expertise to support the solution in the long term.

In this post, I am not discussing the solutions involving any alternatives; instead, I provide some tips and tricks that can improve the performance of your data refreshes over OData connections in Power BI.

Notes

The tips in this post will not give you blazing-fast data refresh performance over OData, but they will help you to improve the data refresh performance. So if you take all the actions explained in this post and you still do not get an acceptable performance, then you might need to think about the alternatives and move your data into a central repository.

If you are getting data from a D365 data source, you may want to look at some alternatives to the OData connection, such as Dataverse (SQL Endpoint), D365 Dataverse (Legacy) or Common Data Services (CDS). But keep in mind that even those connectors have some limitations and might not give you acceptable data refresh performance. For instance, Dataverse (SQL Endpoint) has an 80 MB table size limitation. There might be other reasons for not getting good performance over those connections, such as having extra-wide tables. Believe me, I’ve seen tables with more than 800 columns.

Some suggestions in this post apply to other data sources and are not limited to OData connections only.

Suggestion 1: Measure the data source size

It is always good to have an idea of the size of the data source we are dealing with, and an OData connection is no different. In fact, the backend tables behind OData sources can be vast. I wrote a blog post around that before, so I suggest you use the custom function I wrote to understand the size of the data source. If your data source is large, the query in that post takes a long time to get the results, but you can filter the tables to get the results quicker.
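
If you only need a rough feel for the source before running the full custom function, a hedged sketch like the one below (not the author’s original function) lists the entity sets of a public sample OData feed along with their row counts; note that counting rows can itself be slow on a large source, so filter the entity sets first:

```
let
    // Public sample feed used purely for illustration
    Source = OData.Feed("https://services.odata.org/V4/Northwind/Northwind.svc"),
    // The service root is a navigation table; keep only table-typed entity sets
    EntitySets = Table.SelectRows(Source, each [Data] is table),
    // Add a row count per entity set to gauge the size of the source
    WithRowCount = Table.AddColumn(EntitySets, "Row Count", each Table.RowCount([Data]), Int64.Type)
in
    WithRowCount
```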

Suggestion 2: Avoid getting throttled

As mentioned earlier, many solutions have throttling mechanisms to control the over-utilisation of resources. Sending too many API requests may trigger throttling, which limits our access to the data for a short period of time. During that period, our calls are redirected to a different URL.

Tip 1: Disabling Parallel Loading of Tables

One of the reasons that Power BI sends many API requests is that it loads the data into multiple tables in parallel. We can disable this behaviour from Power BI Desktop by following these steps:

  1. Click the File menu
  2. Click Options and settings
  3. Click Options
  4. Click the Data Load tab from the CURRENT FILE section
  5. Untick the Enable parallel loading of tables option
Disabling Parallel Loading of Tables in Power BI Desktop
Continue reading “Optimising OData Refresh Performance in Power Query for Power BI and Excel”