Business Intelligence Components and How They Relate to Power BI

When I decided to write this blog post, I thought it would be a good idea to learn a bit about the history of Business Intelligence. I searched the internet and found this page on Wikipedia. The term Business Intelligence as we know it today was coined in 1958 by Hans Peter Luhn, an IBM computer science researcher, in a paper titled “A Business Intelligence System” published in the IBM Journal of Research and Development. In the Objectives and principles section of his paper, Luhn defines business as “a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera” and an intelligence system as “the communication facility serving the conduct of a business (in the broad sense)”. He then refers to Webster’s dictionary’s definition of the word intelligence as “the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal”.

It is fascinating to see how a fantastic idea from the past set up a concrete future that helps us have a better life. Isn’t what we do in our daily BI processes precisely what Luhn described as a Business Intelligence System all those years ago? How cool is that?

When we talk about the term BI today, we refer to a specific, scientific set of processes for transforming raw data into valuable and understandable information for various business sectors (such as sales, inventory, law, etc.). These processes help businesses make data-driven decisions based on the facts hidden in the data.

Like everything else, BI processes have improved a lot over time. In this post, I will try to make some sensible links between today’s BI components and Power BI.

Generic Components of Business Intelligence Solutions

Generally speaking, a BI solution contains various components and tools that may vary from solution to solution depending on the business requirements, data culture and the organisation’s maturity in analytics. But the processes are generally similar to the following:

  • We usually have multiple source systems built on different technologies containing the raw data, such as SQL Server databases, Excel workbooks, JSON and Parquet files, etc.
  • We integrate the raw data into a central repository to reduce the risk of interrupting the source systems by constantly connecting to them. So we usually load the data from the data sources into the central repository.
  • We transform the data to optimise it for reporting and analytical purposes, and we load it into another storage. We aim to keep the historical data in this storage.
  • We pre-aggregate the data into certain levels based on the business requirements and load the data into another storage. We usually do not keep the whole historical data in this storage; instead, we only keep the data required to be analysed or reported.
  • We create reports and dashboards to turn the data into useful information.

With the above processes in mind, a BI solution consists of the following components:

  • Data Sources
  • Staging
  • Data Warehouse/Data Mart(s)
  • Extract, Transform and Load (ETL)
  • Semantic Layer
  • Data Visualisation

Data Sources

One of the main goals of running a BI project is to enable organisations to make data-driven decisions. An organisation might have multiple departments, such as sales, inventory, marketing, finance, health and safety, etc., using various tools to collect the relevant data every day.

The data generated by the business tools is stored somewhere using different technologies. A sales system might store its data in an Oracle database, while the finance system stores its data in a SQL Server database in the cloud. The finance team also generates some data stored in Excel files.

The data generated by these different systems is the source data for a BI solution.

Staging

We usually have multiple data sources contributing to the data analysis in real-world scenarios. To be able to analyse all of them, we require a mechanism to load the data into a central repository. The main reason is that the business tools constantly need to write to their underlying storage; therefore, frequent connections to the source systems can put our production systems at risk of becoming unresponsive or performing poorly. The central repository where we store the data from the various data sources is called Staging. We usually store the data in staging with no changes or only minor changes compared to the data in the data sources. Therefore, the quality of the data stored in staging is usually low, and the data requires cleansing in the subsequent phases of its journey. In many BI solutions, we use staging as a temporary environment, so we delete the staging data regularly after it has been successfully transferred to the next stage, the data warehouse or the data marts.

If we want to indicate the data quality with colours, it is fair to say the data quality in staging is Bronze.
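In Power BI terms, a query that simply lands a source table as-is, with no transformation, plays the staging role. Here is a minimal Power Query (M) sketch of such a query; the server, database and table names are placeholders, not a definitive implementation:

```
// Minimal staging-style query: land the source table as-is,
// with no transformation (placeholder server/database/table names)
let
    Source    = Sql.Database("SalesServer", "SalesDB"),
    RawOrders = Source{[Schema = "dbo", Item = "Orders"]}[Data]
in
    RawOrders
```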

Data Warehouse/Data Mart(s)

As mentioned before, the data in staging is not in its best shape and format; it is generated disparately by multiple data sources. So analysing the data and creating reports on top of staging would be challenging, time-consuming and expensive. We need to find the links between the data sources, then cleanse, reshape and transform the data to make it more optimised for data analysis and reporting activities. We store the current and historical data in a data warehouse, so it is pretty normal to have hundreds of millions or even billions of rows of data accumulated over a long period. Depending on the overall architecture, the data warehouse might contain business-specific data encapsulated in a data mart or a collection of data marts.

In data warehousing, we use different modelling approaches, such as the Star Schema. As mentioned earlier, one of the primary purposes of having a data warehouse is to keep the history of the data. This is a massive benefit, but the strength comes with a cost: as the volume of data in the data warehouse grows, analysing it becomes more expensive. The data quality in the data warehouse or data marts is Silver.

Extract, Transform and Load (ETL)

In the previous sections, we mentioned that we integrate the data from the data sources in the staging area, then cleanse, reshape and transform the data and load it into a data warehouse. To do so, we follow a process called Extract, Transform and Load, or ETL for short. As you can imagine, ETL processes are usually pretty complex and expensive, but they are an essential part of every BI solution.
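To make the pattern concrete, here is a small Power Query (M) sketch of an ETL-style query; the server, schema, table and column names are all assumptions for illustration:

```
let
    // Extract: read the raw table from the staging database
    Staging   = Sql.Database("StagingServer", "StagingDB"),
    RawOrders = Staging{[Schema = "stg", Item = "Orders"]}[Data],

    // Transform: fix data types, drop bad rows, rename columns
    Typed   = Table.TransformColumnTypes(RawOrders, {{"OrderDate", type date}, {"Amount", type number}}),
    Cleaned = Table.SelectRows(Typed, each [Amount] <> null),
    Renamed = Table.RenameColumns(Cleaned, {{"Amount", "SalesAmount"}})

    // Load: the query output is loaded into the destination when the query is refreshed
in
    Renamed
```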

Incremental Refresh in Power BI, Part 1: Implementation in Power BI Desktop

Incremental refresh, or IR for short, refers to loading data incrementally, a technique that has been around in the world of ETL for data warehousing for a long time. Let us discuss incremental refresh (or incremental data loading) in simple language to better understand how it works.

From a data movement standpoint, there are always two options when we transfer data from location A to location B:

  1. Truncation and load: We transfer the data as a whole from location A to location B. If location B already contains some data, we entirely truncate location B and reload the whole of the data from A.
  2. Incremental load: We transfer the data as a whole from A to B only once, the first time. From then on, we load only the data changes from A to B. In this approach, we never truncate B; instead, we transfer only the data that exists in A but not in B (see the sketch after this list).
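To illustrate the difference, here is a toy Power Query (M) sketch contrasting the two options; SourceA and TargetB are made-up inline tables, and the Id key is an assumption:

```
let
    // Toy data: A is the source; B is the destination that already holds two rows
    SourceA = #table({"Id", "Value"}, {{1, "a"}, {2, "b"}, {3, "c"}}),
    TargetB = #table({"Id", "Value"}, {{1, "a"}, {2, "b"}}),

    // Option 1 - truncation and load: discard B and reload everything from A
    FullLoad = SourceA,

    // Option 2 - incremental load: keep B and append only the rows of A not yet in B
    NewRows         = Table.SelectRows(SourceA, each not List.Contains(TargetB[Id], [Id])),
    IncrementalLoad = Table.Combine({TargetB, NewRows})
in
    IncrementalLoad
```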

When we refresh the data in Power BI, if we have not configured incremental refresh, we use the first approach, truncation and load. Needless to say, in Power BI the first approach only applies to tables with Import or Dual storage modes. Previously, incremental load was also available only on tables with either Import or Dual storage modes, but Microsoft’s announcement of Hybrid Tables makes a big difference in how incremental load works. With Hybrid Tables, incremental load is available even when a specific partition of the table is in DirectQuery mode while the rest of the partitions are in Import storage mode.

Incremental refresh used to be available only on Premium capacities, but from February 2020 onwards, it is also available in Power BI Pro with some limitations. However, Hybrid Tables are currently available only on Power BI Premium capacity and Premium Per User (PPU), not Pro. Let’s hope that Microsoft changes its licensing plan for Hybrid Tables in the future and makes them available in Pro.

I will write about Hybrid Tables in a future blog post.

When we successfully configure the incremental refresh policies in Power BI, we always have two ranges of data: the historical range and the incremental range. The historical range includes all the data processed in the past, and the incremental range is the current range of data to process. Incremental refresh always looks for data changes in the incremental range, not the historical range; therefore, it will not notice any changes in the historical data. When we talk about data changes, we refer to new rows being inserted, updated or deleted. Note, however, that incremental refresh detects an updated row as a deleted row plus a newly inserted row.

Benefits of Incremental Refresh

Configuring incremental refresh is beneficial for large tables with hundreds of millions of rows. The following are some benefits of configuring incremental refresh in Power BI:

  • The data refreshes much faster than with truncation and load, as incremental refresh only refreshes the incremental range
  • The data refresh process is less resource-intensive than refreshing the entire data all the time
  • The data refresh is less expensive and more maintainable than non-incremental refreshes over large tables
  • The incremental refresh is inevitable when dealing with massive datasets with billions of rows that do not fit into our data model in Power BI Desktop. Remember, Power BI uses an in-memory data processing engine; therefore, it is improbable that our local machine can handle importing billions of rows of data into memory

Now that we understand what incremental refresh is, let us see how it works in Power BI.

Implementing Incremental Refresh Policies with Power BI Desktop

We can currently configure incremental refresh in Power BI Desktop and in dataflows contained in a Premium workspace. In this blog post, we look at the incremental refresh implementation within Power BI Desktop.

After we successfully implement the incremental refresh policies in Power BI Desktop, we publish the model to Power BI Service. The first data refresh takes longer, as we transfer all the data from the data source(s) to Power BI Service for the first time. After the first load, all future data refreshes will be incremental.

How to Implement Incremental Refresh

Implementing incremental refresh in Power BI is simple. There are two generic parts of the implementation:

  1. Preparing some prerequisites in Power Query and defining incremental policies in the data model
  2. Publishing the model to Power BI Service and refreshing the dataset

Let’s go through the details briefly to understand how the implementation works.

  • Preparing Prerequisites in Power Query
    • We need to define two parameters with the DateTime data type in Power Query Editor, named RangeStart and RangeEnd, which are reserved for defining incremental refresh policies. Keep in mind that Power Query is case-sensitive, so the names of the parameters must be exactly RangeStart and RangeEnd.
    • The next step is to filter the table by a DateTime column using the RangeStart and RangeEnd parameters, keeping the rows where the value of the DateTime column is between RangeStart and RangeEnd, as shown in the sketch below.
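A minimal sketch of that filter step in M follows; the source, table and column names (FactSales, OrderDateTime) are assumptions for illustration:

```
// RangeStart and RangeEnd are the reserved DateTime parameters defined earlier
let
    Source    = Sql.Database("MyServer", "MyDatabase"),
    FactSales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],

    // "Equal to" on one side only (here RangeStart) so the partitions do not overlap
    Filtered  = Table.SelectRows(
        FactSales,
        each [OrderDateTime] >= RangeStart and [OrderDateTime] < RangeEnd
    )
in
    Filtered
```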

Notes

  • The data type of the parameters must be DateTime.
  • The data type of the column we use for incremental refresh must be Date, DateTime or Int64 (integer). Therefore, for scenarios where our table has a smart date key instead of a Date or DateTime column, we have to convert the RangeStart and RangeEnd parameters to Int64, as shown in the sketch after these notes.
  • When we filter a table using the RangeStart and RangeEnd parameters, Power BI uses the filter on the DateTime column to create partitions on the table. So it is important to pay attention to the DateTime ranges when filtering the values: only one of the two filter conditions must include an “equal to”, on either RangeStart or RangeEnd, not both; otherwise, rows falling exactly on a partition boundary would be loaded twice.
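For the smart date key scenario mentioned in the notes, here is a hedged sketch of the conversion; the FactSales table and DateKey column are assumptions:

```
let
    // Convert a DateTime value to an Int64 smart date key, e.g. 2021-12-31 -> 20211231
    ToDateKey = (dt as datetime) as number =>
        Date.Year(dt) * 10000 + Date.Month(dt) * 100 + Date.Day(dt),

    Source    = Sql.Database("MyServer", "MyDatabase"),
    FactSales = Source{[Schema = "dbo", Item = "FactSales"]}[Data],

    // Same rule as before: "equal to" on the RangeStart side only
    Filtered  = Table.SelectRows(
        FactSales,
        each [DateKey] >= ToDateKey(RangeStart) and [DateKey] < ToDateKey(RangeEnd)
    )
in
    Filtered
```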

The Story of my Book, “Expert Data Modeling with Power BI”

In 2020, the world celebrated the new year with many uncertainties. Well, life is full of uncertainties, but this one was very different. The world was facing a new pandemic it had never experienced before. The first COVID19 case in New Zealand was confirmed in February 2020. In March 2020, the entire country went into lockdown for the first time. The world was experiencing a massive threat that was changing everyone’s lives, and I was no different. Every day started with bad news: a relative passed away; a friend got the virus; customers put their projects on hold. Nothing looked normal anymore. You couldn’t even get a proper haircut, because everyone was in lockdown. This is me trying to smile after getting a homemade haircut. I bet many of you have done the same thing.

Soheil’s Homemade Haircut

One day, I checked my email and saw a message from Packt Publishing. They wanted to see if I was interested in writing a book about Power BI. That was a piece of good news after a long time. I had always wanted to write a book about Power BI. Indeed, I attempted it for the first time in 2016, but I couldn’t manage to get my ducks in a row to catch the publishers’ attention.

I was not unfamiliar with writing books; indeed, I wrote my first book back in 2006, in Persian, about multimedia applications. One of my passions in life is listening to music, and CDs were the most accessible source of high-quality sound. I recall saving money for some months to buy a Discman so I could listen to music on the go. But CDs are rather bulky, and you could not carry many of them in your pocket. So the next project was to save even more money to buy an MP3 player. But converting audio CDs to MP3 without compromising a lot on sound quality was a real challenge for many people. That was my motive for writing my first book in Persian: to share my little knowledge with everyone.

Integrating and Visualising Multiple Microsoft To Do Accounts with Power BI

I have been using Microsoft To Do to organise my daily tasks for a while now, covering everything from work-related tasks to buying groceries. While Microsoft To Do is super easy to use, there are some challenges in using it more efficiently, especially when you have multiple O365 accounts within different organisations. Here are some of the challenges I faced; you may face other challenges too:

  • The Microsoft To Do app for Windows devices is very user-friendly, with amazingly good features like the ability to add multiple To Do accounts. However, we currently have to select which account we would like to use, and the app shows only the tasks within that specific account. This means we cannot see all our tasks from all our accounts in a single place.
  • The Microsoft To Do app for iOS devices is also very handy to use, but it lacks the ability to add multiple accounts. Hence, we cannot see tasks from multiple O365 accounts in the app. 🙁
  • We can use Tasks within the Microsoft Outlook desktop application (I used the Windows version), which is by far the most comprehensive option, with tons of features. While we can see tasks from multiple accounts in a single place, it is a real challenge to tell which task belongs to which account. Besides, it is really hard to answer questions like how many high-priority tasks I have for today or the week ahead. I know we can group tasks, but it is still not very intuitive.

For the above reasons, I searched for a product that could do all of the above at once. After spending some hours, I thought, well, I have to do it myself.

With that, let’s go ahead and see how we can get the job done in Power BI.

Note:

This method does not work for Microsoft To Do with personal accounts such as Outlook, Hotmail or MSN. If anyone knows how to add those, please let us know in the comments section below this post.

This is a long post that took me a reasonable amount of time to write.

How It Works

Microsoft Power BI is NOT only a reporting tool. We can connect to many data sources, mix and match the data, create data models and visualise the data. So it should be possible to connect to multiple To Do accounts, append the data, create a simple data model on top of it, and visualise the data to answer our questions or our customers’ questions. The Microsoft To Do data is accessible via the Microsoft Exchange Online connector available in Power BI. The rest depends on our requirements and the questions we would like to answer.
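As a sketch, getting the Tasks table from two mailboxes and appending them could look like the following M query. The email addresses are placeholders, and the “Tasks” navigation item name is an assumption based on how the Exchange connector exposes its data:

```
let
    // Each Exchange.Contents call connects to one mailbox
    Account1 = Exchange.Contents("me@contoso.com"),
    Tasks1   = Account1{[Name = "Tasks"]}[Data],

    Account2 = Exchange.Contents("me@fabrikam.com"),
    Tasks2   = Account2{[Name = "Tasks"]}[Data],

    // Append both accounts into a single Tasks table for the data model
    AllTasks = Table.Combine({Tasks1, Tasks2})
in
    AllTasks
```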

In my case, where I am the end user of the report, I would like to be able to know:

  • Today’s tasks: all tasks whose StartDate or DueDate is today, or tasks without any StartDate or DueDate (see the sketch after this list)
    • Number of tasks
    • Number of important tasks
    • Tasks by mailbox
    • Tasks details
      • Task list
      • Task description
      • Status
      • Start date
      • Due date
      • A link to the task itself, so I can open and update it if I want to
  • All Tasks
    • All of the above, plus
      • Number of open tasks
      • Number of completed tasks
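As a sketch of the “Today’s tasks” logic, the following M query filters the appended AllTasks table from the previous sketch; the StartDate and DueDate column names are assumptions:

```
let
    Today = Date.From(DateTime.LocalNow()),

    // A task counts as today's if it starts today, is due today,
    // or has neither a start date nor a due date
    TodaysTasks = Table.SelectRows(
        AllTasks,
        each ([StartDate] <> null and Date.From([StartDate]) = Today)
          or ([DueDate] <> null and Date.From([DueDate]) = Today)
          or ([StartDate] = null and [DueDate] = null)
    )
in
    TodaysTasks
```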

You or your customer(s) might have different requirements, but once you understand how to get the To Do data from Microsoft Exchange Online and do some data exploration to find what you are after, you’ll be good.
