Thin Reports, What Are They, Why Should I Care and How Can I Create Them?

Thin Reports in Power BI

Shared Datasets have been around for quite a while now. In June 2019, Microsoft announced a new feature called Shared and Certified Datasets, aimed at supporting enterprise-grade BI within the Power BI ecosystem. In essence, the shared dataset feature allows an organisation to maintain a single source of truth that serves many reports.

A Thin Report is a report that connects to an existing dataset on Power BI Service using the Connect Live connectivity mode. So, we basically have multiple reports connected to a single dataset. Now that we know what a thin report is, let’s see why it is best practice to follow this approach.

Prior to the Shared and Certified Datasets announcement, we used to create separate reports in Power BI Desktop and publish those reports into Power BI Service. This approach had many disadvantages, such as:

  • Having many disparate islands of data instead of a single source of truth.
  • Consuming more storage on Power BI Service by having repetitive tables across many datasets.
  • Reducing collaboration between data modellers and report creators (contributors) as Power BI Desktop is not a multi-user application.
  • The reports were tightly coupled to the underlying dataset, so it was hard, if not totally impossible, to decouple a report from one dataset and connect it to a different one. This was pretty restrictive for developers who wanted to follow a Dev/Test/Prod approach.
  • If we had a fairly large report with many pages, say more than 20 pages, it was almost impossible to break the report down into smaller, more business-centric reports.
  • Putting too much load on the data sources connected to many disparate datasets. The situation gets even worse when we schedule multiple refreshes a day. In some cases, the data refresh process puts exclusive locks on the source system, which can potentially cause many issues down the road.
  • Having many datasets and reports made it harder and more expensive to maintain the solution.

In my previous blog, I explained the different components of a Business Intelligence solution and how they map to the Power BI ecosystem. In that post, I mentioned that Power BI Service Datasets map to the Semantic Layer of a Business Intelligence solution. So, when we create a Power BI report with Power BI Desktop and publish it to the Power BI Service, we create a semantic layer and a report connected to it at the same time. By creating many disparate reports in Power BI Desktop and publishing them to the Power BI Service, we are in fact creating many semantic layers with many repeated tables on top of our data, which does not make much sense.

On the other hand, having a few shared datasets with many connected thin reports makes a lot of sense. This approach addresses all the disadvantages of the previous development method; in addition, it reduces the confusion for report writers around which dataset they are connecting to, it helps with storage management in the Power BI Service, and it makes it easier to comply with security and privacy requirements.
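One practical illustration of that flexibility: once a report is thin, the Power BI REST API's Rebind Report In Group operation can point it at a different shared dataset, which is handy when promoting a report from a Dev dataset to a Test or Prod one. The following is a minimal Python sketch rather than a full implementation; the workspace, report and dataset IDs are placeholders, and it assumes you have already acquired an Azure AD access token with the appropriate Power BI scope (for example via MSAL).

```python
import requests

# Placeholders: replace with your own values. The access token must be acquired
# from Azure AD (e.g. with the msal package) and authorised for the Power BI API.
ACCESS_TOKEN = "<azure-ad-access-token>"
WORKSPACE_ID = "<workspace-guid>"
REPORT_ID = "<thin-report-guid>"
TARGET_DATASET_ID = "<target-shared-dataset-guid>"  # e.g. the Test or Prod dataset

# Rebind Report In Group: repoints an existing report at another dataset.
url = (
    "https://api.powerbi.com/v1.0/myorg/"
    f"groups/{WORKSPACE_ID}/reports/{REPORT_ID}/Rebind"
)

response = requests.post(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"datasetId": TARGET_DATASET_ID},
)
response.raise_for_status()  # success means the report now points at the new dataset
print(f"Report {REPORT_ID} rebound to dataset {TARGET_DATASET_ID}")
```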

Continue reading “Thin Reports, What Are They, Why Should I Care and How Can I Create Them?”

Business Intelligence Components and How They Relate to Power BI

When I decided to write this blog post, I thought it would be a good idea to learn a bit about the history of Business Intelligence. I searched the internet and found this page on Wikipedia. The term Business Intelligence as we know it today was coined in 1958 by Hans Peter Luhn, an IBM computer science researcher, in a paper titled "A Business Intelligence System" published in the IBM Journal. In the Objectives and principles section of his paper, Luhn defines a business as "a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera" and an intelligence system as "the communication facility serving the conduct of a business (in the broad sense)". He then refers to Webster's dictionary definition of the word Intelligence as "the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal".

It is fascinating to see how a great idea from the past laid a concrete foundation for something that helps us have a better life today. Isn't what we do in our daily BI processes precisely what Luhn described as a Business Intelligence System for the first time? How cool is that?

When we talk about the term BI today, we refer to a specific and scientific set of processes that transform raw data into valuable and understandable information for various business sectors (such as sales, inventory, law, etc.). These processes help businesses make data-driven decisions based on the facts hidden in the data.

Like everything else, BI processes have improved a lot over their lifetime. In this post, I will try to make some sensible links between today's BI components and Power BI.

Generic Components of Business Intelligence Solutions

Generally speaking, a BI solution contains various components and tools that may vary in different solutions depending on the business requirements, data culture and the organisation’s maturity in analytics. But the processes are very similar to the following:

  • We usually have multiple source systems with different technologies containing the raw data, such as SQL Server, Excel, JSON, Parquet files etc…
  • We integrate the raw data into a central repository to reduce the risk of interrupting the source systems by constantly connecting to them. We usually load the data from the data sources into this central repository.
  • We transform the data to optimise it for reporting and analytical purposes, and we load it into another storage. We aim to keep the historical data in this storage.
  • We pre-aggregate the data into certain levels based on the business requirements and load the data into another storage. We usually do not keep the whole historical data in this storage; instead, we only keep the data required to be analysed or reported.
  • We create reports and dashboards to turn the data into useful information (a toy end-to-end sketch of this flow follows this list).
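To make these steps a bit more concrete, here is a toy sketch of the flow in Python with pandas. It is only an illustration under assumed inputs: the file names, column names and the monthly-per-region aggregation level are all made up, and a real solution would typically land the data in a database or data lake rather than local Parquet files.

```python
import pandas as pd

# 1. Raw data from a couple of hypothetical source systems.
sales_raw = pd.read_csv("sales_system_export.csv")  # e.g. one row per order line
customers_raw = pd.read_json("crm_customers.json")  # e.g. one row per customer

# 2. Land the data in a staging area with little or no transformation.
sales_raw.to_parquet("staging/sales.parquet")
customers_raw.to_parquet("staging/customers.parquet")

# 3. Cleanse, reshape and join, and keep history in the warehouse layer.
sales = pd.read_parquet("staging/sales.parquet").dropna(subset=["customer_id"])
customers = pd.read_parquet("staging/customers.parquet")
fact_sales = sales.merge(customers, on="customer_id", how="left")
fact_sales.to_parquet("warehouse/fact_sales.parquet")

# 4. Pre-aggregate to the level the business reports on (monthly sales per region).
fact_sales["order_month"] = (
    pd.to_datetime(fact_sales["order_date"]).dt.to_period("M").astype(str)
)
monthly_sales = (
    fact_sales.groupby(["order_month", "region"], as_index=False)["amount"].sum()
)
monthly_sales.to_parquet("marts/monthly_sales.parquet")

# 5. The reporting layer (e.g. Power BI) connects to the mart outputs.
```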

With the above processes in mind, a BI solution consists of the following components:

  • Data Sources
  • Staging
  • Data Warehouse/Data Mart(s)
  • Extract, Transform and Load (ETL)
  • Semantic Layer
  • Data Visualisation

Data Sources

One of the main goals of running a BI project is to enable organisations to make data-driven decisions. An organisation might have multiple departments, such as sales, inventory, marketing, finance, and health and safety, each using various tools to collect relevant data every day.

The data generated by these business tools is stored using different technologies. A sales system might store its data in an Oracle database, while the finance system stores its data in a SQL Server database in the cloud. The finance team also generates some data stored in Excel files.

The data generated by these different systems is the source for a BI solution.

Staging

We usually have multiple data sources contributing to the data analysis in real-world scenarios. To analyse all of them, we need a mechanism to load the data into a central repository. The main reason is that the business tools constantly store data in their underlying storage, so frequent connections to the source systems can put our production systems at risk of being unresponsive or performing poorly. The central repository where we store the data from the various data sources is called Staging. We usually store the data in staging with no or only minor changes compared to the source, so the quality of the data in staging is usually low and requires cleansing in the subsequent phases of the data journey. In many BI solutions, we use staging as a temporary environment, deleting the staging data regularly after it has been successfully transferred to the next stage, the data warehouse or data marts.

If we want to indicate the data quality with colours, it is fair to say the data quality in staging is Bronze.
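As a rough sketch of that landing pattern, under assumed names only (a hypothetical erp_orders_export.csv standing in for a source extract and a local Parquet file standing in for a staging table), the load keeps the data as-is, adds a technical load timestamp, and overwrites the previous run because staging is only a temporary landing zone:

```python
import pandas as pd
from datetime import datetime, timezone

# Hypothetical source extract; in a real solution this would be read directly
# from the operational system on a schedule that does not disturb it.
orders = pd.read_csv("erp_orders_export.csv")

# Staging keeps the data unchanged, adding only technical metadata.
orders["_loaded_at_utc"] = datetime.now(timezone.utc).isoformat()

# Truncate-and-reload: each run overwrites the previous staging snapshot,
# since the staged data is deleted once it has moved on to the warehouse.
orders.to_parquet("staging/orders.parquet")
print(f"Staged {len(orders)} rows")
```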

Data Warehouse/Data Mart(s)

As mentioned before, the data in staging is not in its best shape and format; it is generated disparately by multiple source systems. Analysing the data and creating reports directly on top of staging would therefore be challenging, time-consuming and expensive. So we need to find the links between the data sources, then cleanse, reshape and transform the data to make it better optimised for data analysis and reporting activities. We store the current and historical data in a data warehouse, so it is pretty normal to have hundreds of millions or even billions of rows of data accumulated over a long period. Depending on the overall architecture, the data warehouse might contain encapsulated business-specific data in a data mart or a collection of data marts. In data warehousing, we use different modelling approaches, such as Star Schema. As mentioned earlier, one of the primary purposes of having a data warehouse is to keep the history of the data. This is a massive benefit, but it comes with a cost: as the volume of data in the data warehouse grows, analysing it becomes more expensive. The data quality in the data warehouse or data marts is Silver.
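To illustrate the star schema idea, here is a small Python/pandas sketch that splits a flat staged sales extract into a customer dimension with surrogate keys and a fact table that references it. The column names are assumptions for illustration only, not a prescription for how your warehouse should look.

```python
import pandas as pd

# Hypothetical staged extract: one wide, flat row per order line.
staged = pd.read_parquet("staging/sales.parquet")

# Dimension: one row per distinct customer, with a surrogate key for joins.
dim_customer = (
    staged[["customer_id", "customer_name", "city", "country"]]
    .drop_duplicates()
    .reset_index(drop=True)
)
dim_customer["customer_key"] = dim_customer.index + 1

# Fact: the measures plus the surrogate key, ready for a star-schema join.
fact_sales = staged.merge(
    dim_customer[["customer_id", "customer_key"]], on="customer_id"
)[["customer_key", "order_date", "quantity", "amount"]]

dim_customer.to_parquet("warehouse/dim_customer.parquet")
fact_sales.to_parquet("warehouse/fact_sales.parquet")
```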

Extract, Transform and Load (ETL)

In the previous sections, we mentioned that we integrate the data from the data sources into the staging area, and then we cleanse, reshape and transform the data before loading it into a data warehouse. To do so, we follow a process called Extract, Transform and Load or, in short, ETL. As you can imagine, the ETL processes are usually pretty complex and expensive, but they are an essential part of every BI solution.

Continue reading “Business Intelligence Components and How They Relate to Power BI”

Power BI 101, What Should I Learn?

This is the second part of my new series of Power BI posts named Power BI 101. In the previous post, I briefly discussed what Power BI is. In this post, I look into one of the most confusing parts for those who want to start learning Power BI. Many people jump straight online and look for Power BI training courses, of which there are plenty out there. But which one is the right training course for you? Let's find out.

What do you want to gain from learning Power BI?

Whether you attend paid training courses or are a self-learner, the above question is one of the most important questions to ask yourself before taking the next steps. The answer to this question dictates the sort of training you should look for. Your answer to the preceding question can be one, or none, of the following:

  • I am a graduate/student looking at the job market
  • I am a business analyst and I want to know how Power BI can help me with my daily job
  • I am a database developer and I want to learn more about business intelligence and the data and analytics space
  • I am a non-Microsoft Business Intelligence developer and I want to start learning more about Microsoft offerings
  • I am a system admin and I have to manage our Power BI tenant
  • I am a data scientist and I want to know how I can use Power BI
  • I am just curious to see what Power BI can do for me

As mentioned, your answer might not be any of the above, but thinking about your reason(s) for learning Power BI can help you find the best way to learn and use Power BI more efficiently. Otherwise, you can spend time and money taking some online courses and end up even more confused. You don't want that, do you?

So, whatever reason(s) you have in mind to learn Power BI, most probably you fall into one of the following user categories:

Think about your goal(s) and what you want to achieve by learning Power BI, then try to identify your user category. For instance, if you are a student thinking of joining an IT company as a data and analytics developer, then your user category is most probably a Power BI Developer or a Contributor.

To help you find your user category, let's see what the above user categories mean.

Power BI Developers

Power BI Developers are the beating heart of any Power BI development project. Regardless of the project you will be involved with, you definitely need a certain level of knowledge of the following:

  • Data preparation/ETL processes
  • Data warehousing
  • Data modelling/Star schema
  • Data visualisation

To be a successful Power BI developer, you must learn the following languages in Power BI:

  • Power Query
  • DAX

Depending on the types of projects you will be involved in, you may need to learn the following languages as well:

  • Microsoft Visual Basic (for Paginated Reports)
  • Python
  • R
  • T-SQL
  • PL/SQL

As a Power BI developer, you will write a lot of Power Query and DAX expressions, and most probably you will need to learn T-SQL as well. The following resources can be pretty helpful:

Continue reading “Power BI 101, What Should I Learn?”

Quick Tips: Enabling Modern Visual Styles on Old Reports in Power BI Desktop

Do you have an old report created in 2016? If you do, then you know that the visuals' style was pretty different back then. If you don't like seeing all your visuals in their old style, where the headers overlap the other visuals, and you don't know how to switch the visuals to their modern style, then continue reading. The following screenshot is from a very old report I built for a Power BI demo in Feb 2016! Time flies mate!

Old Style Visuals in Power BI Desktop

If you are a long-time Power BI developer, you clearly remember those days when we had many side-by-side slicers on the report canvas, and it was sometimes hard to click a slicer because hovering over the visuals activated the visual frames, making it easy to click a different visual by mistake.

The other issue with the old visual style is that the Visual header options, under the Format tab of the Visualisations pane, are not available for old-style visuals.

Visual header options are not available for old-style visuals

I see that some developers don't like the old visual style. Interestingly, when we work on an old report, even if we open it in the latest version of Power BI Desktop, the visuals' style remains the same, and it doesn't change even if we add new visuals to a new page. Some developers go through a lot of pain to get the visuals to work in the modern style. I even know developers who copied all the queries from the old file's Power Query into a new Power BI file and then used Tabular Editor to copy the tabular objects from the old report to the new one. And some even started to build everything from scratch. If you are one of those developers, be aware that there is a simple setting that quickly switches all the old visuals to their modern style.

Continue reading “Quick Tips: Enabling Modern Visual Styles on Old Reports in Power BI Desktop”