Microsoft Fabric: A SaaS Analytics Platform for the Era of AI

Microsoft Fabric is a new, unified cloud analytics platform that integrates various data and analytics services, such as Azure Data Factory, Azure Synapse Analytics, and Power BI, into a single product covering everything from data movement to data science, real-time analytics, and business intelligence. Microsoft Fabric is built upon the well-known Power BI platform, which provides industry-leading visualisation and AI-driven analytics that enable business analysts and users to gain insights from data.

Basic concepts

On May 23rd, 2023, Microsoft announced a new product called Microsoft Fabric at the Microsoft Build conference. Microsoft Fabric is a SaaS Analytics Platform that covers end-to-end business requirements. As mentioned earlier, it is built upon the Power BI platform and extends the capabilities of Azure Synapse Analytics to all analytics workloads. This means that Microsoft Fabric is an enterprise-grade analytics platform. But wait, let’s see what a SaaS Analytics Platform actually means.

What is an analytics platform?

An analytics platform is a comprehensive software solution designed to facilitate data analysis, enabling organisations to derive meaningful insights from their data. It typically combines various tools, technologies, and frameworks to streamline the entire analytics lifecycle, from data ingestion and processing to visualisation and reporting. Here are some key characteristics you would expect to find in an analytics platform:

  1. Data Integration: The platform should support integrating data from multiple sources, such as databases, data warehouses, APIs, and streaming platforms. It should provide capabilities for data ingestion, extraction, transformation, and loading (ETL) to ensure a smooth flow of data into the analytics ecosystem.
  2. Data Storage and Management: An analytics platform needs to have a robust and scalable data storage infrastructure. This could include data lakes, data warehouses, or a combination of both. It should also support data governance practices, including data quality management, metadata management, and data security.
  3. Data Processing and Transformation: The platform should offer tools and frameworks for processing and transforming raw data into a usable format. This may involve data cleaning, denormalisation, enrichment, aggregation, or advanced analytics on large data volumes, including streaming IoT (Internet of Things) data. Handling large volumes of data efficiently is crucial for performance and scalability.
  4. Analytics and Visualisation: A core aspect of an analytics platform is its ability to perform advanced analytics on the data. This includes providing a wide range of analytical capabilities, such as descriptive, diagnostic, predictive, and prescriptive analytics with ML (Machine Learning) and AI (Artificial Intelligence) algorithms. Additionally, the platform should offer interactive visualisation tools to present insights in a clear and intuitive manner, enabling users to explore data and generate reports easily.
  5. Scalability and Performance: Analytics platforms need to be scalable to handle increasing volumes of data and user demands. They should have the ability to scale horizontally or vertically. High-performance processing engines and optimised algorithms are essential to ensure efficient data processing and analysis.
  6. Collaboration and Sharing: An analytics platform should facilitate collaboration among data analysts, data scientists, and business users. It should provide features for sharing data assets, analytics models, and insights across teams. Collaboration features may include data annotations, commenting, sharing dashboards, and collaborative workflows.
  7. Data Security and Governance: As data privacy and compliance become increasingly important, an analytics platform must have robust security measures in place. This includes access controls, encryption, auditing, and compliance with relevant regulations such as GDPR or HIPAA. Data governance features, such as data lineage, data cataloging, and policy enforcement, are also crucial for maintaining data integrity and compliance.
  8. Flexibility and Extensibility: An ideal analytics platform should be flexible and extensible to accommodate evolving business needs and technological advancements. It should support integration with third-party tools, frameworks, and libraries to leverage additional functionality.
  9. Ease of Use: Usability plays a significant role in an analytics platform’s adoption and effectiveness. It should have an intuitive user interface and provide user-friendly tools for data exploration, analysis, and visualisation. Self-service capabilities empower business users to access and analyse data without heavy reliance on IT or data specialists.

These characteristics collectively enable organisations to harness the power of data and make data-driven decisions. An effective analytics platform helps unlock insights, identify patterns, discover trends, and drive innovation across various domains and industries.

Slowly Changing Dimension (SCD) in Power BI, Part 1, Introduction to SCD

Slowly changing dimension (SCD) is a data warehousing concept coined by the amazing Ralph Kimball. The SCD concept deals with moving a specific set of data from one state to another. Imagine a human resources (HR) system with an Employee table. As the following image shows, Stephen Jiang is a Sales Manager with ten sales representatives on his team:

Image 1: Stephen Jiang is the sales manager of a team of 10 sales representatives

Today, Stephen Jiang was promoted to the Vice President of Sales role, so his team has grown in size from 10 to 17. Stephen is the same person, but his role has now changed, as shown in the following image:

Image 2: Stephen’s team after he was promoted to Vice President of Sales

Another example is when a customer’s address changes in a sales system. Again, the customer is the same, but their address is now different. From a data warehousing standpoint, we have different options for dealing with the data depending on the business requirements, leading us to different types of SCDs. It is crucial to note that the data changes happen in the transactional source systems (in our examples, the HR system or a sales system). We move and transform the data from the transactional systems via ETL (Extract, Transform, and Load) processes and land it in a data warehouse, which is where the SCD concept kicks in. SCD is about how changes in the source systems are reflected in the data warehouse. These kinds of changes in the source system do not happen very often, hence the term slowly changing. Many SCD types have been developed over the years; covering them all is out of the scope of this post, but for your reference, we cover the first three types as follows.

SCD type zero (SCD 0)

With this type of SCD, we ignore all changes in a dimension. So, when a person’s residential address changes in the source system (an HR system, in our example), we do not change the landing dimension in our data warehouse. In other words, we ignore the changes within the data source. SCD 0 is also referred to as fixed dimensions.
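
To make this more concrete, the following T-SQL sketch shows one way an SCD 0 load could be implemented: rows for new employees are inserted into the dimension, while changes to employees that already exist are deliberately ignored. The table and column names (stg.Employee, dim.Employee and so on) are hypothetical and only serve the illustration.

-- A minimal SCD 0 (fixed dimension) load sketch; all object names are hypothetical.
-- New employees coming from the source are inserted,
-- but changes to employees that already exist in the dimension are ignored.
INSERT INTO dim.Employee (EmployeeID, EmployeeName, ResidentialAddress)
SELECT s.EmployeeID, s.EmployeeName, s.ResidentialAddress
FROM stg.Employee AS s
WHERE NOT EXISTS (
    SELECT 1
    FROM dim.Employee AS d
    WHERE d.EmployeeID = s.EmployeeID
);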

Business Intelligence Components and How They Relate to Power BI

When I decided to write this blog post, I thought it would be a good idea to learn a bit about the history of Business Intelligence. I searched on the internet and found this page on Wikipedia. The term Business Intelligence as we know it today was coined in 1958 by Hans Peter Luhn, an IBM computer science researcher, in a paper titled A Business Intelligence System published in the IBM Journal. In the Objectives and principles section of his paper, Luhn defines the business as “a collection of activities carried on for whatever purpose, be it science, technology, commerce, industry, law, government, defense, et cetera.” and an intelligence system as “the communication facility serving the conduct of a business (in the broad sense)”. Then he refers to Webster’s dictionary’s definition of the word Intelligence as “the ability to apprehend the interrelationships of presented facts in such a way as to guide action towards a desired goal”.

It is fascinating to see how a great idea from the past shaped a future that helps us live better lives. Isn’t what Luhn described as a Business Intelligence System all those years ago precisely what we do in our daily BI processes today? How cool is that?

When we talk about the term BI today, we refer to a specific, scientific set of processes that transform raw data into valuable and understandable information for various business sectors (such as sales, inventory, law, etc.). These processes help businesses make data-driven decisions based on the facts hidden in their data.

Like everything else, BI processes have improved a lot over time. In this post, I will try to make some sensible links between today’s BI components and Power BI.

Generic Components of Business Intelligence Solutions

Generally speaking, a BI solution contains various components and tools that may vary in different solutions depending on the business requirements, data culture and the organisation’s maturity in analytics. But the processes are very similar to the following:

  • We usually have multiple source systems with different technologies containing the raw data, such as SQL Server, Excel, JSON, Parquet files etc…
  • We integrate the raw data from the data sources into a central repository, reducing the risk of interrupting the source systems by constantly connecting to them.
  • We transform the data to optimise it for reporting and analytical purposes, and we load it into another storage. We aim to keep the historical data in this storage.
  • We pre-aggregate the data into certain levels based on the business requirements and load the data into another storage. We usually do not keep the whole historical data in this storage; instead, we only keep the data required to be analysed or reported.
  • We create reports and dashboards to turn the data into useful information.

With the above processes in mind, a BI solution consists of the following components:

  • Data Sources
  • Staging
  • Data Warehouse/Data Mart(s)
  • Extract, Transform and Load (ETL)
  • Semantic Layer
  • Data Visualisation

Data Sources

One of the main goals of running a BI project is to enable organisations to make data-driven decisions. An organisation might have multiple departments, such as sales, inventory, marketing, finance, and health and safety, using various tools to collect the relevant data every day.

The data generated by the business tools is stored somewhere using different technologies. A sales system might store its data in an Oracle database, while the finance system stores its data in a SQL Server database in the cloud. The finance team also generates some data stored in Excel files.

The data generated by these different systems is the source data for a BI solution.

Staging

We usually have multiple data sources contributing to the data analysis in real-world scenarios. To analyse all of them, we require a mechanism to load the data into a central repository. The main reason is that the business tools constantly need to store data in their underlying storage, so frequent connections to the source systems can put our production systems at risk of becoming unresponsive or performing poorly. The central repository where we store the data from various data sources is called Staging. We usually store the data in staging with no or minor changes compared to the data in the data sources. Therefore, the quality of the data stored in staging is usually low, and the data requires cleansing in the subsequent phases of its journey. In many BI solutions, we use Staging as a temporary environment, so we delete the staging data regularly after it has been successfully transferred to the next stage, the data warehouse or data marts.
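
As a rough illustration, a staging load is often little more than a straight copy of the source table plus some audit metadata, with cleansing deferred to later stages. The object names below (stg.SalesOrder, SourceSales and so on) are made up for the sketch.

-- A minimal staging-load sketch; all object names are hypothetical.
-- The data is copied as-is from the source, with only a load timestamp added.
TRUNCATE TABLE stg.SalesOrder;  -- staging is treated as a temporary area

INSERT INTO stg.SalesOrder (OrderID, CustomerID, OrderDate, Amount, Quantity, LoadedAt)
SELECT o.OrderID, o.CustomerID, o.OrderDate, o.Amount, o.Quantity, SYSUTCDATETIME()
FROM SourceSales.dbo.SalesOrder AS o;  -- e.g. another database acting as the source system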

If we want to indicate the data quality with colours, it is fair to say the data quality in staging is Bronze.

Data Warehouse/Data Mart(s)

As mentioned before, the data in staging is not in its best shape and format. It is generated disparately by multiple data sources, so analysing it and creating reports on top of it would be challenging, time-consuming and expensive. We therefore need to find the links between the data sources, cleanse, reshape and transform the data, and make it more optimised for data analysis and reporting activities. We store the current and historical data in a data warehouse, so it is pretty normal to have hundreds of millions or even billions of rows of data over a long period. Depending on the overall architecture, the data warehouse might contain encapsulated business-specific data in a data mart or a collection of data marts. In data warehousing, we use different modelling approaches, such as Star Schema. As mentioned earlier, one of the primary purposes of having a data warehouse is to keep the history of the data. This is a massive benefit, but the strength comes with a cost: as the volume of data in the data warehouse grows, analysing it becomes more expensive. The data quality in the data warehouse or data marts is Silver.
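
As a simplified illustration of the Star Schema approach mentioned above, the sketch below defines two dimensions and a fact table. The schema, table and column names are hypothetical and only serve the example.

-- A simplified Star Schema sketch; all object names are hypothetical.
-- Dimensions hold descriptive attributes; the fact table holds measures
-- plus foreign keys pointing at the dimensions.
CREATE TABLE dw.DimCustomer (
    CustomerKey  INT IDENTITY(1, 1) PRIMARY KEY,  -- surrogate key
    CustomerID   INT NOT NULL,                    -- business key from the source system
    CustomerName NVARCHAR(100),
    City         NVARCHAR(50)
);

CREATE TABLE dw.DimDate (
    DateKey  INT PRIMARY KEY,  -- e.g. 20230523
    [Date]   DATE NOT NULL,
    [Year]   INT,
    [Month]  INT
);

CREATE TABLE dw.FactSales (
    CustomerKey INT NOT NULL REFERENCES dw.DimCustomer (CustomerKey),
    DateKey     INT NOT NULL REFERENCES dw.DimDate (DateKey),
    SalesAmount DECIMAL(18, 2),
    Quantity    INT
);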

Extract, Transform and Load (ETL)

In the previous sections, we mentioned that we integrate the data from the data sources in the staging area, then we cleanse, reshape and transform the data and load it into a data warehouse. To do so, we follow a process called Extract, Transform and Load or, in short, ETL. As you can imagine, the ETL processes are usually pretty complex and expensive, but they are an essential part of every BI solution.
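
At its simplest, the transform-and-load part of ETL can be pictured as a set-based query that applies basic cleansing rules, resolves surrogate keys and writes the result into the warehouse. The sketch below reuses the hypothetical staging and warehouse tables from the earlier sketches.

-- A minimal transform-and-load sketch from staging into the warehouse;
-- all object names are hypothetical. It filters out obviously bad rows
-- and looks up the customer surrogate key before loading the fact table.
INSERT INTO dw.FactSales (CustomerKey, DateKey, SalesAmount, Quantity)
SELECT
    c.CustomerKey,
    CONVERT(INT, FORMAT(s.OrderDate, 'yyyyMMdd')) AS DateKey,  -- derive the date key
    s.Amount,
    s.Quantity
FROM stg.SalesOrder AS s
INNER JOIN dw.DimCustomer AS c
    ON c.CustomerID = s.CustomerID      -- resolve the surrogate key
WHERE s.Amount IS NOT NULL              -- simple data-quality rules
  AND s.OrderDate IS NOT NULL;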

Quick Tips: Time Dimension with Time Bands at Seconds Granularity in Power BI and SSAS Tabular

I wrote some other posts on this topic in the past; you can find them here and here. In the first post, I explain how to create a “Time” dimension with time bands at minutes granularity. Then one of my customers required a “Time” dimension at seconds granularity, which encouraged me to write the second blog post. In the second blog post, though, I didn’t do time bands, so here I am, writing the third post, which is a variation of the second post supporting time bands of 5 min, 15 min, 30 min, 45 min and 60 min while the grain of the “Time” dimension is down to the second. In this quick post, I jump directly to the point and show you how to generate the “Time” dimension in three different ways: using T-SQL in SQL Server, Power Query (M), and DAX. Here it is then:

Time Dimension at Second Grain with Power Query (M) Supporting Time Bands:

Copy and paste the code below into the Query Editor’s Advanced Editor to generate the Time dimension in Power Query:

let
    // 86,400 rows: one row per second of the day
    Source = Table.FromList({1..86400}, Splitter.SplitByNothing()),
    #"Renamed Columns" = Table.RenameColumns(Source,{{"Column1", "ID"}}),
    // Convert the sequential ID into a time value at second grain
    #"Time Column Added" = Table.AddColumn(#"Renamed Columns", "Time", each Time.From(#datetime(1970,1,1,0,0,0) + #duration(0,0,0,[ID])), Time.Type),
    #"Hour Added" = Table.AddColumn(#"Time Column Added", "Hour", each Time.Hour([Time]), Int64.Type),
    #"Minute Added" = Table.AddColumn(#"Hour Added", "Minute", each Time.Minute([Time]), Int64.Type),
    // Each band column rounds the minute down to the band size and adds one band length, labelling each row with the band's upper bound
    #"5 Min Band Added" = Table.AddColumn(#"Minute Added", "5 Min Band", each Time.From(#datetime(1970,1,1,Time.Hour([Time]),0,0) + #duration(0, 0, (Number.RoundDown(Time.Minute([Time])/5) * 5) + 5, 0)), Time.Type),
    #"15 Min Band Added" = Table.AddColumn(#"5 Min Band Added", "15 Min Band", each Time.From(#datetime(1970,1,1,Time.Hour([Time]),0,0) + #duration(0, 0, (Number.RoundDown(Time.Minute([Time])/15) * 15) + 15, 0)), Time.Type),
    #"30 Min Band Added" = Table.AddColumn(#"15 Min Band Added", "30 Min Band", each Time.From(#datetime(1970,1,1,Time.Hour([Time]),0,0) + #duration(0, 0, (Number.RoundDown(Time.Minute([Time])/30) * 30) + 30, 0)), Time.Type),
    #"45 Min Band Added" = Table.AddColumn(#"30 Min Band Added", "45 Min Band", each Time.From(#datetime(1970,1,1,Time.Hour([Time]),0,0) + #duration(0, 0, (Number.RoundDown(Time.Minute([Time])/45) * 45) + 45, 0)), Time.Type),
    #"60 Min Band Added" = Table.AddColumn(#"45 Min Band Added", "60 Min Band", each Time.From(#datetime(1970,1,1,Time.Hour([Time]),0,0) + #duration(0, 0, (Number.RoundDown(Time.Minute([Time])/60) * 60) + 60, 0)), Time.Type),
    // Keep only the columns needed in the final Time dimension
    #"Removed Other Columns" = Table.SelectColumns(#"60 Min Band Added",{"Time", "Hour", "Minute", "5 Min Band", "15 Min Band", "30 Min Band", "45 Min Band", "60 Min Band"})
in
    #"Removed Other Columns"