Integrating Power BI with Azure DevOps (Git), Part 1: Cloud Integration


Power BI is a powerful tool for creating and sharing interactive data visualizations. But how can you collaborate with other developers on your Power BI projects and ensure quality and consistency across your reports? In this series of blog posts, I will show you how to integrate Power BI with Azure DevOps, a cloud-based software development and delivery platform. We can integrate Azure DevOps with Power BI Service (Fabric) as well as Power BI Desktop.
The current post explains how to set up Azure DevOps and connect a Power BI Workspace.
The next blog post will explain how to use it on your local machine to integrate your Power BI Desktop projects with Azure DevOps.

A brief history of source control systems

Before we dive into the details of Power BI and Azure DevOps integration, let’s take a moment to understand what source control systems are and why they are essential for any software project.

Source control systems, also known as version control systems or revision control systems, are tools that help developers manage the changes made to their code over time. They allow developers to track, compare, and roll back changes when necessary and collaborate with other developers on the same project.

There are two main types of source control systems: centralised and distributed. Centralised source control systems use a client-server approach to store all the code and its history on a single server, and developers need to connect to that server to access or modify the code. Examples of centralised source control systems are Microsoft’s Team Foundation Server (TFS), which was rebranded to Azure DevOps Server in 2018, IBM’s ClearCase, and Apache’s Subversion.

On the other hand, distributed source control systems use a peer-to-peer approach, allowing each developer to keep a local copy of the entire code repository, including its history. Developers can work offline and sync their changes with other developers through a remote server. Examples of distributed source control systems are Git and Mercurial, which takes us to the next section. Let’s see what Git is.

What is Git, and why use it?

Git is one of the world’s most popular and widely used distributed source control systems. It was created by Linus Torvalds, the creator of Linux, in 2005. Git has many advantages over centralised source control systems, such as:

  • Speed: Git is fast and efficient, performing most operations locally without network access.
  • Scalability: Git can easily handle large and complex projects, as it does not depend on a single server.
  • Flexibility: Git supports various workflows and branching strategies, allowing developers to choose how they want to organise their code and collaborate with others.
  • Security: Git uses cryptographic hashes to ensure the integrity and authenticity of the code.
  • Open-source: Git is free and open-source, meaning anyone can use it, modify it, or contribute to it.

While Git is pretty good, it has some disadvantages compared with a centralised source control system. Here are some:

  • Complexity: Git has a steep learning curve, especially for users who are new to distributed version control systems. Understanding concepts such as branching, merging, rebasing, and resolving conflicts can be challenging for beginners and sometimes even seasoned Git users.
  • Collaboration challenges: While distributed version control systems like Git enable easy collaboration, they can also lead to collaboration issues. Multiple developers working on the same branch simultaneously may encounter conflicts that need to be resolved, which can introduce complexities and require extra effort.
  • Performance with large repositories: While Git performs well for most operations, it can become slow when working with large repositories containing many files or a long history of commits. Operations such as cloning or checking out large repositories can be time-consuming.

What is Azure DevOps, and how does it relate to Git?

Azure DevOps is Microsoft’s cloud-based platform providing a set of tools and services for software development. It encompasses a range of capabilities for managing, planning, developing, testing, and delivering software applications. Azure DevOps offers:

  • Azure Boards: A tool for planning, tracking, and managing work items, such as user stories, tasks, bugs, etc.
  • Azure Repos: A tool for hosting Git repositories online, which is the main focus of this blog post.
  • Azure Pipelines: A tool for automating builds, tests, and deployments.
  • Azure Test Plans: A tool for creating and running manual and automated tests.
  • Azure Artifacts: A tool for managing packages and dependencies.

Azure DevOps also integrates with other tools and platforms, such as GitHub, Visual Studio Code, and now, Power BI. This takes us to the next section of this blog post, Integrating Power BI with Azure DevOps.

How to integrate Power BI with Azure DevOps

Now that we understand what Git and Azure DevOps are, let’s see how we can integrate Power BI with Azure DevOps.

Power BI integrates with Azure DevOps in two different ways: cloud integration and local machine integration, each with the following requirements.

Prerequisites

To follow along with this tutorial, you will need:

  • In the cloud:
    • An Azure DevOps Service
    • A Power BI account with one of the following licenses to enable Power BI Workspace integration with Azure DevOps:
      • Power BI PPU (Premium Per User)
      • Premium Capacity
      • Embedded Capacity (EM/A)
      • Fabric Capacity
  • On your local machine:
    • The latest version of Power BI Desktop (June 2023 or later)
    • Either Visual Studio or VS Code

As stated earlier, this post explains the cloud integration part. Therefore, to integrate Power BI with Azure DevOps, we need an Azure DevOps Service and a Power BI account with one of the Premium licensing plans listed above.

In the following few sections, we go through the details together, step by step.

Continue reading “Integrating Power BI with Azure DevOps (Git), Part 1: Cloud Integration”

Microsoft Fabric: A SaaS Analytics Platform for the Era of AI


Microsoft Fabric is a new and unified analytics platform in the cloud that integrates various data and analytics services, such as Azure Data Factory, Azure Synapse Analytics, and Power BI, into a single product that covers everything from data movement to data science, real-time analytics, and business intelligence. Microsoft Fabric is built upon the well-known Power BI platform, which provides industry-leading visualization and AI-driven analytics that enable business analysts and users to gain insights from data.

Basic concepts

On May 23rd 2023, Microsoft announced a new product called Microsoft Fabric at the Microsoft Build conference. Microsoft Fabric is a SaaS analytics platform that covers end-to-end business requirements. As mentioned earlier, it is built upon the Power BI platform and extends the capabilities of Azure Synapse Analytics to all analytics workloads. This means that Microsoft Fabric is an enterprise-grade analytics platform. But wait, let’s see what a SaaS analytics platform actually means.

What is an analytics platform?

An analytics platform is a comprehensive software solution designed to facilitate data analysis to enable organisations to derive meaningful insights from their data. It typically combines various tools, technologies, and frameworks to streamline the entire analytics lifecycle, from data ingestion and processing to visualisation and reporting. Here are some key characteristics you would expect to find in an analytics platform:

  1. Data Integration: The platform should support integrating data from multiple sources, such as databases, data warehouses, APIs, and streaming platforms. It should provide capabilities for data ingestion, extraction, transformation, and loading (ETL) to ensure a smooth flow of data into the analytics ecosystem.
  2. Data Storage and Management: An analytics platform needs to have a robust and scalable data storage infrastructure. This could include data lakes, data warehouses, or a combination of both. It should also support data governance practices, including data quality management, metadata management, and data security.
  3. Data Processing and Transformation: The platform should offer tools and frameworks for processing and transforming raw data into a usable format. This may involve data cleaning, denormalisation, enrichment, aggregation, or advanced analytics on large data volumes, including streaming IoT (Internet of Things) data. Handling large volumes of data efficiently is crucial for performance and scalability.
  4. Analytics and Visualisation: A core aspect of an analytics platform is its ability to perform advanced analytics on the data. This includes providing a wide range of analytical capabilities, such as descriptive, diagnostic, predictive, and prescriptive analytics with ML (Machine Learning) and AI (Artificial Intelligence) algorithms. Additionally, the platform should offer interactive visualisation tools to present insights in a clear and intuitive manner, enabling users to explore data and generate reports easily.
  5. Scalability and Performance: Analytics platforms need to be scalable to handle increasing volumes of data and user demands. They should have the ability to scale horizontally or vertically. High-performance processing engines and optimised algorithms are essential to ensure efficient data processing and analysis.
  6. Collaboration and Sharing: An analytics platform should facilitate collaboration among data analysts, data scientists, and business users. It should provide features for sharing data assets, analytics models, and insights across teams. Collaboration features may include data annotations, commenting, sharing dashboards, and collaborative workflows.
  7. Data Security and Governance: As data privacy and compliance become increasingly important, an analytics platform must have robust security measures in place. This includes access controls, encryption, auditing, and compliance with relevant regulations such as GDPR or HIPAA. Data governance features, such as data lineage, data cataloging, and policy enforcement, are also crucial for maintaining data integrity and compliance.
  8. Flexibility and Extensibility: An ideal analytics platform should be flexible and extensible to accommodate evolving business needs and technological advancements. It should support integration with third-party tools, frameworks, and libraries to leverage additional functionality.
  9. Ease of Use: Usability plays a significant role in an analytics platform’s adoption and effectiveness. It should have an intuitive user interface and provide user-friendly tools for data exploration, analysis, and visualisation. Self-service capabilities empower business users to access and analyse data without heavy reliance on IT or data specialists.

These characteristics collectively enable organisations to harness the power of data and make data-driven decisions. An effective analytics platform helps unlock insights, identify patterns, discover trends, and drive innovation across various domains and industries.

Continue reading “Microsoft Fabric: A SaaS Analytics Platform for the Era of AI”

Slowly Changing Dimension (SCD) in Power BI, Part 1, Introduction to SCD

Slowly changing dimension (SCD) is a data warehousing concept coined by the amazing Ralph Kimball. The SCD concept deals with moving a specific set of data from one state to another. Imagine a human resources (HR) system having an Employee table. As the following image shows, Stephen Jiang is a Sales Manager having ten sales representatives in his team:

Image 1: Stephen Jiang is the sales manager of a team of 10 sales representatives

Today, Stephen Jiang got his promotion to the Vice President of Sales role, so his team has grown in size from 10 to 17. Stephen is the same person, but his role is now changed, as shown in the following image:

Image 2: Stephen’s team after he was promoted to Vice President of Sales

Another example is when a customer’s address changes in a sales system. Again, the customer is the same, but their address is now different. From a data warehousing standpoint, we have different options to deal with the changing data depending on the business requirements, leading us to different types of SCDs. It is crucial to note that the data changes in the transactional source systems (in our examples, the HR system or a sales system). We move and transform the data from the transactional systems via ETL (Extract, Transform, and Load) processes and land it in a data warehouse, where the SCD concept kicks in. SCD is about how changes in the source systems are reflected in the data warehouse. These kinds of changes in the source system do not happen very often, hence the term slowly changing. Many SCD types have been developed over the years, and covering them all is out of the scope of this post, but for your reference, we cover the first three types as follows.

SCD type zero (SCD 0)

With this type of SCD, we ignore all changes in a dimension. So, when a person’s residential address changes in the source system (an HR system, in our example), we do not change the landing dimension in our data warehouse. In other words, we ignore the changes within the data source. SCD 0 is also referred to as fixed dimensions.

Continue reading “Slowly Changing Dimension (SCD) in Power BI, Part 1, Introduction to SCD”

Thin Reports, Report Level Measures vs Data Model Measures

The previous post explained what Thin Reports are, why we should care, and how we can create them. This post focuses on a more specific topic: report level measures. We discuss what report level measures are, when and why we need them, and how we create them.

If you are not sure what a Thin Report is, I suggest you check out my previous blog post before reading this one.

What are report level measures?

Report level measures are the measures created by report writers within a Thin Report. Hence, they are available only within the hosting Thin Report. These measures are not written back to the underlying dataset, so they are not available to any other report.

In contrast, data model measures are created by data modellers at the dataset level and are independent of any particular report.

Why and when do we need report level measures?

It is a common situation in real-world scenarios: the business requires a report urgently, but the nuts and bolts of the report have not been created in the underlying dataset yet. For instance, the business needs to present a year-to-date sales analysis to the board, but the year-to-date sales measure does not exist in the dataset yet. The business analyst asks the Power BI developers to add the measure, but they are under the pump delivering other functionalities, and adding a new measure is not even in their project delivery plan. Waiting for the developers to plan the new measure, go through the release process, and make it available in the dataset would simply take too long. This is when report level measures come to the rescue. We can create the missing measure in the Thin Report itself and later share it with the developers to implement as a dataset measure.
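
For instance, a report level measure for the year-to-date scenario above might look like the following DAX sketch, assuming a hypothetical Sales table with an Amount column and a marked date table named Date:

// A report level measure created in the Thin Report (hypothetical table and column names).
// It lives in the report file only and is not written back to the shared dataset.
YTD Sales =
TOTALYTD (
    SUM ( Sales[Amount] ),
    'Date'[Date]
)

Once the dataset developers implement an equivalent measure in the shared dataset, the report level copy can be removed to avoid duplicated logic.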

Continue reading “Thin Reports, Report Level Measures vs Data Model Measures”