I previously wrote a blog post explaining how to rename all columns in a table in one go with Power Query. One of my visitors raised a question in the comments about the possibility of renaming all columns from all tables in one go. Interestingly enough, one of my customers had a similar requirement. So I thought it would be good to write a Quick Tip explaining how to meet the requirement.
The Problem
You are connecting to the data sources from Power BI Desktop (or Excel or Dataflows). The columns of the source tables are not user friendly, so you need to rename all columns. You already know how to rename all columns of a table in one go, but you would like to apply the renaming pattern to all tables.
The Solution
The solution is quite simple. We need to connect to the source, but we do not navigate to any tables straight away. In my case, my source is an on-premises SQL Server. So I connect to the SQL Server instance using the Sql.Database(Server, DB) function in Power Query, where Server and DB are query parameters. Read more about query parameters here. The results would look like the following image:
The results of running the Sql.Database(Server, DB) function
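In case you want to follow along in the Advanced Editor, the starting query looks like the following; a minimal sketch, assuming Server and DB are text query parameters:

let
    // Server and DB are query parameters holding the SQL Server instance and database names
    Source = Sql.Database(Server, DB)
in
    Source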
As you can see in the above image, the results include Tables, Views and Functions. We are not interested in Functions, so we simply filter them out. The following image shows the results after applying the filter:
Filtering out SQL Server Functions
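The filter itself is a single Table.SelectRows step; a minimal sketch, continuing from the previous step and assuming the navigation table exposes the usual Kind column:

let
    Source = Sql.Database(Server, DB),
    // The navigation table has a Kind column with values such as "Table", "View" and "Function"
    FilteredRows = Table.SelectRows(Source, each [Kind] <> "Function")
in
    FilteredRows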
If we look closer at the Data column, we see that it is indeed a Structured Column. The structured values of the Data column are Table values. If we click on a cell (not on the Table value of the cell), we can see the actual underlying data, as shown in the following image:
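Because every cell of the Data column is itself a table, the renaming logic from the earlier post can be applied to all of them in one transformation. Here is a minimal sketch, assuming fnRenameColumns is a custom function (a hypothetical name) that takes a table and returns it with its columns renamed:

let
    Source = Sql.Database(Server, DB),
    FilteredRows = Table.SelectRows(Source, each [Kind] <> "Function"),
    // Apply the renaming function to every table held in the Data column
    RenamedTables = Table.TransformColumns(FilteredRows, {{"Data", each fnRenameColumns(_)}})
in
    RenamedTables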
I have been involved in a Power BI development over the past few days. I got some data exported from various systems in different formats, including Excel, CSV and OData. The CSV files are data export dumps from an ERP system. Working with ERP systems can be very time consuming, especially when you don't have access to the data model and you get the data in raw format in CSV files. It is challenging, as in ERP systems the table names and column names are not user friendly at all, which makes sense. ERP systems are used in various environments for many different customers with different requirements. So if we get our hands on the underlying data model, we see configuration tables keeping the column names. Some of the columns are custom built to cover specific needs. The tables may have many columns that are not necessarily useful for analytical purposes. So it is quite critical to have a good understanding of the underlying entity model. Anyhow, I don't want to go off-topic.
The Problem
So, here is my scenario. I received about 10 files, including 15 tables. Some tables are quite small, so I didn't bother with them. But some of them are really wide, having between 150 and 208 columns. Nice!
Looking at the column names, they could not be more difficult to read, and I have multiple tables like that. So I have to rename those columns to something more readable; more on that side of the story later.
Background
I emailed my customer back, asking for their help. Luckily they have a very nice data expert who understands their ERP system as well as the underlying entity model. I emailed him all the current column names and asked if he could provide more user-friendly names. He replied with a mapping table in Excel. Here is an example showing the Column Names Mapping table:
Column Names Mapping
I was quite happy with the mapping table. Now, the next step is to rename all columns based on the mapping table. Ouch! I have almost 800 columns to rename. That is literally a pain in the neck, and it doesn't sound quite right to burn the project time renaming 800 columns.
But wait, what about automating the rename process? Like writing a custom function to rename all columns at once? I recall reading an excellent blog post about renaming multiple columns in Power Query that Gilbert Quevauvilliers wrote in 2018. I definitely recommend looking at his blog post. So I must do something similar to what Gilbert did: creating a custom function that takes the original column names and brings back the new names. Then I use the custom function in each table to rename the columns. Easy!
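To give you an idea, here is a minimal sketch of such a function. It assumes the Excel mapping table is loaded into a query called ColumnMapping with columns named "Original Column Name" and "New Column Name" (all three names are hypothetical; adjust them to your own mapping table):

// fnRenameColumnsFromMapping: renames the columns of a table based on the ColumnMapping query
(SourceTable as table) as table =>
let
    OldNames = Table.ColumnNames(SourceTable),
    // Keep only the mapping rows that match a column of this particular table
    ApplicableMappings = Table.SelectRows(ColumnMapping, each List.Contains(OldNames, [Original Column Name])),
    // Turn the mapping rows into a list of {old name, new name} pairs
    RenamingPairs = Table.ToRows(Table.SelectColumns(ApplicableMappings, {"Original Column Name", "New Column Name"})),
    Renamed = Table.RenameColumns(SourceTable, RenamingPairs)
in
    Renamed

Invoking it in each table's query is then a one-liner, for example fnRenameColumnsFromMapping(PreviousStep).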
I have been working with OData data sources in Power BI for a while now. One challenge is that I almost always do not have a good understanding of the underlying data model. It can be really hard and time consuming if there is no one in the business who understands the underlying data model. I know we can use $metadata to get the metadata schema from the OData feed, but let's not go there. I am not an OData expert, but here is the thing for someone like me: I work with various data sources which I am not necessarily an expert in, but I need to understand what the entities are, how they are connected, etc. What if I do not have access to any SMEs (Subject Matter Experts) who can help me with that?
So, having gotten more involved with OData options, let's get into it.
The custom function below accepts an OData URL, then discovers all tables, their column count, their row count (more on this later), the number and list of related tables, and the number and list of columns of type text, of type number and of Decimal.Type.
// fnODataFeedAnalyser
(ODataFeed as text) =>
let
    Source = OData.Feed(ODataFeed),
    // Turn the navigation table into a two-column table of table names and their data
    SourceToTable = Table.RenameColumns(
        Table.DemoteHeaders(Table.FromValue(Source)),
        {{"Column1", "Name"}, {"Column2", "Data"}}
    ),
    // Keep only the rows whose Data value is a table
    FilterTables = Table.SelectRows(
        SourceToTable,
        each Type.Is(Value.Type([Data]), Table.Type) = true
    ),
    // Table.Schema exposes each column's name, kind and type name
    SchemaAdded = Table.AddColumn(FilterTables, "Schema", each Table.Schema([Data])),
    TableColumnCountAdded = Table.AddColumn(
        SchemaAdded,
        "Table Column Count",
        each Table.ColumnCount([Data]),
        Int64.Type
    ),
    TableCountRowsAdded = Table.AddColumn(
        TableColumnCountAdded,
        "Table Row Count",
        each Table.RowCount([Data]),
        Int64.Type
    ),
    // Columns of type table point to related tables
    NumberOfRelatedTablesAdded = Table.AddColumn(
        TableCountRowsAdded,
        "Number of Related Tables",
        each List.Count(Table.ColumnsOfType([Data], {Table.Type}))
    ),
    ListOfRelatedTables = Table.AddColumn(
        NumberOfRelatedTablesAdded,
        "List of Related Tables",
        each
            if [Number of Related Tables] = 0 then
                null
            else
                Table.ColumnsOfType([Data], {Table.Type}),
        List.Type
    ),
    NumberOfTextColumnsAdded = Table.AddColumn(
        ListOfRelatedTables,
        "Number of Text Columns",
        each List.Count(Table.SelectRows([Schema], each Text.Contains([Kind], "text"))[Name]),
        Int64.Type
    ),
    ListOfTextColumnsAdded = Table.AddColumn(
        NumberOfTextColumnsAdded,
        "List of Text Columns",
        each
            if [Number of Text Columns] = 0 then
                null
            else
                Table.SelectRows([Schema], each Text.Contains([Kind], "text"))[Name]
    ),
    NumberOfNumericColumnsAdded = Table.AddColumn(
        ListOfTextColumnsAdded,
        "Number of Numeric Columns",
        each List.Count(Table.SelectRows([Schema], each Text.Contains([Kind], "number"))[Name]),
        Int64.Type
    ),
    ListOfNumericColumnsAdded = Table.AddColumn(
        NumberOfNumericColumnsAdded,
        "List of Numeric Columns",
        each
            if [Number of Numeric Columns] = 0 then
                null
            else
                Table.SelectRows([Schema], each Text.Contains([Kind], "number"))[Name]
    ),
    NumberOfDecimalColumnsAdded = Table.AddColumn(
        ListOfNumericColumnsAdded,
        "Number of Decimal Columns",
        each List.Count(
            Table.SelectRows([Schema], each Text.Contains([TypeName], "Decimal.Type"))[Name]
        ),
        Int64.Type
    ),
    ListOfDecimalColumnsAdded = Table.AddColumn(
        NumberOfDecimalColumnsAdded,
        "List of Decimal Columns",
        each
            if [Number of Decimal Columns] = 0 then
                null
            else
                Table.SelectRows([Schema], each Text.Contains([TypeName], "Decimal.Type"))[Name]
    ),
    #"Removed Other Columns" = Table.SelectColumns(
        ListOfDecimalColumnsAdded,
        {
            "Name",
            "Table Column Count",
            "Table Row Count",
            "Number of Related Tables",
            "List of Related Tables",
            "Number of Text Columns",
            "List of Text Columns",
            "Number of Numeric Columns",
            "List of Numeric Columns",
            "Number of Decimal Columns",
            "List of Decimal Columns"
        }
    )
in
    #"Removed Other Columns"
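Invoking the function is then just a matter of passing it a feed URL; for example, against the public Northwind sample feed (swap in your own OData URL):

let
    // Analyse all tables exposed by the OData feed
    Analysis = fnODataFeedAnalyser("https://services.odata.org/V4/Northwind/Northwind.svc/")
in
    Analysis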
One of the most important aspects of the software development life cycle is having control over different versions of a solution, especially in a project where more than one developer is involved in the implementation. Just like when you create a project in Visual Studio and commit the changes back to a source control system like GitHub or Azure DevOps, it is advisable to keep the history of different versions of your Power BI reports. What we expect from a source control solution is to keep track of all changes happening in the source code while developing a project, so you can easily roll back to a previous state if you want to.
The other benefit of having a source control process in place is when multiple developers are working on a single project. Every single one of them makes changes in the source code, then they commit all the changes to the source control server without overwriting each other's work.
With Power BI, things are a bit different though. Power BI report files are PBIX files, which are stored in a binary format (well, a PBIX is basically a zip file, isn't it?), and at the time of writing this post there is no official way to enforce Power BI source control in source control solutions like GitHub or Azure DevOps (yet).
Microsoft announced a fantastic feature last week (6/05/2020) named “Deployment Pipelines”, which does exactly what we are after, but it is currently a preview feature available only to organisations with Power BI Premium. So it is out of the game for the majority of us.
Having said that, there is still a way to keep a history of changes in the shape of different versions of PBIX files. This is called Version Control.
There are several ways you can enable version control over your PBIX files while developing the report. Regardless of the version control platform, you need to think about having multiple environments and who can access them to do what.
Environment: Development
Accessible to: Developers
Description: Data modellers and report writers access this environment for development purposes.

Environment: User Acceptance Test (UAT)
Accessible to: Developers, SMEs, Technical Leads, Power BI Admins
Description: After the development is finished, the developers deploy the solution to the UAT environment. The solution will then be tested by SMEs (Subject Matter Experts) to make sure the business requirements are met.

Environment: Pre-prod (optional but recommended)
Accessible to: Technical Leads, Power BI Admins
Description: After the solution has passed all UAT test scenarios, Technical Leads or Power BI Admins deploy it to Pre-prod for final checks to make sure all data sources correctly point to production data sources and all reports and dashboards work as expected.

Environment: Production
Accessible to: Technical Leads, Power BI Admins, End Users
Description: After the pre-prod checks are completed, Technical Leads or Power BI Admins deploy the solution to the Production environment, which is then available to the end users.
Version Control Options
If your organisation does not have a Premium capacity, the “Deployment Pipelines” feature is not available to you, so you need to come up with an alternative solution. In this section, I name some Version Control options available to you.