Technology is growing fast, and we are enjoying it. We reshape the data we create every day into forms that satisfy our needs. One technology that is used more and more commonly these days is SQL Server Analysis Services Tabular models, SSAS Tabular in short. Lots of organisations decide to go with SSAS Tabular in their new projects, and some have defined new projects to gradually migrate their existing SSAS Multidimensional models to SSAS Tabular. I know there is a big debate around SSAS Multidimensional vs. SSAS Tabular, but my aim here is to avoid that discussion.

As the title implies, this post is about documentation, which I believe is one of the most important parts of every project and also the part most often sacrificed. In this post I explain how to document your SSAS Tabular model in Power BI Desktop and Excel. I know there are products on the internet that can generate documentation in various formats like Word, PDF, HTML and so on. But if you’re looking for a free and somewhat more intuitive way of documenting your SSAS Tabular models with Power BI, then this article is for you. Through this article, we create a documentation tool with Power BI; I call it the SSAS Tabular Model Documenter. In this method we don’t use DMVs at all. For those who are not familiar with DMVs, I briefly explain what they are below; if you’re already familiar with DMVs, you can skip that section.
You can download a copy of SSAS Tabular Model Documenter in Power BI template format (pbit) at the end of this post. It is a Christmas present for you.
What are DMVs?
Dynamic Management Views, DMVs in short, are queries that retrieve metadata information about an instance of SQL Server Analysis Services. DMVs work in both SSAS Multidimensional and SSAS Tabular server modes and can be used to monitor server operations and health. The DMV query structure is very similar to T-SQL: you write a SELECT statement against “$System”, an XMLA schema rowset. A DMV query looks like this:
SELECT * FROM $System.<schemaRowset>
So you can open SQL Server Management Studio, connect to an instance of SSAS (Tabular model for the sake of this post) and run the following query to get lots of information about tables in your Tabular model:
SELECT * FROM $SYSTEM.TMSCHEMA_TABLES
Note: This method only works with SSAS Tabular 2016 and above.
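If you’d rather run a DMV from Power BI Desktop itself, here is a minimal M sketch. The server and database names are hypothetical (replace them with your own), and I’m assuming the Query option passes the DMV text straight through to the server:

let
    // Hypothetical server\instance and database names; replace with your own
    Source = AnalysisServices.Database(
        "localhost\SQL2017TABULAR",
        "AdventureWorks2016",
        [Query = "SELECT * FROM $SYSTEM.TMSCHEMA_TABLES"]
    )
in
    Source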
Document SSAS Tabular without DMVs
In SSAS Tabular 2016 and above there is a tiny metadata database that can be loaded into Power BI Desktop or Excel to document the corresponding SSAS Tabular model. The database is a SQLite database. In the previous post I explained how to visualise SQLite data in Power BI. The requirements for this post are the same as for the previous post, so I encourage you to check it out. Here I just explain how to find the metadata file and how to build a model in Power BI Desktop. I also explain how to do the same in Excel for those of you who would like to add annotations or comments to the outcomes.
Where to Find SSAS Tabular Metadata Database?
Well, it depends on your SSAS Tabular instance configuration. If you kept the default settings, you can find it in your database folder under the “Data” folder. You are looking for the “metadata.sqlitedb” file. For instance, I want to document my “AdventureWorks2016” Tabular model, and my SSAS Tabular instance uses the default configuration, so I can find the “metadata.sqlitedb” file here:
C:\Program Files\Microsoft SQL Server\MSAS14.SQL2017TABULAR\OLAP\Data\AdventureWorks2016.0.db
- The “MSAS14.SQL2017TABULAR” part of the above path differs between instances. My instance name is “SQL2017Tabular”, which is most probably different from yours.
- The “AdventureWorks2016.0.db” part is the name of the database you want to look at.
Document SSAS Tabular in Power BI
As mentioned earlier, the previous post explains how to load SQLite data into Power BI Desktop, so I assume you already know how to load data from the “metadata.sqlitedb” file into Power BI Desktop.
Suggestion: I recommend taking a copy of the “metadata.sqlitedb” file before getting the data into Power BI.
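If you need a refresher from the previous post, here is a minimal M sketch of the connection. It assumes the SQLite3 ODBC driver is installed, and the file path is a hypothetical one pointing at your copy of the metadata file:

let
    // Hypothetical path to your copy of the metadata file
    MetadataFile = "C:\Temp\metadata.sqlitedb",
    // Query the file through the SQLite ODBC driver
    Source = Odbc.Query(
        "Driver={SQLite3 ODBC Driver};Database=" & MetadataFile,
        "SELECT * FROM [Table]"
    )
in
    Source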
After you navigate to the database in Power BI, you can either select all tables or just the tables you need for the documentation. Personally, I loaded the following tables the first time:
“Model”
“Measure”
“KPI”
“Level”
“Hierarchy”
“Column”
“Partition”
“Perspective”
“PerspectiveColumn”
“PerspectiveHierarchy”
“PerspectiveMeasure”
“PerspectiveTable”
“Table”
Power BI automatically detects the relationships, but they are all incorrect, so you have to delete them and recreate them manually. The relationship mapping is as follows:
From Table | From Column | To Table | To Column |
Model | ID | Perspective | ModelID |
Model | ID | Table | ModelID |
Table | ID | Measure | TableID |
Table | ID | Hierarchy | TableID |
Table | ID | Column | TableID |
Table | ID | Partition | TableID |
Hierarchy | ID | Level | HierarchyID |
Perspective | ID | PerspectiveTable | PerspectiveID |
PerspectiveTable | ID | PerspectiveHierarchy | PerspectiveTableID |
PerspectiveTable | ID | PerspectiveColumn | PerspectiveTableID |
PerspectiveTable | ID | PerspectiveMeasure | PerspectiveTableID |
Measure | ID | KPI | MeasureID |
Now you can create some measures like “Number of Tables”, “Number of Perspectives”, “Number of Measures” and so forth, then create charts and tables to build an awesome report.
Look at the “Measure” table and you will see that it contains the DAX expressions used to define the measures. You can also see the partition queries in the “Partition” table, which is really awesome, isn’t it?
So far we have imported data from the “metadata.sqlitedb” file, which is really cool, but we are not finished yet. We will need to do some data transformation in Query Editor. Before going to the next step, I renamed all columns called “Name” to more appropriate names like “Table Name” or “Perspective Name”.
Data Transformation in Power Query
If you look at the “PerspectiveTable” table, you see something like the following:
As you can see, there is a “TableID” column. So if we want to see which tables are used in the perspectives, we need the table names. We can handle this in various ways; I personally prefer to do this sort of thing in Power Query. Especially when adding a new column, it is best to handle it in Power Query rather than DAX if possible.
What I am going to do is add a “Table Name” column to “PerspectiveTable”. The “Table Name” column comes from the “Table” table, so we somehow have to look up the “Table” table to get the “Name”. There is a very handy function in Power Query, “Table.NestedJoin”, which in the UI is called “Merge Queries”.
Open Query Editor to merge the “PerspectiveTable” query with “Table” on the “ID” column from “Table” and “TableID” from “PerspectiveTable”. The join type is “Inner Join”.
- Click “PerspectiveTable” in the “Queries” pane
- From the top right, click “Merge Queries”
- In the “Merge” window, select “Table” from the dropdown list
- Select the “TableID” column from “PerspectiveTable” and the “ID” column from “Table”
- Set the “Join Kind” to “Inner Join”, then click OK
You now have a new “Table” column. Expand the column, select the “Table Name” column from the list, then click OK. The “Table Name” column is now added to the table.
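For reference, the equivalent Power Query (M) for these steps looks roughly like this. It is a sketch: it assumes you already renamed “Name” to “Table Name” in the “Table” query as described earlier, the “PreviousStep” name is a placeholder for whatever step precedes the merge in your query, and your generated step names will differ:

let
    // ...earlier steps of the PerspectiveTable query; PreviousStep is a placeholder...
    // Inner join to the Table query on TableID = ID
    Merged = Table.NestedJoin(PreviousStep, {"TableID"}, #"Table", {"ID"}, "Table", JoinKind.Inner),
    // Keep only the Table Name column from the joined table
    Expanded = Table.ExpandTableColumn(Merged, "Table", {"Table Name"})
in
    Expanded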
You can do the same for the “PerspectiveColumn”, “PerspectiveMeasure” and “PerspectiveHierarchy” tables to add the relevant object names to each.
Parameterise Data Source
Query parameters have been around for quite a long time. I wrote a series of blog posts about Query Parameters. It is wise to parameterise the data source for our model so that we can easily switch to a different data source.
- In Query Editor, click “Manage Parameters”
- Click “New”
- Enter a “Name” and “Description”
- Tick “Required”
- For our model, we can leave “Type” and “Suggested Values” as they are
- Enter the “metadata.sqlitedb” file path in “Current Value”, then click OK
Now you need to modify the queries to use the parameter:
- Click “Advanced Editor”
- Modify the “Source” line to reference the parameter (see the sketch after these steps)
- Click “Close & Apply” to load the changes into the model
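A sketch of what the modified “Source” line might look like, assuming the parameter is named “MetadataFilePath” (a name I made up above) and you connected through the SQLite3 ODBC driver as before:

let
    // MetadataFilePath is the query parameter created above
    Source = Odbc.Query(
        "Driver={SQLite3 ODBC Driver};Database=" & MetadataFilePath,
        "SELECT * FROM [Table]"
    )
in
    Source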
Defining Useful Measures
It is now time to define some simple and useful measures: a measure to show the number of hidden tables or hidden columns, the number of defined KPIs, the number of hierarchies defined in the SSAS Tabular model, and so forth. I put all measures in a Measure Table; I have written about Measure Tables before if you want to learn more.
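If you have not built a Measure Table before, a minimal sketch is simply a blank query that serves as a home for measures; the column name below is arbitrary, and you can hide it after loading:

let
    // One dummy column, zero rows; measures get attached to this table later
    MeasureTable = #table(type table [Dummy = Int64.Type], {})
in
    MeasureTable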
Here are the measures I defined, you probably want to add some more.
Hidden Columns = CALCULATE(COUNTROWS('Column'), 'Column'[IsHidden]=1)
Hidden Tables = CALCULATE(COUNTROWS('Table'), 'Table'[IsHidden]=1)+0
Hierarchies = COUNTROWS('Hierarchy')
KPIs = COUNTROWS('KPI')
Measures = CALCULATE(COUNTROWS('Measure'), 'Measure'[KPIID]=BLANK())
Partitions = COUNTROWS('Partition')
Perspective KPIs = CALCULATE(COUNTROWS('PerspectiveMeasure'), 'PerspectiveMeasure'[KPIID]<>BLANK())
Perspective Measures = CALCULATE(COUNTROWS('PerspectiveMeasure'), 'PerspectiveMeasure'[KPIID]=BLANK())
Perspective Tables = COUNTROWS('PerspectiveTable')
Perspectives = COUNTROWS('Perspective')
Tables = COUNTROWS('Table')
Creating Reports
The final step is the most interesting one. Now we can see what is happening in our SSAS Tabular Model.
I built a report like the one below, which contains two pages: one page provides general information about the model, and the other shows more details about measures and KPIs, including expressions and formatting.
There are some drawbacks to the current solution, listed below. You may have a remedy for them; if so, please share it with us in the comments section below.
Numeric Date/Time
Looking at any of the tables loaded into Power BI, you will find at least one column that is supposed to show a relevant date/time: “ModifiedTime”, “StructureModifiedTime” or “RefreshedTime”. These columns contain important date/time information, but this is what you see when you load the data into Power BI:
I know, I know… They look like Unix epoch timestamps, but they’re not!
I converted the values to DateTime using both 1970 and 1900 as the starting date, and this is what I got; both are quite wrong:
Here are the Power Query (M) expressions I used:
1900 Unix Epoch to Date = #datetime(1900,1,1,0,0,0)+#duration(0,0,0,[ModifiedTime])
1970 Unix Epoch to Date = #datetime(1970,1,1,0,0,0)+#duration(0,0,0,[ModifiedTime])
As a matter of fact, the values loaded into the model are wrong integer values! Let me explain. When you connect to a SQLite database via ODBC, you have the option of loading numeric values as either Int32 or Int64. If you browse the SQLite metadata database, you will see that “ModifiedTime” and all the other date columns are BigInt, not Int.
This is a screenshot of the “Table” table from “metadata.sqlitedb” file open in “DB Browser for SQLite”.
You may have already noticed that the values do not look like normal Unix epoch values that we can convert to date/time. They are indeed epoch timestamps in nanoseconds, yes, nanoseconds, since 1st January 1601!
To get the correct values in Power BI, we have to enable BigInt numbers in the ODBC connection string or DSN. You can do that easily by adding “BigInt=True” to the connection string, or alternatively you can create a User DSN in ODBC and tick the corresponding option.
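Continuing the earlier sketch, the connection string would then look something like this (again assuming the SQLite3 ODBC driver and the hypothetical “MetadataFilePath” parameter):

let
    // BigInt=True makes the driver return 64-bit integers instead of truncated Int32 values
    Source = Odbc.Query(
        "Driver={SQLite3 ODBC Driver};Database=" & MetadataFilePath & ";BigInt=True",
        "SELECT * FROM [Table]"
    )
in
    Source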
However, when we enable BigInt, all numeric values, regardless of their original data types, get converted to BigInt. It gets worse when you load that data into Power BI, as Power BI treats all BigInt values as Binary. That means you have to convert all Binary values to either Int or Int64 in Query Editor, which is quite a headache. But it works after all!
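With the columns converted back to Int64, the conversion itself is a one-liner in Power Query. This is a sketch based on the observation above that the values are nanoseconds since 1st January 1601 (#duration takes seconds in its fourth argument, hence the division):

1601 Nanosecond Epoch to Date = #datetime(1601,1,1,0,0,0)+#duration(0,0,0,[ModifiedTime]/1000000000)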
Here is a screenshot of the correctly converted epoch timestamps as date values:
I am not going to expand on this any further, as it would make this post even longer.
If you have any better idea, I’m really curious to hear about it, so please leave your comments down below.
Note: If you run DMVs, you do not get epoch values; you get nice and tidy date/time values instead.
There is a “DataSource” table in the metadata file which I have not discussed above. The “DataSource” table contains the connection string to the source data used in the SSAS Tabular model. The connection string is encrypted (it looks like Base64) and I could not decrypt the value.
If you run DMVs you’ll get the decrypted value in ConnectionString column.
Once again, if you know how to do that, then please leave your comments in the comment section below this post.
I still use DMVs to get some useful information like the connection string and normal date/time values. However, there is room for the tool to quickly go through the “metadata.sqlitedb” file and pull lots of insights about your SSAS Tabular model.
I made the tool available for you to download for free.
This is my Christmas present for you.
Merry Christmas and see you in 2018.
Cheers!
Download SSAS Tabular Documenter in Power BI