Good Data Models Lead to Great BI

When data is organized and defined to match business processes, it serves as the jet fuel to bring about actionable insight and a thriving self-service culture. Well-structured data models are the backbone that support rapid, accurate analytics. How strong are your data models?

Three common signs your data models may be letting you or your organization down:

  1. Excessively long report run times
  2. Difficulty finding the data you need from all sources
  3. Having to spend more time writing SQL code than doing analysis

In addition to addressing the importance of data modeling, we also discuss database basics and:

  • Why a data warehouse and data marts are still important, even today
  • Why what you call a data warehouse probably isn’t really one
  • The impact of the source data on your reporting
The discussion focuses on the data structures themselves and is independent of any front-end tool. This webinar is for everyone, regardless of what analytics platform you use.

Presenter

Andy Kinnier
Consultant
Senturus, Inc.

A certified Microsoft business intelligence architect and developer with over 20 years of experience in software development, Andy has made regular appearances at the Power Platform World Tour events over the years. He also serves as assistant organizer of the NJ/NY branch of the Power BI User Groups.

Machine transcript

Welcome to Senturus’s webinar series. Today’s Soup du Jour is good data models lead to great BI.

0:28
You’ve got questions, we’ve got answers.

0:30
Please feel free to use the GoToWebinar control panel to make this session interactive. We are usually able to respond to your questions while the webinar is in progress. And if we don’t reply immediately, we’ll definitely cover it, either in the Q and A section at the end, or via a written response document that we’ll post on the Senturus.com website.

0:54
The first question we usually get is, can I get a copy of this presentation, and unfortunately the answer is no.

1:01
Just kidding. You can absolutely get a copy. It is available on Senturus.com under the Resources tab, and then under the Knowledge Center and we just shared the link in the GoToWebinar control panel.

1:15
Today’s agenda, we’ll start off with some introductions.

1:23
We’ll talk about whether data modeling is still relevant.

1:26
Why is data modeling important? What is a data model? We’ll do some live demos. We’ll do a quick overview of Senturus and some additional resources that we have, and then we’ll end up with a Q and A on some of your questions and hopefully get answers to you.

1:46
Joining us today is Andy Kinnier.

1:48
Andy is a certified Microsoft Business Intelligence architect, and a developer with over 20 years of experience in software development.

1:56
Andy has made regular appearances at the Power Platform World Tour events over the years. He also serves as the assistant organizer of the New Jersey/New York branch of the Power BI User Groups.

2:07
My name is Todd Schuman and I’ll be the host today.

2:09
I run the installations, upgrades and optimization practice at Senturus. You may recognize me from other amazing webinars such as Cognos Analytics Performance Tuning Tips and Tips for Installing Cognos 11.

2:20
With that said, I’d like to turn it over to Andy, who will take you through today’s presentation.

2:26
Thanks, Todd appreciate it. Hello, everyone. What we’ll talk about today is whether or not data modeling is still relevant.

2:37
And let’s dive right in: new tech replaced modeling, right? This is kind of the big question out there with big data technologies, taking your data and throwing it into a data lake. A lot of people are out there saying, oh, you just need to store your data in a data lake, point a third party tool at it, and you can do analytics right away and get the answers you need.

3:01
And there’s a lot to be said for that, with big data technologies.

3:04
And one of the reasons it came about was that it was considered too hard to load into a data warehouse. We’re going to talk a little bit more about what a data warehouse is, but it is essentially taking a dimensional model.

3:16
And then loading that dimensional model is the data warehousing process.

3:22
The compare and contrast we’re going to talk about a little bit today is whether or not big data technologies have discarded the need for a good dimensional model.

3:32
And there are claims that that’s the case.

3:35
But very recently there’s been a change in the big data world, where they’ve actually created some technologies where you can do some of those things. Like, for instance, inserts, updates, and deletes on files that are in a data warehouse, excuse me, in a data lake, and then have the same type of analytics as that dimensional model.

3:56
So it’s interesting that for a while people were saying the dimensional model is no longer needed, that you don’t have to spend your time doing that.

4:03
But now since the technology is there to do it, they’ve come up with what they call a data lake house, which is one way of achieving both big data technologies and a dimensional model in a data lake environment.

4:17
So it’s an interesting topic at this point.

4:21
The lake house, the data lake house, what does it attempt to do? There’s a data warehouse that does one set of things, a data lake that does another, and the data lake house kind of combines the two. And we’re back, once again, to the data model or dimensional model that we build, and it lives in both the data lake house and the data warehouse pieces.

4:39
These are big data technologies. These are high minded concepts from an architectural perspective in an organization.

4:46
How do we go about getting the data out to our people so that they can make good business decisions? And is the data model process the right way to go, or is it even still useful in today’s world?

5:03
You’ll notice a convergence, and that’s what the prior slide had just shown.

5:09
You know, big data processing is now heading towards dimensional modeling.

5:13
That’s the data lake house concept. And data warehousing has had big data processing for a while.

5:19
They’ve noticed that with very large companies, with very large amounts of data coming in, how do we load a data warehouse overnight? It takes 30 hours for me to load my data warehouse, we have so much data. So you couldn’t do that. Amazon couldn’t do that. Facebook couldn’t do that.

5:35
All these companies had tremendous amounts of data coming in, so that data warehousing concept had to be put to the side.

5:42
And big data concepts came into play.

5:44
So they came up with techniques, like the MapReduce technique, which evolved into a number of tools doing something similar, so that you can scale out processing power and memory and then do analytics on very large amounts of data.

6:03
And with the data warehousing tools that were out there, if you do have the ability to load a data warehouse overnight, or on a certain cadence, say every hour or so, we still have very large amounts of data.

6:18
And we need to be able to scale that out with this technology of massively parallel processing, or MPP, appliances that were created some 15 to 20 years ago in the data warehousing world and are now part of the cloud platforms out there.

6:35
So data warehousing has evolved into big data, in terms of scaling out to multiple processors and memory.

6:47
So, why is data modeling important? What does it do for us, what purpose does it serve? And this is just a great question from a business standpoint.

6:54
And I’ve had many conversations with people, especially on the big data side, good big data architects who are very good at the processes they handle.

7:03
And normally, I should say, I’ve had many times where they just don’t understand why we want a data model or why we’d put the effort into building it. And there is effort! There is upfront effort, that’s important. But.

7:20
It’s always important to show the end result of it, and that’s one of the things we’ll do in this.

7:25
In this webinar, we’re going to show some demos at the end as to why a data model is still important and relevant, as we said before.

7:37
From a bullet point perspective: ease of use. It’s easy to understand.

7:41
So we’re used to seeing dimensions and fact tables. Dimensions represent the nouns in our organization; for nouns, you would say a dimension is product, company, date.

7:56
You know, these are things that we understand from a company perspective. I want information about a customer.

8:03
It makes sense to have a dimension called DimCustomer or DimProduct.

8:08
And then fact tables that go with it. So a fact table is an event in an organization: I made a sale. The sale gets

8:15
recorded as a transaction in a database.

8:19
And we put them into fact tables in a dimensional model.

8:22
And from that perspective, we know that a fact table has an event that’s associated with it.

8:28
So if I’m looking at fact sales orders, I know what that is: it’s an order that came in, and there’s a certain date that applies to it.

8:35
And if I want to see my customers broken down by their orders, then I have a relationship between fact and dimension, and it’s easy enough to drag and drop.

8:46
In a third party tool that’s made for it.

8:49
It is very performant. There’s a process here I won’t go too much into at this point.

8:54
But denormalization: when we take something and make it into a customer table, it will be a very wide table with many columns. It’s not going to be a table that relates to another table, say an address table, which relates then to a state table, which then relates to, let’s say, a country table.

9:13
So in this case, we have one customer table, we move all those fields into that customer table, and we just have one relationship back to the fact table.

9:22
And that makes this highly performant, especially with the column store indexing that’s come out over the last 10 years or so. It’s made for that sort of process to be very fast.
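
To make that concrete, here is a rough sketch of the denormalization step, written as the kind of SQL that would flatten the normalized source tables into one wide customer row. All table and column names here are illustrative assumptions, not from the demo:

    -- Flatten customer -> address -> state -> country into one wide row per customer.
    -- Source table and column names are hypothetical.
    SELECT c.CustomerID   AS CustomerBK,   -- business key from the source system
           c.FirstName,
           c.LastName,
           a.City,
           s.StateName,
           co.CountryName
    FROM   Customer AS c
    JOIN   Address  AS a  ON a.AddressID  = c.AddressID
    JOIN   State    AS s  ON s.StateID    = a.StateID
    JOIN   Country  AS co ON co.CountryID = s.CountryID;

That result set is what gets loaded into the single, wide customer dimension, so the reporting model only ever sees one join back to the fact table.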

9:34
Tracking history: that’s a technique we’ll show later on, where we can track history within a dimension and tie that history into a fact table, so that you don’t have to know what’s taken place behind the scenes.

9:46
These models are ready for ad hoc analysis. A lot of the tools are used to that dimensional fact table approach and they bring that model in.

9:54
And then you do a lot of drag and drop using third party tools such as Power BI, Cognos, Qlik, Tableau, amongst others.

10:06
The first reason for having a model goes back to the nineties, when they said, you know, I want to make a report on something.

10:12
Well, let’s just say an application like Amazon first came out. It’s got a transactional database behind the scenes. I go to place an order.

10:24
It had a very efficient database that took my order from the list of products, took my information, and created an order in their system, and that’s called transactional processing in the database world.

10:38
And it’s great for feeding an application. So if I call up and I’m saying, where’s my order, it hasn’t shipped yet, then someone looks on screen, pulls up a form, says, I want to see this order number from Mr. Kinnier, and there it is. Those databases were built to make that process very efficient.

10:55
Now, the problem came about when some analysts said, well, I want to see what this product has done over the last six months, and then I want to compare it to the same six month period from the prior year.

11:06
And so someone would write a very complicated piece of code and then run it on the system. And in very large systems,

11:14
it may take a half-hour to run, and during that half-hour, when people are trying to place orders, or customer service is trying to service requests, the system is now bogged down by that reporting requirement, that query that’s running in the background.

11:30
And so the dimensional model came about, taking that reporting work off of the application side.

11:39
So that it doesn’t interfere with people, basically, giving you money, right? Which is the idea behind the transactional database.

11:47
You want people spending money in your application.

11:52
Cost is an interesting part of this. Cost has always been what a lot of people think of as a downside to the data warehousing and dimensional modeling process. And there is upfront cost. Yes, there is.

12:04
But over time, the reusability and the scalability of the model reduce the amount of time spent on query and report development, and that’s an important part.

12:14
I’ve seen situations where, as an analyst at a company, you spend a lot of time pulling data into Excel and mashing it up with VLOOKUPs, putting in calculations. And every week, I have to do this. Now I have to pull this new data in.

12:32
It takes me time to do that. I’ve got to parcel it out to different people in the company so that they can see it. That’s all X amount of time from people who should be spending their time making business decisions. And that’s the purpose of analytics.

12:48
Given the data, I look at it and I see something, say, we need to create a campaign now around one product line.

13:00
And now I’m going over and making a presentation to the marketing department.

13:04
And that’s what analytics should do for a company: make the company better in some fashion or another.

13:11
And we want analysts to be spending their time thinking about that, to make those decisions, rather than working on bringing all this data together and mashing it up, which a lot of analysts do spend a lot of time on. So the model does handle that. And even though there is upfront cost, in the end you should be a much more efficient company because of the process.

13:34
And you should be a much more profitable company, because you’ll make better, more timely business decisions, and that ends up in more profit. So what is the cost of dimensional modeling? And how does that all work together?

13:53
So without a dimensional model.

13:57
You wouldn’t be able to do some of the historical changes, and we use these terms.

14:01
SCD 1 and SCD 2: SCD stands for slowly changing dimension, which tracks some of the historical changes, and there are different types of that.

14:12
By forming dimensions and facts, the user doesn’t need to know about the programming.

14:17
Once it’s done, it gets handled overnight and it’s all set: the fact tables and dimensions are tied together. So the user just has to drag and drop and get that. Now, if you are doing that off a file, or files from a data lake, or just cobbling data together, typically you don’t have the ability to do that.

14:39
This is one of the big advantages of having a dimensional model: that ability to track historical changes over time.

14:47
Shared dimensions are another big part of the process: having one dimension that hooks into many different business processes.

15:00
Fact tables are typically business processes.

15:03
A sales order, a purchase order, inventory: these are real business processes that hook into a single product dimension and a single date dimension.

15:13
And because of that, we can report on those things over time and in the same report.

15:21
And it’s all there for you, once again, the whole idea of providing this to an analyst.

15:25
So that becomes drag and drop, filter and slice type of functionality.

15:30
Then the date dimension. And this is something that a lot of third party tools now have down pretty solid.

15:35
They have a way, you know, you can right click a date, I believe, in Tableau, where you just say, let me see the last six months.

15:41
Yeah, so everything’s relative to whatever you’re trying to do with the single date, and a date dimension in the model is something that gives you a lot of flexibility. So I have a date on my fact table, one date.

15:52
And I have it connected to the date dimension.

15:55
And so if I want to see analytics over the last four years, then I can choose calendar year and take a look at it.

16:04
And then, to see it by month, I just take year out, grab the month field, pull that in, and off it goes. So it’s very flexible. And with a date dimension, you can have many, many columns. And the other part of it is, you know, it’s connected with one key.

16:21
You can have many fields that are normally not available unless your business has a particular need for them; fiscal calendars are a good example.
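
As a minimal sketch of what such a date dimension can look like in the database (the column names are assumed for illustration; real date dimensions often run to dozens or even hundreds of columns):

    -- One row per calendar day; the surrogate key is what every fact table's
    -- date column points at.
    CREATE TABLE DimDate (
        DateSK          INT         NOT NULL PRIMARY KEY,  -- e.g. 20210430
        FullDate        DATE        NOT NULL,
        CalendarYear    INT         NOT NULL,
        CalendarMonth   INT         NOT NULL,
        MonthName       VARCHAR(10) NOT NULL,
        CalendarQuarter INT         NOT NULL,
        FiscalYear      INT         NOT NULL,               -- e.g. a fiscal year starting in April
        FiscalMonth     INT         NOT NULL,
        IsWorkday       CHAR(1)     NOT NULL,               -- 'Y' / 'N'
        IsHoliday       CHAR(1)     NOT NULL                -- 'Y' / 'N'
    );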

16:35
What is a data model? What are we talking about? I’ve tossed some terms around here; I hope everyone can follow.

16:43
But let me take a moment to work through some of that.

16:46
The data model versus the data warehouse: what are they?

16:51
So a data model could be anything, in the sense of what I mentioned before with Excel: if you pull different sources into Excel and mash them up and create some calculations, that is a data model.

17:03
It is a simplistic one.

17:04
It’s done ad hoc, but it’s still considered a data model.

17:09
A dimensional model takes it a step further.

17:14
It’s a very specific construct, dimensions and facts. There are different types of dimensional modeling, but the standard way, and the one we demo today, is the Kimball method data model, which is comprised of dimensions and facts. A data warehouse is a place where we store those dimensions and facts in the form of tables in a database.

17:38
And when the data gets loaded into the data warehouse, it goes through many processes, ETL or ELT, which stands for either extract, transform, and load, or, in the other model, extract, load,

17:51
and transform. In the end, you end up with tables that are either dimensions or facts in a dimensional model. And they are database tables.

18:03
Or in the new data Lake house experience, that could be files, in a data lake.

18:12
All right.

18:12
But essentially, when I say tables or files, we’re looking at columns and rows, all inter-related.
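
For example, a pared-down pair of such tables might look like the following. This is only a sketch; the names, data types, and columns are assumptions for illustration, not the demo schema:

    -- A dimension: one row per customer version, wide and descriptive.
    CREATE TABLE DimCustomer (
        CustomerSK  INT IDENTITY(1,1) NOT NULL PRIMARY KEY, -- surrogate key
        CustomerBK  VARCHAR(20)  NOT NULL,                  -- business key from the source system
        FirstName   VARCHAR(50)  NULL,
        LastName    VARCHAR(50)  NULL,
        City        VARCHAR(50)  NULL,
        StateName   VARCHAR(50)  NULL,
        StartDate   DATE         NOT NULL,                  -- when this version became effective
        EndDate     DATE         NOT NULL,                  -- 9999-12-31 for the current version
        IsCurrent   BIT          NOT NULL
    );

    -- A fact: one row per sales event, surrogate keys plus measures.
    -- (A surrogate or composite primary key is usually added as well.)
    CREATE TABLE FactSalesOrder (
        DateSK       INT           NOT NULL,                -- points to DimDate
        CustomerSK   INT           NOT NULL,                -- points to DimCustomer
        ProductSK    INT           NOT NULL,                -- points to DimProduct
        OrderBK      VARCHAR(20)   NOT NULL,                -- order number from the source system
        Quantity     INT           NOT NULL,
        SalesAmount  DECIMAL(18,2) NOT NULL
    );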

18:25
What a data warehouse is not. So this is interesting. I’m sure many of you have been in a company where they say, oh yeah, we have the data in a data warehouse, we have a data warehouse for everything. Great, you can go and take a look.

18:38
And you say, well, it’s not really a data warehouse, it’s something we would call an operational data store, which is to say, you have a replicated version of the application database sitting off on its own server.

18:53
So that it doesn’t interfere with the application side, and we addressed that earlier: you don’t want to run reports on the application database.

19:02
And so the first thing people thought was, well, let’s get that over to a different server. There are replication technologies in databases, so that when an order gets placed into one system, it just does a second insert into the other system.

19:14
It’s a very efficient way of keeping the systems in sync, and then the reporting people would just report against the operational data store and not interfere with the application database.

19:29
And that’s very efficient for removing the load off the application side.

19:33
But a transactional database is not set up in the right way to handle analytics. Take the example I mentioned before: I want to see what this product did in the last six months and compare it to the same six month period last year. That’s not an easy thing. You have to write some complicated SQL, you have to do a lot of different things in order to do it, and then it could take some time to run as well.

20:02
With the dimensional model, we take that transactional data and we move it into tables, dimensions and facts.

20:09
So that particular question, the six months over six months I just mentioned, is much easier to do.
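
As a rough illustration of how simple that comparison becomes against a star schema (table and column names assumed, and the SQL Server date functions are just one way to express the two windows):

    -- Sales for the last six months vs. the same six months a year earlier, by product.
    SELECT p.ProductName,
           SUM(CASE WHEN d.FullDate >= DATEADD(MONTH, -6, GETDATE())
                    THEN f.SalesAmount ELSE 0 END) AS CurrentSixMonths,
           SUM(CASE WHEN d.FullDate >= DATEADD(MONTH, -18, GETDATE())
                     AND d.FullDate <  DATEADD(MONTH, -12, GETDATE())
                    THEN f.SalesAmount ELSE 0 END) AS PriorYearSixMonths
    FROM   FactSalesOrder AS f
    JOIN   DimDate        AS d ON d.DateSK    = f.DateSK
    JOIN   DimProduct     AS p ON p.ProductSK = f.ProductSK
    WHERE  d.FullDate >= DATEADD(MONTH, -18, GETDATE())
    GROUP  BY p.ProductName;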

20:18
A place to store tables built to service one report. So this is again, what a data warehouse is not.

20:24
Yeah, people have had what they call data warehouses.

20:27
And basically, take that example of the last six months over the prior six months, and you say, all right, well, it’s complicated SQL, it takes about a half hour to run.

20:37
I’m going to now run that overnight and then store the results in that database.

20:42
And that table will then service the report. So when someone runs it the next day, it’s a quick and easy report reading directly off the table; it doesn’t have to involve the complicated SQL.

20:55
So that’s also a good step in the right direction, from an operational data store perspective.

21:00
But what quickly happens is everyone tries to service all of these requirements, and so you get many, many tables, a lot of them with duplicated data all over the place, and it kind of grows into a maintainable mess, an unmaintainable mess, I should say.

21:17
So again, it’s not a place where you’re just constantly storing temporary tables so they can service one report.

21:25
And a data warehouse is not a data lake.

21:26
A data lake has a very specific use case.

21:29
And it’s a great thing for us in the data world that this has come about, that we have a place to store any kind of data, whether it’s structured data or unstructured data. It could be videos or images or documents, things that don’t have a tabular structure, and we throw them into the data lake. And then there are techniques built around that so that you can do analytics off of those data types.

21:59
So the data lake has a wonderful use, but a data warehouse is not a data lake.

22:12
I’ll go through this kind of quickly: Inmon versus Kimball, the data warehouse methodologies, so these two.

22:18
These were the first to come up with data warehousing in the mid-nineties, and Kimball has pretty much emerged as the one that’s used most often. We’re going to follow that today in our presentation.

22:27
I’d suggest, if you want to get a little more understanding, just Google Inmon versus Kimball, and you’ll see plenty of good articles out there that discuss the differences between the two. Data Vault is also out there. It’s not quite data warehousing.

22:46
But it is a construct that has been made for larger organizations to kind of make scalability easier. If you want to look into the Data Vault technologies as well, please do.

22:56
For this, we’re going to look at the Kimball method star schema approach. That’s the idea of having a fact table with one relationship to each of the dimensions. So that, again, reduces the relationships and simplifies the query.

23:10
Yeah.

23:14
One last word here on just how complicated a query can be. It’s now much less complicated and simplified when you only have the one relationship in here to worry about. And you can have many fact tables. Even though a star schema, as shown here, has one fact table in the middle that reaches out to the dimensions, which is what gives the star schema its name, for the most part in a data warehouse you have many fact tables that can connect to the same dimensions. And that gives you a good way to combine the different business processes and report on them across shared dimensions.
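
A typical star schema query then stays flat: one join per dimension, straight off the fact table. A short sketch, using the same assumed names as the earlier examples:

    -- Sales by calendar year and customer state: one join per dimension.
    SELECT d.CalendarYear,
           c.StateName,
           SUM(f.SalesAmount) AS TotalSales
    FROM   FactSalesOrder AS f
    JOIN   DimDate     AS d ON d.DateSK     = f.DateSK
    JOIN   DimCustomer AS c ON c.CustomerSK = f.CustomerSK
    GROUP  BY d.CalendarYear, c.StateName
    ORDER  BY d.CalendarYear, c.StateName;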

23:53
OK, so in the Kimball methodology, dimensions are the nouns of the organization.

23:58
You have customer, product, employee, these are all nouns, and then within the nouns are attributes. Now, a dimension is nothing more than a table in a database. We’re using it as terminology; we call them dimensions because it’s a certain construction of a table. And an attribute is nothing more than a field within that table. But the idea is that an attribute describes the dimension.

24:21
So, as you can see here, first name is an attribute of the employee dimension; it describes the employee.

24:28
Color, is an attribute of the product dimension.

24:32
All of that is how we do dimensional modeling: we try to identify the nouns within the company that the reporting uses, and what the company would consider its key performance indicators, how we can measure the success of the company, or the different parts of the company.

24:51
And dimensions and attributes are a big part of that construction, which allows us to look at the different aspects of the fact tables, because they are connected to those activities.

25:06
Now, the fact tables are events. So, what is going on in an organization?

25:12
An event: a sales order gets placed on a certain date and time.

25:17
It has a certain amount.

25:19
I want to talk about measures after this. Purchase orders are the same thing.

25:23
It’s something that gets placed at a certain time and goes into a database with a time stamp on it.

25:30
Inventory, how much inventory do I have for a certain product? On this day?

25:34
Hiring and attrition are HR department measures, in terms of, you know, are we hiring more, or firing, or letting more people go?

25:46
And, how do we measure those over time?

25:49
The measures: in a fact table, what we want is to have keys that point to the dimensions they’re associated with, but we also have the measures themselves.

25:59
So for sales, amount would be a critical piece to measure.

26:05
Cost would be another one. How much does this cost, and how much am I selling it for?

26:10
And there are the metrics we can build off of that: quantity on hand for inventory, and headcount for HR.

26:17
These are the measures we determine inside a company to tell whether or not we’re doing well.

26:26
When you do a dimensional model, you take the many fact tables with the shared dimensions and you list them out in this bus matrix, a key part of the Kimball methodology.

26:35
One of the deliverables for every contract I’ve ever been on is this bus matrix, and it’s simply done in Excel.

26:42
In this column here are the fact tables, or business processes, listed against all your dimensions across here.

26:49
And where they interact with one another, they get an X in the middle, so sales order has a date to it.

26:55
It has an item and a customer. It doesn’t have vendor, because vendors are who we buy items from to make our products.

27:05
Alright, so vendor is not going to be associated with sales order, but down here at bill of materials, you’ll see vendor will be there, because that’s where the vendor comes into play. So we’re not going to connect it to our sales order, but you can see there are a number of places where there are shared dimensions: item is on every one here, and we should have an X here for date in terms of the cost of an item.

27:29
We have a fact table based on that.

27:31
But that’s a key part of the dimensional modeling process.
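
A toy version of such a bus matrix, using the business processes and dimensions mentioned here, might look like this (the exact rows and Xs are illustrative, not the slide):

    Business process   | Date | Item | Customer | Vendor
    -------------------+------+------+----------+-------
    Sales order        |  X   |  X   |    X     |
    Purchase order     |  X   |  X   |          |   X
    Inventory          |  X   |  X   |          |
    Bill of materials  |  X   |  X   |          |   X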

27:35
Once you have those shared dimensions across the fact tables, the dimensional model lets you pull it into, let’s say, a third party tool and create a model out of it, and that model has relationships from the fact tables to the dimensions.

27:57
And analytics tools are typically equipped to handle this dimensional model.

28:02
They’ll identify if there’s a foreign key on a fact table that’s the same as a primary key on a dimension.

28:10
And they’ll create the relationship for you.

28:12
And then once you have that relationship, then the third party tool should just be drag and drop.

28:19
Filter and slice, that makes analytics much more efficient from a reporting side, from an analytics side, so that people can do more analytics instead of building out the model itself.

28:37
Tracking history: this is one of the things we’re going to demo. I’m going to show you what a slowly changing dimension looks like.

28:43
We’re going to show you how a customer changed over time, and how that got built into the process.

28:49
And then once it gets built into the process, the fact table then pulls the right keys to point to that dimension.

28:56
And once that’s done, the user doesn’t need to know about it. It’s just in there, and it’s working.

29:02
And that’s where a dimensional model adds a ton of value to the analytics, to the analysts themselves, because, again, they don’t have to worry about a complicated query to write or have someone write it for them.

29:17
Now, once the model has this all tied together, we’re going to show in the demo how that adds a ton of value.

29:25
Alright, so, shared dimensions and Type two dimensions. All right?

29:29
I’m using Power BI as the third party tool, here.

29:34
It could be any of the tools that are out there, as I mentioned, Cognos, Tableau, anything that you’d want to use with a dimensional model.

29:45
And you can see here, I’ve built my own very small dimensional model.

29:49
But just to show how the sales and purchase orders are tied together, they both have a tie in to the date dimension.

29:58
When a purchase order is placed, it’s on a date. Same thing with the sales, it’s on a date. So I can track now.

30:04
If I have this common date dimension, I should be able to see the metrics that are associated with each fact table, sliced by date, or, in this case, sliced by product.

30:16
So we have product here; it’s part of both my sales and my purchase orders.

30:21
And with that, I want to understand, how do I go about reporting metrics on that?

30:27
Now that I have my shared dimensional model all set to go, let me go to this dashboard here.

30:35
And you can see, I have a list of products. They are books.

30:41
And this is all dummy data, so don’t look to find the source on this, I just made it up. But what it can show is, say I want to see fact sales on those books.

30:55
So I have an amount here; this is my measure.

30:58
On the sales fact table, I just drag it on.

31:01
Now, you’ll notice, I’m the analyst, and I want to see what’s going on here. As an analyst, I didn’t do all this work behind the scenes. That’s our data warehouse.

31:10
Our data warehouse populated the dimensions, our data warehouse then populated the fact tables, and we built a model, say a cube, maybe in this case it would be considered a cube. And now the analyst comes in and drags and drops; that’s all they do.

31:28
And, you can see the breakdown here. It’s very quick. It’s very performant. And you can see the amount for each one.

31:35
And if I wanted to, then see a separate business process, my purchase orders, and see how that’s doing, I just come over and I drag it again.

31:43
Now, if I had to write a SQL query behind this, or if I pulled, let’s say, a file from my data lake into a third party tool, what would I have to do to set that up in order to produce something like this?

31:58
And that’s where the data model becomes very important.

32:01
We’ve given the analyst a playground to just drag and drop on this.

32:07
Now you can see with purchase amount and sales amount, I’ve broken it down by product, because the relationship from product is on both of those fact tables.

32:18
I also have date here, so I can click on 2020 and just see 2020, and you can see the whole report here is now interacting with that particular selection.

32:29
So if I click on 2019, these are my sales and my purchases for 2019, for each of these.

32:36
That’s all from what we did in the data warehouse: load the dimensional model, load the data model.

32:42
But the analyst now doesn’t have to worry about it.

32:45
And this is where the cost savings come in. Again, if it’s a three month or six month project and there’s a lot of cost associated with that, the cost starts to come back to you over time, because your analysts are spending time doing good analytics instead of cobbling together different data sources.

33:05
All right, so that’s the benefit of shared dimensions. And I’m going to move right into type one and type two.

33:13
Now, this is tracking, changes in history.

33:17
And we’re going to show a customer table.

33:19
Obviously, there are not too many people in our customer table at this point, but it’s going to show a very important concept.

33:27
When we load our dimensions overnight, we have a business key that’s associated with, in this case, myself.

33:33
And every order that gets placed has this business key for the customer; it’s my ID in a certain system. All right? So with that order comes that business key.

33:44
And we always suffix business keys with underscore BK in the data warehouse to make sure that people know what they are. But as you can see, I’m in here three times.

33:52
And this is a primary key over here; it’s just an integer, but it’s unique for every record.

33:57
And in data warehousing speak, we call that a surrogate key, which is why it has this underscore SK at the end of it.

34:05
And when I load it in, we’re trying to track changes.

34:09
So every night when the system loads the customer dimension, it says, this is the business key for Andy Kinnier; he has the city of San Diego and the state of California.

34:22
And every night it checks to see if this has changed. And if it changes, what happens? Before getting to Type 2, I should step back and mention Type 1 attributes, or SCD 1.

34:35
Type 1 is when you just overwrite something and you lose the history on it, and I guess someone’s phone number is a prime example.

34:43
If I had a phone number that was different in San Diego and different in New Jersey, let’s say, there’s no real need to keep track of that change. No one’s going to do analytics on when he had this phone number as opposed to when he had that one.

34:57
So when I update that, if I had a phone number column here and it changed when I moved to New Jersey, I would want all three of my records to have that new phone number.

35:08
Because I don’t want someone reporting off of 2018 data to try and contact me and still get that old phone number. That’s not useful. So in that case, with a Type 1 attribute, you just overwrite the history, and it’s gone.

35:21
You can’t report on the earlier value.

35:25
In this case, there’s no need to, so we overwrite; that’s a Type 1

35:29
dimension attribute.

35:30
A Type 1 slowly changing dimension attribute, we should call it.
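
In the nightly load, that Type 1 handling is just an overwrite. A minimal sketch, assuming a PhoneNumber column and a staged CustomerSource feed (both hypothetical names):

    -- Type 1: overwrite the attribute on every version of the customer row;
    -- the old value is gone, so no history is kept for this column.
    UPDATE dc
    SET    dc.PhoneNumber = src.PhoneNumber
    FROM   DimCustomer AS dc
    JOIN   CustomerSource AS src
           ON src.CustomerID = dc.CustomerBK
    WHERE  dc.PhoneNumber <> src.PhoneNumber;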

35:35
Now, city and state, this is important. I want to know what type of books I’m buying when I’m in a particular place. Say you’re marketing and selling books, and you’re saying, well, San Diego might have a whole different set of books we want to market than someone living in New Jersey.

35:51
And so, I want to track those numbers.

35:54
From when he was living in New Jersey,

35:56
as compared to when he was living in San Diego.

35:59
And that’s a Type 2 change, and that’s what this process here is, and it’s one of the biggest value adds when you’re doing a dimensional model.

36:06
So overnight, it sees I live in San Diego, California, with a start date of 2018-05-01.

36:17
And then all of a sudden, on 2019-07-15, I moved to Los Angeles.

36:23
And so on that night, when it’s processing, it says, take the business key for Andy Kinnier, where does he live? Oh wait, we’ve got a change here. So instead of having one record and just overwriting San Diego with Los Angeles, I’m adding a new record in, and I’m putting in a new start date for me.

36:42
And when this first happened, I would have an end date of 9999-12-31, an unreachable date, just to show that it’s the current record.

36:53
We also have a flag here that says whether it’s current, true or false.

36:57
And then, again, you can see on 2021-04-30, I moved to Randolph, New Jersey. Now I’ve got a new record, a third record, that comes in with that start date, and this new record now gets the unreachable end date.

37:11
And this gets set to true and that’s how my history gets kept in here.
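
A simplified sketch of that Type 2 handling in the overnight load, again with hypothetical staging and column names: expire the current row when a tracked attribute changes, then insert the new version with the unreachable end date.

    DECLARE @LoadDate DATE = CAST(GETDATE() AS DATE);

    -- Step 1: expire the current row for any customer whose city or state changed.
    UPDATE dc
    SET    dc.EndDate   = @LoadDate,
           dc.IsCurrent = 0
    FROM   DimCustomer AS dc
    JOIN   CustomerSource AS src
           ON src.CustomerID = dc.CustomerBK
    WHERE  dc.IsCurrent = 1
      AND (dc.City <> src.City OR dc.StateName <> src.StateName);

    -- Step 2: insert the new version with the unreachable end date and the current flag.
    INSERT INTO DimCustomer (CustomerBK, FirstName, LastName, City, StateName,
                             StartDate, EndDate, IsCurrent)
    SELECT src.CustomerID, src.FirstName, src.LastName, src.City, src.StateName,
           @LoadDate, '9999-12-31', 1
    FROM   CustomerSource AS src
    JOIN   DimCustomer AS dc
           ON  dc.CustomerBK = src.CustomerID
           AND dc.EndDate    = @LoadDate
           AND dc.IsCurrent  = 0;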

37:16
The one thing you might say is, ah, now to do analytics on this, I’ve got to understand what the date is and I have to pick the right record, so I’ve got to query that.

37:26
Well, that gets handled again in the data warehouse, and it gets handled during fact table processing.

37:32
So, in my fact table, and I’m going to show that, I’m going to go back here for a second.

37:39
Well, I’m looking at fact tables here.

37:42
I’ve got my fact sales data.

37:46
And these are my sales right now, and you can see there’s an ID, a primary key here, but there’s my customer surrogate key here.

37:55
And, if you remember, my keys are actually 1, 2, and 3.

37:59
So my sales are all the ones and twos and threes, and there are other sales from other people.

38:07
But when I built this fact table, I have a process overnight. So my dimensions are loaded, and then my fact table processing goes, and it says, well, I have this business key for Andy Kinnier, and I have a date for the sale.

38:20
And that date falls in between a start date and an end date.

38:26
All right. So that first date, let me just grab that date.

38:30
The first date here is 2018-07-03. And if I go back to that customer table, when I’m building my fact table, it says, OK, I know it’s this business key, but where does the date fall in here? All right, so the first one here is 2018, so it’s greater than this and less than that. So I’m going to use this surrogate key on the fact table.

38:52
So that’s important: the fact table gets built at night, and it brings in the right keys, the right primary keys, or surrogate keys,

39:02
from the dimensions it uses. So here I have two sales from when I lived in San Diego, because the time stamp was between those dates.

39:12
Now I’ve grabbed a different surrogate key from when I lived in Los Angeles based on the time-stamp and my business key.

39:18
And then all the threes down here are these two purchases from when I lived in New Jersey.

39:26
And our fact table process picks up on that.

39:30
It says, all right, when I select this... Let’s say I have a source view for my fact table.

39:35
All right, a sales order source, and in that view I’m doing some transformation, some cleansing, different things like that, but in the end, it’s the records that come out.

39:44
And now I join it to my customer Dimension on the business key.

39:49
So I have my business key; it has an underscore BK at the end.

39:53
I’m joining my dimension on that business key, but I’m also joining it on where the date entered is greater than or equal to my start date and less than my end date.

40:03
And with that, in the select statement, I get the right surrogate key.
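
Sketched out in SQL, that fact-load lookup is a join on the business key plus the date range, so each order row picks up the surrogate key of the customer version that was in effect at the time. The view and column names here are assumptions:

    -- Resolve each incoming sale to the customer version in effect on the order date.
    SELECT src.SalesOrderID,
           dc.CustomerSK,        -- the surrogate key the fact row will store
           src.OrderDate,
           src.SalesAmount
    FROM   SalesOrderSource AS src
    JOIN   DimCustomer      AS dc
           ON  dc.CustomerBK  = src.CustomerBK
           AND src.OrderDate >= dc.StartDate
           AND src.OrderDate <  dc.EndDate;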

40:08
An important part of this, if this happens overnight, you don’t need to know anything about it. You don’t need to know the relationship of how the tables got populated.

40:17
It’s done. So once it’s done, what can I do with it? How does that help me?

40:22
So I’m looking at this table here, and it’s just the name of customers and the amounts that are here.

40:30
And if I come over to my customer dimension and now I want to see that.

40:36
I want to see the breakdown here on state as well.

40:40
Now, if I had overwritten the state value, I wouldn’t have kept that history, and all of my sales would show up in New Jersey.

40:49
And that’s not good for analytics. I want to know where I was for each one.

40:53
So if I drop it in, you can already see my amounts from when I lived in California versus when I lived in New Jersey, because I’m tracking state as a Type 2 attribute.

41:05
And that’s built-in again, I just dragged and dropped, and I got accurate results. I didn’t need to know what took place behind the scenes.

41:11
That’s a huge value from a dimensional model perspective. I can feel confident that my development team has done what they should do, it’s been QA’d and set to production, and now I’ve got this wonderful data at the tips of my fingers. I don’t even need to know how it does it; it’s just doing it correctly. And say I want to now take that next step and break it down by city. You can see I have purchases in both Los Angeles and San Diego.

41:40
And these all can be looked at over time, too. Because we have a date dimension here.

41:46
If I want to, I can put in a date key.

41:53
Now I can see the dates of each of my purchases, and those of the other people, as well.

41:59
For all of these dates, anything from Los Angeles should be in that 2019 timeframe.

42:06
You can see the San Diego ones are the early 2019 and 2018 stuff. And then in 2021, I made purchases living in Randolph, New Jersey.

42:16
So you can see that that relationship has given me the history and allows me to report on it in a number of different ways.

42:23
So it’s a huge value add for a dimensional model. People need to see this, and see

42:29
what this can do. When they start the project, they need to have that example of, wow, if I do it like this, then we can just have drag and drop functionality.

42:40
I’m going to go to the next slide and talk about one more piece of the dimensional model, the very important date dimension I mentioned earlier. If you have a date dimension hooked in, you can have the one key from the fact date into the date dimension table, and then you can build out a ton of different things that work for you. You can have a lot of different columns that your business wants to see: maybe week ending date, certainly the calendar year and the fiscal calendar year you want to build out, and workdays and holidays. You’ve got a flag for workday, and say I know my holiday schedule has us off this Monday.

43:19
So if that’s a holiday, it’s set to Y, and then I can exclude holidays when I’m comparing my numbers against, let’s say, a prior month or something to that effect. All of that you can build into one date dimension.

43:34
Then there are relative date columns, and a lot of the third party tools that we’re looking at all have that date intelligence built in.

43:44
So you can do that type of things.

43:46
But I’m going to just show a quick demo on date and see how that works.

43:53
So this is our date dimension here. You can see it’s one table that has calendar fields, month, quarter, year, all associated with the date field; that’s my key right there. And with that, instead of having to write a SQL function that says, give me the year of the date field, which is a fairly easy one to do in SQL, but you have to write it out, you have to do it, I don’t want to write it out. I want to drag it across and see it.

44:20
You can see I’ve got these years here.

44:23
And if I want to see the numbers across here, I can just drag them out, and in these tools you can then see it, maybe, as a bar chart.

44:32
And since I have shared dimensions in there, and my date dimension is shared with my purchase orders as well, if I want to see purchase order amount, I just drop it in.

44:41
I don’t know if that’s the best visual. Something like that.

44:46
All right, so I can see, right now, a lot more purchases than sales. 2018 was tough, but you can see things got better, a little dip in 2020, and then 2021 was good. And then we’ve started off a little light here. Maybe we should do something in terms of marketing and promotion. So that’s the type of thing we can do once the data is already in there.

45:08
And you can also say, well, I don’t want to look at it by calendar year; I have a fiscal calendar and do all my budgeting based on the fiscal calendar.

45:15
So let’s take the year out, and I’m going to put in the

45:19
financial year. Yeah, there it is.

45:21
The fiscal calendar is called financial year here.

45:24
I’ve put that in; that’s the wrong spot, it goes on the X axis.

45:29
So now I’m seeing it says 2021-22, because their fiscal calendar starts in April. And I can just pull the months across for this up here.

45:40
You can see that the month name itself starts in April and goes through March, so the fiscal calendar is different. And it crosses over calendar years, so it’s listed as 21-22 or 19-20.

45:56
And so you have that ability. Also, this might not sort the right way by default, sorted ascending; I want the left to right order to follow the fiscal calendar, and you can build that sort order into your model so it sorts the

46:16
correct way.

46:20
What else can we do with date? Let me just show you a piece here.

46:24
We’re going to add a field to the date dimension.

46:28
This is a pretty simplistic date dimension.

46:31
What does it take to add something to it? Ideally it

46:45
should be added to your data warehouse, and it would be, but we’ll add it here as a new column on the date dimension, using Power BI’s way of doing things.

46:55
But say I wanted to add this particular calendar year month. Maybe I want to see it by year, and then I can slice down to month. But what if, over the four years that I have, I want to see the year and the month together? So I derive a new field, and I add it to my date dimension.

47:15
It looks like I’ve already put it in there, so once you do that... let’s make it a live demo.

47:25
Let’s do it, right.

47:27
I’ll add it as a different field.

47:35
And so now, instead of using my financial year here, I’m going to use calendar year month, drop it on here, and now I’ve got it from August 2018 and all the months of purchases and sales all the way across.

47:49
And again, that’s a value add you can build into dimensions. You can have hundreds of columns, even, because there’s so much you can do. We can flag the dates: what was the prior day, and so on.

48:02
So for today: if my date is today, then my prior day is yesterday, and that flag can change every night in your data warehouse, and you now have the ability to always just filter on the prior day. There are a lot of things, depending on the business and how they use data, which you can all put into the date dimension. And it becomes drag and drop, just like in this example; I added that in quickly, drag and drop, and it just works.
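
As a toy example of that nightly flag maintenance, assuming an IsPriorDay column on the date dimension (an assumption; the real column set depends on the business):

    -- Nightly: clear the old prior-day flag, then mark yesterday as the prior day.
    UPDATE DimDate SET IsPriorDay = 'N' WHERE IsPriorDay = 'Y';

    UPDATE DimDate
    SET    IsPriorDay = 'Y'
    WHERE  FullDate = CAST(DATEADD(DAY, -1, GETDATE()) AS DATE);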

48:39
OK, just to finish off here, there are other pieces too; we’re not going to do them in terms of a demo. But there are different types of fact tables that you can have. A snapshot fact table is very useful for things like inventory and headcount in HR, things where the question is, what is my value for a particular day. Now, if I have a quantity of 20 for an item one day, and then I’ve got 18 the next day because I sold two, but then I bought 20 so now I’ve got 38, every day I’m inserting those amounts. Those amounts don’t add up over time to give me an accurate picture: 20 plus 18 plus 38.

49:17
It’s not going to give me the right number for how much I have on hand. But snapshot tables work in conjunction with semi additive measures, and Power BI and other tools can handle semi additive measures: say, I’m not going to add it up over time, but I do want to add it up over, let’s say, product. I want to be able to say how many products were on hand on this day.

49:41
But at the end of the month, if I want to see what my inventory was on a monthly basis, then I just want to see the last day of each month.

49:47
And those semi additive measures allow for that. There’s a lot of functionality that can be done with different types of fact tables; the accumulating snapshot is another.
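
In the BI tools this is handled with semi additive measure settings; expressed directly in SQL against a daily snapshot table, the month-end view looks roughly like this (table names assumed, and DateSK assumed to be a yyyymmdd integer so the largest key in a month is its last loaded day):

    -- Month-end inventory: keep only the last snapshot date within each month.
    SELECT d.CalendarYear,
           d.CalendarMonth,
           SUM(f.QuantityOnHand) AS MonthEndQuantity
    FROM   FactInventorySnapshot AS f
    JOIN   DimDate AS d ON d.DateSK = f.DateSK
    WHERE  f.DateSK IN (
               SELECT MAX(f2.DateSK)
               FROM   FactInventorySnapshot AS f2
               JOIN   DimDate AS d2 ON d2.DateSK = f2.DateSK
               GROUP  BY d2.CalendarYear, d2.CalendarMonth
           )
    GROUP  BY d.CalendarYear, d.CalendarMonth;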

49:56
You’ll want to look that up; it handles different updates for each entry. And then there are many to many relationships. We showed you the star schema, which is essentially the one relationship from fact table to dimensions.

50:10
But there are certain situations that require a many to many relationship to be built in and handled. And there are dimensional modeling processes for that as well.

50:20
So just touching on a few of these other pieces to add to the value of having a dimensional model.

50:27
OK, and with that, Todd, I’m going to hand it back to you at this time.

50:33
OK, great. Thank you so much, Andy.

50:35
Within the scope of today’s webinar, we have some offerings from Senturus to help you with your BI.

50:41
If you need any help with business requirements, data modeling, your cloud and data architecture, both modernizations and migrations, data governance and administration, or reports and visualizations, we can help out with all of those. So please reach out; we’ll have the contact information in a slide at the end.

51:02
Additional resources. Have you seen our website? We recently just updated it and we provide hundreds of free resources on there.

51:09
Go to the Knowledge Center on the website to get product reviews, tips, insider viewpoints, demos, and learn more about upcoming events like today’s.

51:18
Speaking of upcoming events, we have a great one coming up in a few weeks.

51:22
We’ll talk about pairing Cognos with Power BI and or Tableau. That’s on July 14th. So please register on our website or use the link shown right there.

51:35
Just stick with me for a couple of seconds as we kind of go through a brief Senturus overview.

51:40
If you don’t know who we are, we concentrate on BI modernizations and migrations across the entire BI stack.

51:47
We provide a full spectrum of BI services, training in Power BI, Cognos and Tableau, and proprietary software to accelerate bimodal BI and BI migrations.

51:59
We particularly shine in hybrid BI environments with some of the tools we are using today.

52:09
And, finally, this isn’t our first rodeo. We have been focused exclusively on business analytics for over 20 years. Our team is large enough to meet all your business analytics needs, yet small enough to provide personal attention.

52:27
And then, we are hiring. If you are interested in joining us, we’re always looking for skilled people in a number of positions, specifically a managing consultant. For the description of this position, as well as other openings, Check out the Careers link on our website.

52:46
Now, the fun stuff, Q and A. We’ve got a couple of questions from throughout the webinar today; I’m going to go through those.

52:54
What are some ways to connect multiple fact tables in a single model?

53:05
Yeah, so going back to that piece of the dimensional model I showed: if you have common dimensions, something like customer, or even product, right?

53:19
Product is involved with almost everything.

53:23
Well, not for HR, but for product?

53:26
If you’re in a manufacturing or retail environment, it’s essential that when you build your data warehouse, you connect it to as many fact tables as you can.

53:37
So obviously purchase orders, sales orders, inventory, these are the pieces; you have a separate fact table for each, and each of those has one relationship to the product dimension.

53:52
So when you create your data warehouse, you’re defining and loading your dimensions first. And then the fact tables have to check against those dimensions, much like it is here, joining on the business key. I’m trying to highlight this piece here. So that’s my join

54:12
to the customer dimension, we’re talking customer here, and it gives me the customer key.

54:17
And if I’m doing something else for customers, say invoices.

54:21
I have a customer, and I’ll do the same thing, a left join from the invoice source, and I’ll have the customer key. And then the date goes in, joined to my date dimension in the same statement here, and I’ll come up with a date key.

54:37
All right, so there are many dimensions that can be handled, with multiple fact tables joining to many dimensions. To restate this one: invoice would be a separate fact table that would join to the customer dimension as well, so you would have two fact tables, fact sales order and fact invoice, and both of those would join back to the customer dimension.

55:07
Once that construct is in place, you are doing just what you have here.

55:13
In this demo, I’m showing the amount from each, and this is product here.

55:20
So product is connected to both fact sales order and fact purchase order.

55:25
Then you can see this breakdown once that construct is made.

55:32
Perfect. Thank you.

55:34
Another question about fact tables as well: do you rebuild the fact table completely with each load, or load only the changes? And what about cancelations? How do you show retroactive changes in a customer dimension?

55:47
Lots of questions in one there.

55:49
Yeah, so those are good ones.

55:51
Those are good data warehouse ETL questions. So for the fact tables, you look to do the inserts, and you look to handle the deletes, as you bring in the data.

56:02
And typically the way, I’ve done it in the past is, you will handle the inserts differently than you handle updates and deletions.

56:11
And commonly, there are not too many deletions.

56:15
There was one client where I had a fact table where, you know, even the same order could come in with a different amount. So it’s a change to the order, not necessarily

56:27
a deletion, and we handled that by adding a new row for that change.

56:34
So if it was 100 for the amount, and the order changed before it shipped, say they changed it from 100 to 60, we put in a new row there with the change in it. That’s how the company wanted it reported.

56:50
In February it was a quantity of 100 of a particular item.

56:55
They wanted that 100 reported in February, and they wanted negative 40 reported in March, because February was already closed, and the change had to go into the March numbers. We had to detract 40, because they changed the order before it shipped.
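
A hedged sketch of that pattern, with made-up keys and amounts: the original row stays in the closed month, and an offsetting row carries the change in the month it happened.

    -- A hypothetical order reduced from 100 to 60 before shipping:
    -- February keeps the original, March carries the -40 adjustment.
    INSERT INTO FactSalesOrder (DateSK, CustomerSK, ProductSK, OrderBK, Quantity, SalesAmount)
    VALUES (20210215, 1, 10, 'SO-1001', 100, 1000.00),   -- original order, February
           (20210310, 1, 10, 'SO-1001', -40, -400.00);   -- adjustment row, March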

57:13
So there are different techniques to handle, depending on the business processes and how the companies work.

57:19
But yes, inserts, updates, deletes, they all need to be handled. They need to be handled efficiently, which can be challenging at times, depending on how much control you have over the source systems. So it is a good question.

57:31
It’s something that I think, from a specific perspective, you have to come up with those techniques in order to handle it for your situation.

57:43
The last question is kind of a unique one: are there different modeling approaches based on whether the database is a row or a columnar database, like SQL Server versus Vertica/Redshift?

57:58
So the question is, is there any difference in the way you model based on whether the database is a standard row based database like SQL Server versus a columnar database like Redshift?

58:11
Yes. Power BI is also a columnar database. And by that, I mean the old way of indexing is row based indexing; it’s kind of the standard way and has been done for a while.

58:23
Technologies in the last 15 years or so have shifted to columnar indexes. Say I might have two billion rows in a fact table.

58:32
But I only have 200 customers, right? They’re buying lots and lots of stuff.

58:37
And with 200 customers, you can index those 200 more efficiently across two billion rows.

58:46
So instead of having an index built off two billion rows, you have an index built off those 200 values, just in that column.

58:56
Now, dimensional models: one of the reasons they came up with that form of indexing, at least 15 years ago now, was because of that model. The data warehouse indexing that was created for transactional or application databases was row based; it was very efficient and worked very well for that. But again, when you change how you’re using this database, now it’s for a report, see this six months compared to the prior six months,

59:21
if you have a columnar index, it’s going to be much more efficient. And in part, the Kimball method was a big reason for that coming about.

59:31
So yes, especially with each one of these things. I could have 300,000 rows in a product dimension, but if I have six colors that are available for these products, then it’s going to index on those six values in that column, and it’s much more efficient to handle it.
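
For instance, in SQL Server that columnar indexing on the fact table is a one-statement addition. A minimal sketch against the hypothetical fact table from the earlier examples:

    -- Nonclustered columnstore index: the listed columns are stored and compressed
    -- column by column, which suits scans and aggregations over low-cardinality values.
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactSalesOrder
        ON FactSalesOrder (DateSK, CustomerSK, ProductSK, Quantity, SalesAmount);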

59:52
Great, Thank you.

59:54
So, we’re at the top of the hour. I want to thank Andy for his time today and this presentation. I also want to thank everyone for joining us and tuning in. I’m going to leave the Q and A panel open a little bit longer in case you have any questions that just came up, or you want to put in one more, and we will fill these out and post them on the website at a later date. So if you have any questions, go ahead and put them in over the next couple of minutes. Then I’ll end the session, and we’ll post those, like I said, in a couple of days. Thank you.

Connect with Senturus

Sign up to be notified about our upcoming events

Back to top