Many BI Tools. One Semantic Layer in the Cloud.

The proliferation of BI and analytics tools running in organizations today – each with its own set of measures and metadata – is the BI equivalent of too many cooks in the kitchen. The disparate models create a governance issue that results in inconsistent metrics. This BI soup gets especially messy when organizations move their data to the cloud. Those disparate models, while problematic, represent institutional value and need to be preserved. If only there were a way to standardize that metadata and have all the BI tools pulling from the same place.

Watch this on-demand webinar to learn about the universal semantic layer solution from AtScale. Sitting between data consumers and cloud data sources, AtScale enables centralized governance and consistency.  

Senturus Principal BI Architect Pedro Ining hosts AtScale CTO and Founder Dave Mariani for a demo and conversation about AtScale. They will discuss how it provides a single source of governed metrics and analysis dimensions that can be accessed from all the popular BI tools. You will learn about:

  • Creating a universal semantic layer to support BI platforms including Power BI, Tableau, Cognos and Excel
  • Avoiding proprietary BI data extracts by using a live connection to your cloud data platforms including Databricks, Snowflake, Google BigQuery, Microsoft Azure Synapse and Amazon Redshift 

Presenters

Pedro Ining
Principal Business Intelligence Architect
Senturus

Dave Mariani
CTO and Founder
AtScale

Questions log

Q: Can we import legacy Cognos Framework Manager models into AtScale without having to recreate semantic layers?
A: There are importers for SSAS and Tableau, but unfortunately, there aren’t any importers for Cognos models. However, AtScale has a modeling markup language that you could use to write a Cognos model translator.
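For a rough sense of what such a translator could target, here is a purely hypothetical sketch of a markup-based model definition. This is illustrative only; it is not AtScale's actual markup syntax, and every name in it is invented:

    # Hypothetical markup a Cognos-model translator might emit (invented syntax)
    model: WWI Demo
    measures:
      - name: Profit
        column: fact_sale.profit            # source column (invented name)
        aggregation: sum
    dimensions:
      - name: Customer
        table: dimension_customer
        key: customer_key
    relationships:
      - from: fact_sale.customer_key        # fact-to-dimension join
        to: dimension_customer.customer_key

The translator's job would be to walk the Framework Manager XML and emit entries like these for each query subject, relationship and calculation.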

Q: Is it possible to have a multilingual semantic layer?
A: This question could have two meanings, so we address both. The AtScale semantic model supports multi-byte characters, so it supports different languages. In addition, we could argue that being multilingual is a core requirement for a semantic layer. In AtScale’s case, it supports several inbound dialects beyond SQL, including MDX, DAX, REST and Python. On the outbound side, AtScale supports a variety of data warehouses including Snowflake, Databricks, BigQuery, Redshift, Azure Synapse, SQL Server (SQL DB), Postgres and more.

Q: Can we import XML from data modeling tools like Erwin into AtScale?
A: Currently, AtScale does not have an importer for data modeling tools like Erwin. However, AtScale has a modeling markup language that you could use to write a custom model translator.

Q: What are the criteria for determining if a BI tool can connect to AtScale?
A: For tools that speak SQL, if they support a Hive ODBC/JDBC driver, they can connect to AtScale. Tools that speak MDX, DAX, Python or REST can also talk to AtScale.
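As a hedged illustration, a SQL tool using a Hive JDBC driver would typically point at a connection string of this general shape (host, port and model name are placeholders, not documented AtScale values):

    jdbc:hive2://<atscale-host>:<port>/<published-model>

Tools with a Hive ODBC driver would configure the equivalent host, port and schema in a DSN instead.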

Q: Is QuickSight on your integration list?
A: We have not certified QuickSight, but since it supports connecting to Hive using JDBC or ODBC, it should be able to connect to AtScale.

Machine transcript

1:52
Hello everyone and welcome to today’s Senturus webinar on universal semantic layers: many BI tools, one semantic layer in the cloud to rule them all. It’s actually a joint webinar today; we’re going to be doing a demo on both the Cognos and Power BI side using AtScale. So thanks for joining us. A little bit of housekeeping before we begin.

2:19
Number one, if you have questions or comments during the webinar, go ahead and click the Q&A button at the bottom of your screen. You’ll get a little Q&A panel there. You can enter any questions or comments into that section. We also have a chat window. You won’t be able to send stuff in through the chat window. It just gets a little confusing on the webinar to do that. But you might see some notes from our team in chat as we go along.

2:46
One of the first questions that we get on almost every webinar is can I get a copy of the slide deck? And the answer is always yes, you can get a copy of that deck at Senturus.com/resources. And Andrea from our events team has posted a direct link into the chat. So you can just click on that link and grab a copy of the slide deck anytime. There also will be a recording of the webinar available next week, but the deck is available right now.

3:13
All right. Let’s do a quick bit about the agenda. Today, we’re going to do some quick introductions of our presenters. We’ll talk about the current semantic layer processes that are commonly used. We’ll talk a bit about the universal semantic layer and the benefits it offers, a little about additional resources and Senturus as a company, and then we’ll close out with some Q&A. Our presenters today: first, Pedro Ining. Pedro is a Principal BI Architect here at Senturus.

3:42
He’s been in the business for about 20 years and he’s been through it all. He’s seen the evolution of BI through lots of iterations: Cognos, Power BI, Tableau, MicroStrategy. Pedro has been there and done that. We also have Dave Mariani here. Dave is the CTO and founder of AtScale, and Dave is going to be giving us a demo today

4:05
of AtScale as a universal semantic layer, which allows you to connect multiple BI tools to one central source of data. All right, we’re going to do a quick couple of polls here and then I’m going to turn things over to Pedro. The first poll is: where is your data now? Do you keep your data in the cloud? Is it on premises, or is it a combination? We see a lot of variation nowadays.

4:35
Of course, everybody was on-prem once upon a time. We do see a lot of people moving to the cloud, and a lot of our customers are in various stages of migrating data out to the cloud. So I’m just going to keep the poll open here maybe for another 20 or 30 seconds. So far, it looks like about a third of you are still all on-prem with your data, and about half of you are hybrid. That’s not surprising.

5:01
And about 15% so far are pure cloud in terms of the storage of your data. So I’ll just leave this going a little bit longer to give everybody a chance to jump in with their votes, and I’ll close this out in a moment and share those results so everybody can see. So again, we came out at just a little under half of you being hybrid.

5:31
A little over a third are all on-prem, and about 16% of you are pure cloud. Let’s go ahead and jump to the next poll. This will be our second and final poll.

5:51
The question here is: what are the concepts being discussed in your organization today related to semantic layers? You can select as many of these as are applicable for your organization. We’ve got headless BI, semantic layers and metric stores. Everybody’s charging along here with your answers. It’s mostly semantic layer that I’m seeing here, a little bit of headless BI,

6:19
a little bit of metric store, and actually, for about a third of you, none of these. So I would guess that means a lot of you are just curious to know: what is a universal semantic layer, and how can it help my organization? And those are some of the answers we’re going to get today in the webinar. Looks like most of you have answered the polls, so I’m going to go ahead and close that out and share the results here for a moment.

6:48
So again, for about a third of you, none of these concepts are being discussed actively yet in your organization. A little over half of you are talking about semantic layers, and then a little bit of headless BI and metric stores. So I’ll go ahead and stop sharing that. And at this point, I’m going to hand it over to our own Pedro Ining. And Pedro, I’ll let you take it away. Thanks, Steve. Hey, welcome, everybody.

7:18
So we’re here to talk about the concept of a universal semantic layer, but let’s get a grounding definition of the semantic layer itself. If you look in Wikipedia, it says the semantic layer is a business representation of corporate data that helps end users and business analysts access this data autonomously using common business terms, right?

7:41
It’s supposed to map complex data into familiar business terms, like for example a product dimension or customer dimension, giving you a unified, consolidated view of data across the organization. So that is the goal, the place where people would like to be, right? And the semantic layer is supposed to relieve the business analysts from all that complexity: how do I join the tables, what do I call things, all those kinds of questions.

8:10
And then ultimately the end user will have a consistent set of analysis that they can do, with the right business terms, right calculations, etcetera. So what is the issue now with semantic layers? Well, there is a proliferation of analytic tools; even within the context of this webinar, we’re talking about three or four, for example. We have a lot of Cognos customers, right? So a lot of Cognos customers have their own semantic layer there.

8:38
Power BI, Tableau, whatever else is out there: they generally each have a different way to map the BI tool to the back end to create a semantic layer. And now we have a lot of movement of our data warehouses, our analytic data, to the cloud, onto Snowflake, BigQuery, Azure Synapse, right? There’s a movement towards that.

9:05
And we have all these different semantic layers that are still in the organization today, and they also have to be moved up. So wouldn’t it be nice if we could actually get a standardized semantic layer so that all these different BI tools can pull from the same model? I think a lot of you understand that within a large organization, you’re going to have a lot of different BI tools; the attempt to standardize on one

9:31
is a hard thing to do. So we’re going to have different ways to analyze that data. So for today’s discussion, we’re going to start off with a quick exercise in how the current semantic layer process is done. Some of you may have never seen it done, because a lot of these semantic layers might be created by architects or other people within your organization, right? Then we’re going to talk about how we move this to a universal semantic layer in the cloud

10:00
via the AtScale product. Dave Mariani will present that; he will actually do a demo with the tool, and he’s going to create a cloud-based semantic layer with the product. After he creates that layer in AtScale, we’re going to use three BI tools against the semantic layer and show how it is actually queried, so that you can see that each tool, although they’re different BI tools,

10:29
will see the same semantic layer as you interface with it. So what does that traditional semantic layer creation process entail? Well, you have to learn the source schema. So if you have a data warehouse, for example, you’ve got tables; you have to know the relationships, and you have to know the measures that are in that schema. So a lot of business analysts might be thrown against a raw database, whether that’s in the cloud

10:57
or on-prem. Whether it’s a really fancy Snowflake system or Azure, you’ve still got to know the tables and relationships in that source system or schema. And then we generally model the schema with these BI tools using proprietary methods. For example, in IBM Cognos, the workhorse has always been Framework Manager. Now they’ve changed it a little bit; they’re giving you the option now to do web-based data modelling with data modules.

11:24
You have Power BI. You generally start with the Power BI Desktop tool; you model the semantic layer within that tool. And of course, with Tableau, same concept: you start with Tableau Desktop, you bring in tables, and you still have to establish those kinds of relationships. And even in Excel, you still point to raw tables and have to do a little bit of modelling within that tool as well. Okay, then after you do the modelling,

11:52
you generally publish the data model out to your users, and you might even create the concept of datasets, where you create extracts: Tableau data extracts with Tableau, datasets with Cognos, because maybe the performance is not good against those databases, right? I only want a subset for a really quick dashboard. Wouldn’t it be cool if we didn’t have to use the

12:20
extract method of datasets and still got that same performance, using say the AtScale product with the universal semantic layer? So the result is, again, multiple semantic layers with possibly different business names. You create one semantic layer in Cognos, you create another in Power BI, and you might give the same field one business name in one semantic layer and a different business name in the other. Different metrics, too:

12:50
you could have the same metrics but with different definitions, or the same calculation with the same name in another semantic layer but a different back-end calculation. So, calculations that differ. Also, possibly hierarchies: a user-defined hierarchy in one tool but not in the other tool. Marketing has a hierarchy in Tableau

13:20
but not in Cognos; the Cognos metadata layer does not have it, so users make up different hierarchies, and maybe they’re incorrect because one user-defined hierarchy does not match the other one. So we have a problem where data governance is difficult to maintain, and the layers eventually diverge over time. And again, you’re not always going to get the same answer when you compare results. So another way of looking at it is this graphic. This is a swim lane type of

13:49
diagram over here; again, multiple tools, multiple metadata layers. At the top on the left side we have our nice data warehouse in the cloud. Now if we’ve got Cognos, we generally create a Cognos FM package or Cognos data module. We publish datasets and other data modules. So you have this chaining effect eventually to create your output, which is dashboards and reports. If you’re working down the Tableau swim lane here, same concept.

14:19
And in fact, maybe you’re actually pasting manual SQL into that Tableau modelling tool. And maybe all you’re doing here is just using the modelling tool to create Tableau data extracts, maybe multiple Tableau data extracts. Sometimes we have clients where all they do is create 20 extracts every day, not really leveraging the power of a metadata layer: just creating all these extracts and publishing the data source in Tableau workbooks. The same kind of thing happens along the Power BI swim lane as well,

14:47
with their datasets. Again, the common thread is that metadata conformance can be inconsistent across all three types of BI products when you have three different metadata layers. So ultimately, why shouldn’t you recreate metadata across multiple systems? Obviously it compromises data integrity, and performance can suffer,

15:14
particularly if business analysts don’t do the joins properly, etcetera. It’s time consuming: three BI tools, three metadata layers, a very time-consuming process. It can compromise data security, because maybe one metadata layer does not have the data security but another one does. And of course you have this concept of data silos. So what I’m going to do is a demo

15:41
using a standard dataset, or data warehouse schema, that many of you might have already seen out there. It’s called the Wide World Importers data warehouse sample. We’re just going to take a few dimension tables and a fact table, and do it in Cognos and Power BI, just to level set how the typical metadata modelling piece works. This particular source data warehouse database is on Azure. We’re going to create that semantic layer with Cognos using data modules,

16:10
and we’ll also show you how it works in Power BI. So I’m going to go over here and get out of my presentation mode. And here is our Cognos interface. So typically what we would do is create a brand new data module. And for those of you who have used Cognos for quite a while and are in the Framework Manager world, this is the new web-based data module modelling tool

16:40
that Cognos has put out there. I’m going to go ahead and point to my database in the cloud and say OK, then select some tables. Next, OK, I have to know the tables I want to pick. I’m the analyst doing this. I know maybe I want city, customer, a date dimension, stock item, and I want to go against the sales fact, and I say OK.

17:08
And right away I see a couple of little red warning messages, because up there on our database they’ve set up primary keys and relationships, and Cognos is saying it brought this particular key in here, stock item, as a measure. I have to clean that up: I’ve got to take that and make it an identifier to get rid of that error message. And then I’m basically left with, and this is really the core of

17:36
what people have to do: they’ve got a bunch of tables, a fact table surrounded by a bunch of dimension tables, OK? And then what we have to do is establish the relationships of these tables. For example, from date to sale, I have to create a relationship. I’ve got to go to date, and to my invoice date key. I have to know that this field matches properly with this field over here. OK?

18:05
Then maybe I go to my city-to-sale relationship and swap that out: city key to city key, match, say OK. Customer-to-sale relationship: customer key to customer key, match, OK. Stock item

18:30
to sale relationship: stock item to sale. So this is my product dimension, stock item; match selected columns and say OK. So I did that fairly quickly. It’s a fairly simple schema, but for more complex schemas, or maybe tables that aren’t labelled properly, there could be a lot of danger there, because this join here, for the date dimension or city to your fact table,

18:59
could maybe be done improperly, slightly wrong, right? And you get the wrong answer on the other end. And then also what I might do here is create a calculation, for example. So I’m going to say create a calculation. I’m now adding to my metadata. I’m going to create a calculation and take profit, and I’m going to go here and divide it by my sales over here, which is called

19:27
total including tax. And I’m going to call that maybe profit percent. OK. So now I’ve extended my metadata layer to include this, OK? And then I’m going to maybe hide some keys here so that users don’t see those, OK? And I’m going to save this off. I’m going to save it as my new semantic layer in my content over here,

19:57
in a demo folder, and I’m going to call it WWI Demo. OK, so I’ve created it over here, and ultimately what’s going to happen is you’re going to query against it. And here’s a dashboard that is going against that particular schema. You can see it over here; these are the fields I’ve modelled, and I’ve produced that. OK, so that’s one example of that.
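For reference, the relationships and calculation modeled above are equivalent to SQL along these lines. Table and column names follow Microsoft's Wide World Importers sample; treat this as an illustrative sketch rather than the SQL Cognos actually generates:

    SELECT d.[Calendar Year],
           SUM(f.[Profit]) AS [Profit],
           SUM(f.[Profit]) / SUM(f.[Total Including Tax]) AS [Profit Percent]
    FROM Fact.Sale AS f
    JOIN Dimension.Date AS d         ON f.[Invoice Date Key] = d.[Date]
    JOIN Dimension.City AS c         ON f.[City Key] = c.[City Key]
    JOIN Dimension.Customer AS cu    ON f.[Customer Key] = cu.[Customer Key]
    JOIN Dimension.[Stock Item] AS s ON f.[Stock Item Key] = s.[Stock Item Key]
    GROUP BY d.[Calendar Year];

Every BI tool that models this schema independently has to encode exactly these joins, which is where the mismatched-join risk Pedro describes comes from.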

20:26
But then if I’m a Power BI person going against the same data source, I bring up my Power BI tool and I’m going to do the same exercise, right? I’m going to go in, go to Get Data, and I’m going to import data from SQL Server, put my server name in,

20:57
and connect to the database tables. I’ve got to know where to go: WideWorldImportersDW, not WideWorldImporters. I’ve got to pick my tables: customer, date, stock item, fact sale. And look,

21:18
these BI tools are proprietary in their way of doing things, and in this particular example there were primary keys and relationships set up there on our database. Once it loads over here... oh, I think I picked the wrong one, actually. Let me cancel; I made the wrong choice. Let me do that again: Get Data.

21:46
So, Pedro, you did the evil import mode. The evil import mode, which creates a copy of your data in Power BI, right? It creates a data extract, which is what we’re trying to avoid here. So let’s go and do this again. All right, let me do this again. That’s actually a good little segue, but Dave, you’re going to talk about that a little bit too. I actually have to go to Cancel,

22:15
then Get Data, and click on that. I think it’s getting stuck in import mode right now. It’s a good demonstration, Pedro, of just how difficult it can be, right? Because now you’re having to serve both as a data modeler and as the person who is doing the analytics, building dashboards and building reports,

22:44
which is a lot to ask, right? For someone like you who has been doing this for years, fine, but it doesn’t really help in terms of getting data and analytics out to more people who don’t have the skills that you do. Exactly. So with Power BI, I can bring up the desktop pretty easily, right? I’m an analyst, and I’ve been given access to the data warehouse, which is exactly what I have over here.

23:10
The thing I did before is I set the data connectivity mode to Import; this time I’m going to go DirectQuery. So as an analyst I’m going to go in here, somebody’s told me to go to this data warehouse over here, and I’m going to pick these tables: date and then stock item.

23:36
There’s a whole bunch of other tables there as well, but for this demonstration it’s going to bring these guys in, OK. And now it’s just evaluating; basically we’re creating the actual schema here. All right. Now if I go over here to the pane, we see the tables over here, OK, which is kind of what we showed in our demo over here. Again, as you can see:

24:06
two BI tools, two ways to create metadata, two ways to create calculations. I would probably enhance this and add more calculations here as well. But there’s part of the danger, right? I might create a calculation here in Power BI with possibly the same name as the one in Cognos, but with a slight difference. So somebody who’s running a report out of Cognos with the same fields and the same potential calc might get a different answer

24:36
than I do here in Power BI. OK. So that kind of sets the groundwork over here. Dave, I’m going to hand it off to you, and Dave will now show you how we could do that with AtScale using a universal semantic layer. I’m going to stop sharing. Great. Thanks, Pedro. That was a really great demonstration of the kind of burden we put on our users here

25:06
to actually just go out and build a darn dashboard. And by talking to SQL Server, you actually had primary keys. When you move to the cloud with cloud data warehouses like Google BigQuery and Snowflake and the like, or even a lakehouse architecture like on Databricks, having primary key relationships is not always a given. So it’s even more difficult for your consumers to actually get at their data.

25:34
So let’s talk about how we can simplify that. First, I’ll put a graphic to the problem that Pedro was really talking about. We like to talk about knowledge gravity here, and increasingly people are moving to the cloud.

25:56
We saw in our poll question that there were a lot of people still on-prem, obviously looking to migrate to the cloud, or in both places, and that just creates a lot of shifting gravity when it comes to data. It’s not likely to be in one place.

26:14
Most of you answered the poll question as being in hybrid mode, which is really the reality: it could be hybrid across on-prem and the cloud, or it could be hybrid across multiple clouds. So we definitely have a situation where data is scattered, and with SaaS applications

26:38
like Salesforce and ServiceNow, for example, we also increase that fragmentation across not just our own clouds but other people’s clouds at the same time. Pedro showed just four of the types of tools that you might want to access data with; well, this is really fragmenting as well. So we have fragmentation on both fronts: fragmentation in the types of tools people want to use to do their analytics,

27:07
and in the types of data that consumers need to go after to do those analytics. It creates a lot of complexity, and we are requiring business users to become data experts just so they can get their job done. So that’s where the semantic layer comes in. The semantic layer brings knowledge gravity into one place by serving as a universal control plane

27:35
for publishing and accessing data wherever it may live, and in any way the consumers want to access it. So regardless of whether you’re using Power BI or Cognos or Tableau or what have you, your profit is always going to be profit. And whether profit is coming from SQL Server or Google BigQuery, consumers don’t need to know. They’re never going to have to connect straight to a database.

28:04
So the semantic layer really is, and this gets back to our other poll question today, there are different ways of talking about a semantic layer. I like to think of a semantic layer as really a synonym for headless BI. We’re really talking about the same thing: we’re basically separating the consumption or insight layer from the data layer by having this platform that sits in between.

28:31
Now we also talked about metric store, which is another term used to describe a semantic layer; we think of that as a core service of a semantic layer. But I’m going to show you today that we’re going to be creating the data model, the semantic model. You saw Pedro do that. But that’s going to be separate and distinct, and it could be the same person who both models the data and consumes the data,

28:59
but likely that’s going to be different people. And that means we’re going to have a lot more people who are able to run analytics, because you don’t have to be both a modeler and a visualization or dashboard builder to get your job done. I’m going to show you how a good semantic layer will actually manage and tune performance on the fly.

29:20
And I’ll show you that through what we call aggregates: how we learn from query behaviour and make these data platforms, the data layer on the left, much faster and smarter without having to do data extracts. And you’re also going to see how you can govern your data, because now we have one place with which to govern every single query, regardless of where it’s coming from. So I’m going to start today with a demo

29:50
where I’m going to wear two hats. First I’m going to be the data steward, where I’m going to create that model, very similar to what you saw Pedro do in Cognos. But I’m going to do that once, in AtScale Design Center, our modelling environment. And then I’m going to publish that model, and I’ll show you consuming that same semantic model that I will build here live for you.

30:14
And I’ll show you how we can actually consume it in a bunch of different tools: Tableau, Excel and Power BI. Then I’m going to hand it back to Pedro, because we’re working without a net here. We’re doing this live; we’re not faking it. And Pedro is going to connect Cognos to the AtScale semantic layer and show you how you can do the same analytics that we did in the other tools. So here’s the demo. We’re going to look at these different consumption tools.

30:43
AtScale is a very thin layer that is installed in your VPC. It’s just a VM that is intercepting logical queries from the different consumers and turning those logical queries into real, live data warehouse queries. Today I’m doing this on Google BigQuery, and simultaneously AtScale, the semantic layer, will actually create aggregate tables in BigQuery

31:11
and use those aggregate tables to speed performance. So this is how we’re able to use a universal semantic layer and still allow direct connect, direct query, live query, without having to resort to data extracts like Pedro did accidentally with Power BI. Okay. So enough said, enough slides; let’s get to doing some real work here.
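To make the aggregate idea concrete, here is a sketch of the kind of summary table a semantic layer could build and then substitute for the raw fact table. The table name and structure are illustrative; the aggregates AtScale actually generates are managed internally:

    -- Hypothetical aggregate: sales pre-summarized by year and city
    CREATE TABLE agg_sale_year_city AS
    SELECT d.calendar_year,
           f.city_key,
           SUM(f.profit) AS profit,
           SUM(f.total_including_tax) AS sales
    FROM fact_sale AS f
    JOIN dim_date AS d ON f.invoice_date_key = d.date_key
    GROUP BY d.calendar_year, f.city_key;

A later query asking for profit by year can then scan this small table instead of the full fact table, which is where the speedup comes from.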

31:38
So I’m going to first show you how the data model looks when it’s all done. So here’s our model. You can see it’s obviously a multi-fact model where we have sales, we have movement, we have orders, and all my dimensions and measures are showing over here in my preview panel. So what you see here is all of my dimensions as well as my hierarchies, all spelled out. Very easy to consume.

32:08
So what we’re going to do is rebuild the model, and we’re not going to rebuild the whole model; we’ll do exactly what Pedro just did, so you can see how it works, and then we’re going to go and consume that model. So let’s get busy and do that. I’m going to go really fast because I want to give Pedro some time to wrap things up. We’re going to create a new model and we’re going to call it Pedro Importers, so you know I’m not cheating here.

32:38
OK, so we created a new model called Pedro Importers. I’m going to go to my canvas, and here is my blank canvas. So I’m going to start to build this up by first going over here and looking at my data sources. Now I have a number of different data sources configured; in this case you can see all my schemas here. And Pedro started with fact sale, so I’ll pull that onto my canvas.

33:05
And he created a profit measure. So I’m going to go and do that by dragging and dropping the profit column onto my measures panel. So that’s now a profit measure. I’m not going to really change anything here, but AtScale does all the kind of stuff that needs to happen: it’ll choose aggregation methods, and it’ll let you put things into folders if you like. I’m just going to go ahead and plop it there. And so now you see I have my very first measure, called profit, in my model.

33:36
I also want to do my sales, which happens to be called, in the table, total including tax. So I’m going to drop that in there, but that’s a bad name; I’m going to name it sales, which is what it really is. And so now I have profit and sales, okay. So that is an example of a very simple transformation. Now what about my dimensions? Well, I can go ahead and build my dimensions on the fly,

34:05
but I can also use our library. The library allows you to reuse conformed dimensions across different models without having to reinvent the wheel. So for example, I can take my date dimension and drag it onto my canvas, and you can see it has a couple of different calendars: a fiscal calendar and a Gregorian calendar.

34:34
I definitely want to take my geography, because I definitely want to see where my customers are coming from, so I’ll drag that on. I definitely want to look at my customers, so I’ll bring that onto my canvas. And let’s see, Pedro also did stock items, so let’s do that. OK. So now watch what happens. These are green because they are models unto themselves, and you can see these models

34:59
have their own hierarchies and definitions and levels defined. So if I double-click on geography, you can see that geography city is my core data, and you can see I’ve defined two different hierarchies here. All that means is that I can start to connect these up really easily. So for example, I can hook up my geography by city.

35:26
So I’ll just create that relationship like Pedro did, and watch what happened: you now see there’s a folder called Location, and you can see I’ve got my whole hierarchy that’s been inherited. It inherited it because all I needed to do was create that relationship; I didn’t have to reinvent the wheel. There you can see that I have my customer dimension. So let’s go and connect that up, and you can see now I have my customers,

35:54
and now I have my customer names, where they live, what their category is, who their primary contact is. All that came for free without having to reinvent it. What else do I have? I have my stock item; let’s do that. And now I’ve got my stock items here and a bunch of different attributes for how I want to look at my stock. All kinds of different things, again without having to reinvent the wheel there. So what about my dates? Well, I have a couple of different dates, right? I have an invoice date

36:24
and I have a delivery date. Well, I can just go ahead and hook up and do what we call role-playing by putting a template here. And now I’ve created my delivery date, and I’ll do the same thing with my invoice date. And just like that, I now have a calendar dates folder, and there I have my different hierarchies that have been inherited, right there.
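In SQL terms, the role-playing just shown corresponds to joining the same date dimension to the fact table twice, once per role. Column names again follow the Wide World Importers sample; this is an illustrative sketch, not AtScale-generated SQL:

    SELECT inv.[Calendar Year] AS [Invoice Year],
           del.[Calendar Year] AS [Delivery Year],
           SUM(f.[Profit])     AS [Profit]
    FROM Fact.Sale AS f
    JOIN Dimension.Date AS inv ON f.[Invoice Date Key]  = inv.[Date]
    JOIN Dimension.Date AS del ON f.[Delivery Date Key] = del.[Date]
    GROUP BY inv.[Calendar Year], del.[Calendar Year];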

36:53
Okay. So the one last thing that I’ll do before I change hats to a consumer is go ahead and add a calculation. What Pedro added was a profit percent, so let’s go ahead and define that. I’ll call it profit percent and I’ll just go ahead and write it: our profit percent is profit divided by sales; I put my division in there

37:24
and test it. And just like that, you see profit percent is now in my preview panel. So all that’s left to do is to start to query this. I’m going to go back to my project here and publish it. As I publish it, that just pushes it out to the AtScale server, which it did, and now I’m ready to go.

37:50
OK, so I’m done being a modeler, and now I’m going to turn the tables here. I’m going to start with Tableau, and then I’m going to go to Power BI and Excel and show you exactly what it looks like to a consumer who is working with that Pedro Importers semantic layer that I just created. So here’s my Tableau Desktop.

38:18
I’m going to go ahead and log in using my credentials. Of course, this could be my Active Directory credentials, or whatever directory service you happen to be using at your company. And I’m logging in. I’m not creating a data extract; this is all going to be live. So what you see here is there’s Pedro Importers, just to show you we’re not cheating.

38:48
You can also see that we’re connecting live. So but I have all of my. I have all of my metadata here without having to remodel. So now all I need to do is to is to get busy and run some queries. There’s my profit. Remember I had my different dates here. Let’s look at the delivery date. We can start and looking at my profit by year and look, I can just drill down.

39:15
Tableau is working exactly like I expect it to work, very intuitive. And if I want, I can take that profit percent and put it into my colours. And just like that, we now have, and I’ll just go back here, let’s do a better visualization. Let’s go back to here, and then I’ll put my profit percent into my colour, and you can see how much my profit percent is.

39:44
So there’s my first example. I didn’t have to remodel the data at all. I didn’t even have to go to that Data Sources tab; Pedro Importers has already been done, already precooked for me. So let’s go back now, and I want to show you what I think we can do with Excel. Everybody knows Excel. Everybody uses Excel, of course, and Excel typically is a dumping ground. IT hates it

40:13
because people abuse Excel, but everything ends up being in Excel anyway. So I’m actually going to create a live pivot table right now using the built-in Analysis Services connector that already comes with Excel. I don’t need to dump data in here. All I need to do is go ahead and connect to that worldwide importers, and look:

40:38
Pedro Importers now is the equivalent of an Analysis Services cube, and just like that I can go ahead and run my analytics right here in Excel, treat Excel just like a BI tool, and drill down and do what I want.

41:01
And if I am a real power user with Excel, I’m not going to want to just stick with a pivot table. I’m going to want to convert that pivot table into actual cell references. So you see, these are now cube values, which can now live anywhere I want in my workbook, and I don’t have to worry about cutting and pasting. When I’m ready, if I want new and fresh data, I just hit refresh.
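For readers who haven't converted a pivot table to formulas before: the result is Excel's built-in cube functions, which query the connection directly. A representative cell formula, with a hypothetical connection name and member path, looks like this:

    =CUBEVALUE("Pedro Importers", "[Measures].[Profit]", "[Delivery Date].[Calendar Year].&[2015]")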

41:28
It makes a live connection to BigQuery, and just like that, we’ve got updated data in our Excel workbook. Okay. So let’s bring it home and do one last thing. I’m going to go and connect now to Power BI. I’m going to do it in much the same way you saw me do Excel. I’m not going to do any kind of import mode; I’m going to reference my live connection so you can see it. I’m connecting live.

41:58
Again I’m connected through the Analysis Services connector, and I’m going to find my worldwide importers project. There’s Pedro Importers, again as an Analysis Services cube. You see what I just did here: there are all my metrics that I just created and queried, and you can see it says it’s live connected. So there’s no modelling that I had to do there.

42:26
All this is happening as a live query, without having to do any extra data prep. So if you look at what just happened there: by connecting through DAX like AtScale does, that model has been inherited from the AtScale modeler. So I didn’t have to go and do any kind of data prep all over again. It’s already been done for me.

42:53
So with that, let’s go and see what happened behind the scenes, and then I’ll hand it back to Pedro. How did that magic really happen? Let’s look at my queries here; this is back in Design Center now. Look at all the queries we ran against Pedro Importers. These were all live, and look how fast these queries were running: 7 milliseconds, 4 milliseconds, 23 milliseconds.

43:22
Why are they so fast? They’re so fast because Pedro Importers began its humble life in Tableau, and you can see that at first we had no aggregates being used; by the fourth or fifth query, AtScale was taking that MDX query and substituting an aggregate for our fact sales table instead. So by doing that, we’re able to deliver performance

43:49
that is really fast, and that’s because the AtScale semantic layer learned and tuned queries on the fly. And just like that, we went from data prep on a new model, without moving data, without any ETL, to consuming it live right before your eyes. So with that, Pedro, I’m going to stop sharing and give it back to you, and see what you can do with Cognos.

44:19
All right, you guys see my screen? Okay, Cognos land. So now we’re going to try Cognos. I know we probably have a lot of Cognos customers out there. I had created that data module earlier against the live SQL Server database. What we’re going to do is go to the Manage section, go to data server connections; we created a data server connection here to the demo server, and I’m going to go into it.

44:45
And the little trick here is that when you click on the actual connection and click on the ellipsis button over here, you’ll see something called Assets. You bring up Assets, and we should see the cubes that Dave has made available out there; you can see it right here: Pedro Importers is available. And if I click on that over here, the next step in Cognos is to create a data module.

45:10
And where do you want to create it? I’m going to put it into my content, and we’ll just call it Pedro AtScale, for example, and then save it. This creating message goes away after a bit; what’s happening is it’s starting to read everything in from the AtScale server, and then you can see it was saved successfully, right? I’m going to say done over here.

45:38
And just a little on the details of this connection: if you look at the details over here on properties, you’ll see that I’m using a Microsoft Analysis Services connection within Cognos, and it’s using the HTTP XMLA protocol to get to that. So that’s basically what we’ve done. This is basically the same URL that Dave had over there too. So Cognos is connecting to the AtScale server.

46:07
So what does that look like? Let’s go over here and close that out, OK, and I go to my content, and here is Pedro AtScale, right? If I actually create a dashboard against this guy, it’s going to read, and there it is, Pedro Importers. And if you recall,

46:33
Dave had created a customer dimension, delivery date, geography, invoice date and stock item, plus measures: there’s your profit, sales and profit percent. Here’s your invoice date, there’s your geography, and under location you’ll see all the different hierarchies and members, etcetera. And for those of you in Cognos who’ve done dashboards and reporting against cubes, even the old-school Transformer, or have pointed Cognos to Microsoft Analysis Services,

47:03
you’re going to get this kind of look; it’s basically an OLAP look, right? You’ll see the hierarchies and you’ll see the measures over here. So it’s all pre-done, right? So if you remember that Wide World Importers dashboard I had up here, I can actually copy from it. We’ll try to create a couple of measures off of that over here on the dashboard and put in that little header. If you didn’t know, you can copy little things like that from dashboard to dashboard within Cognos.

47:30
Okay. One thing we had up there was total profit and total sales in a summary. So I’m going to put the summary visualization over here, okay, and I’m going to drag profit into the value, okay. Now it’s going up there and actually reading the value from AtScale, and there is your measure. And I might do another one here for sales:

47:56
profit and sales. Let me get the other summary visualization up here. Let me just put a year for now, OK. And then sales, I’m dragging that over here, OK. And then maybe I want to filter; there’s your filters area, and over here on invoice date, I’m going to filter it by calendar year.

48:23
Invoice calendar year: I’m going to drag it here to the filter section. It’s going to read that dimension and give me the list of values; I’m going to pick 2015. OK. So 27 million, and 62.1 in sales. OK. And then from a visualization perspective, we also had a line graph. We’ll put a line graph over here; the Y axis is going to be sales

48:51
and the X axis is going to be calendar month. Again, rendering pretty quick; this is all live, going against that AtScale server, right? And if you compare that again to the live dashboard, 27 and 62.1, there’s your line graph. You also had a bar graph, which we can add there too. Okay, so

49:19
in that moment in time, within the last 45 minutes or so, I as an analyst on Cognos have pointed to what Dave created up there on the AtScale server. Dave pointed Power BI and Tableau at it and had the same look and feel, Cognos now also has the same look and feel, and all these definitions, profit percent, etcetera, are all the same across three BI tools. So I think that’s pretty cool.

49:49
All right. So that was that. Let’s finish up our presentation over here; let me swap my settings real quick. OK, we’re going to go through this, and we went through it all. I’m sorry, I have to skip through this. And we’re here. Right, Steve, I’m going to give it back to you. That looks perfect. Thanks, Pedro. And thanks, Dave,

50:22
for your demos. Just a quick bit here before we go into our Q&A session. If you’re interested in getting some additional resources, go ahead and check out our website at Senturus.com. You’ll find not only the slide deck for this webinar, but slide decks for all of our past webinars, blog posts, tips and tricks, product reviews, a whole wealth of information. We’ve been committed to sharing our BI expertise for a very long time, so check out Senturus.com/resources

50:52
for lots of free and valuable info. I want to talk a little bit about upcoming events. We’ve got a couple more webinars not too far out. On June 15th, we’ve got cool Power BI functionality that you may not know about; so if you’re a Power BI shop, or thinking about heading in that direction, check that out. Wednesday, June 21st, we’ve got another Chat with Pat, featuring our own Pat Powers.

51:19
That one will be on building data modules in Cognos, and you saw some data modules here today in Pedro’s demo; that’s June 21st. You can register for any and all of our webinars at senturus.com/events. Just a little quick background about Senturus: we are focused on modern BI, and we shine in hybrid environments. Nowadays we see a lot of folks who have Cognos plus Power BI, or Cognos plus Tableau.

51:47
It’s now more the exception than the rule to see an organization that has just a single BI tool. So we’re happy to help you with your single or hybrid environments. We’ve been in business 22 years, with 1400-plus clients and 3000-plus projects. So we’ve seen a lot and we’ve done a lot. We’re a boutique firm:

52:15
small enough to provide personal attention, but big enough to cover all of your BI needs, so don’t hesitate to reach out to us for anything you may need in terms of help with your BI environments. We’re also hiring: we’re currently looking for a managing consultant as well as a Senior Microsoft BI Consultant, and if one of those roles is of interest to you, you can send your resume to [email protected]. You can also find

52:43
more details on our careers page. And now, as promised, we’ll go into some live Q&A here with Dave and Pedro. We’ve got quite a few questions that came in during the session, so I’m just going to go through and combine some of the questions that are related.

53:05
There were a couple of questions that have already been covered. There was a question about whether AtScale works with Cognos; of course we saw the demo that Pedro did there at the end, so the answer is emphatically yes, AtScale does work with Cognos. Oh, and there was a question about role-playing dimensions. Dave, I think in your demo you did a role-playing example with some of the calendar data.

53:31
So yeah, for sure, role-playing is a key part of building the model. You can role-play any dimension, not just a time dimension. Great. And I’m going to just kind of go through some of the detailed questions here; there are a couple of great broader questions that I’m going to leave until the end. There was a question here about, during your demo,

53:55
Dave, did you have to define the hierarchies for your model, or were those already like predefined somewhere in the background or in the tables? Nelson asked a question about hierarchies.

54:06
Yeah, for the sake of time, they were. I pulled those dimensions, which include the hierarchies, which also include any kind of security around those hierarchies, onto the canvas. So those were in my library; I had predefined them in my library. But defining the hierarchies is just dragging and dropping.

54:28
You can create a hierarchy by right-clicking, and then you can drop in your hierarchies by dropping your columns into the panel that I showed you, which we call the dimension panel. Great, thanks Dave. There’s also a question here about how end users get the model connection strings they need for different tools. I assume this is just a question of, if your modelling is done in AtScale,

54:54
does AtScale produce the connection strings that you need for Tableau versus Power BI, or is that pretty straightforward? It’s a little bit out of my knowledge area. Yeah, that’s a great question. So typically, you’re asking a question of findability: how do you find those models? We have very nice integrations with the enterprise data catalogs like Alation and Collibra.

55:20
And so a lot of our customers will just find the data through those data catalogs, and those connection strings will be right there. Otherwise, they’ll get that either from an internal website or through e-mail. When it comes to authentication, we’re authenticating with the customer’s own directory service, like Active Directory, so there’s no need to manage users.

55:46
It’s just a matter of IT deciding which users can see which data and which groups can see the models, and that’s all defined within AtScale using existing directory groups and directory users. Great. Actually, that is a good segue into another question here, which is sort of horizontally related. There’s a question about row level security and column level security,

56:15
and whether AD groups can be reused within AtScale as a way to achieve row level security. Yeah, we call them security dimensions, which is how we apply row level security. So what we can do is, basically using your directory groups or your directory users, you can define rules for who gets to see what in the dimensional model.

56:41
In my typical demo I usually use an example of region-level security, so somebody from the West can’t see sales in the East. And so you can define those rules just using your directory users and groups, and then you can attach those rules, for example, to the geography dimension. Then whenever the geography dimension is used in a model,

57:07
that row level security will automatically apply and kick in for every query, regardless of whether you’re coming from Excel, Power BI, Cognos, what have you. In terms of column level security, we call that perspectives. If you are a SQL Server Analysis Services fan, it’s similar to that concept, where you can show and hide columns and create different, basically virtual, views on the same model, so you don’t have to actually create different models.

57:36
You just create different lenses on top of those models, and we call those perspectives. So that’s how we deliver both row level security and column level security. Well, the questions are coming in fast and furious here, Dave, so I’m going to keep feeding some to you.
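Conceptually, a security dimension behaves as if a filter were injected into every query that touches the governed dimension. A sketch of the effect, with hypothetical table and column names:

    -- Illustrative only: the effect of region-level security on a query
    SELECT g.region, SUM(f.profit) AS profit
    FROM fact_sale AS f
    JOIN dim_geography AS g ON f.city_key = g.city_key
    WHERE g.region IN (SELECT region
                       FROM user_region_map   -- hypothetical user-to-region mapping
                       WHERE user_name = CURRENT_USER)
    GROUP BY g.region;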

57:56
Let’s see. Suneesh asks about whether you can write SQL directly in AtScale. I would assume the answer there is yes, but I’ll let you answer. Yeah. Again, given the shortness of the demo: you can create calculations and calculated columns. You saw me create a calculated measure with MDX; I can also create a calculated column with SQL, and it’s just using the SQL of the native database,

58:22
whatever I’m connected to; it’s not a different dialect. In my case, I was connected to Google BigQuery, so I can write snippets of BigQuery SQL and include those as what we call calculated columns. We can also include the concept of almost like a view.

58:38
So rather than a dataset where I pull in a table, I can also define a query as the table, and AtScale will then allow you to model on that query as opposed to a physical table. So absolutely, that’s how the transformations are defined, without having to physically move data or use ETL to transform data. It’s all done virtually,
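As a hedged illustration of both ideas, in BigQuery's dialect, since that is what Dave was connected to (the snippets are sketches, not AtScale-generated SQL):

    -- A calculated column written in the warehouse's own SQL:
    SAFE_DIVIDE(profit, total_including_tax) AS profit_percent

    -- A query used as a virtual dataset to model against, instead of a physical table:
    SELECT invoice_date_key, city_key, profit, total_including_tax
    FROM fact_sale
    WHERE total_including_tax > 0;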

59:05
and all those calculations are applied at query time. Great. Thanks, Dave. So we’re getting low on time here, so I’m just going to take a couple more questions that are broader. Yelena asked about performance: what is the effect on performance when you put a universal semantic layer in between your BI tools and your data?

59:30
Well, the key is that a semantic layer needs to have a performance optimization function that comes with it. Because we know that if I were to connect Power BI directly to Google BigQuery with anything more than 10 or 20 million rows, we would have to resort to imports or extracts, which means we’re going to copy data out. So the semantic layer needs to be

59:57
not just equivalent to the performance of the data warehouse; it needs to be much better. So what we were doing, and I tried to show you really quickly, is that the AtScale semantic layer was creating aggregate tables in BigQuery and then using those aggregate tables instead of the raw tables. So we provided a boost in speed by accessing smaller sets of data to get the answers back.

1:00:24
And you might have also noticed that some of those queries were being accelerated through an in-memory cache. For frequently accessed aggregates, we also cache those aggregates, which is how we were able to deliver those queries to Excel; some of them were under 5 milliseconds. So absolutely, performance has to be not just as good, it needs to be much better, otherwise people will avoid the semantic layer

1:00:53
and go right back to import and extract mode, meaning they’re going to be creating data models in the BI tools themselves, which is exactly what we don’t want to happen. Great. So I’m going to take one last quick question. We’re at the top of the hour, so we’re going to need to wrap up, but a couple of folks did ask about

1:01:16
whether a modeler can modify the AtScale models at the tool level. Can you add measures? Can you make changes in Power BI and/or Tableau? And I think you’ve sort of addressed that here just now, but I think the answer is yes, you can, but it ends up defeating some of the purpose of having a universal semantic layer.

Connect with Senturus

Sign up to be notified about our upcoming events
