Cognos Data Module Architectures & Use Cases
Pros, Cons & Real-World Scenarios
When the subject of Cognos Analytics data modules comes up, the discussion usually turns to what they are and how to use them. But what are the best architectures to support those use cases?
In this on-demand webinar, we focus on the potential architectures and use cases for Cognos data modules. With an eye to new implementations, we look at data module architectures that do not exclusively rely on the legacy Framework Manager tool.
- Data module architectures
- IT-driven data modules
- End-user driven data modules
- Concepts of data set libraries
- Review of real-world data module use cases
- Current pros and cons
- Current data module gaps as compared to FM and other modeling use cases
Principal BI Analytics Architect
Pedro joined Senturus in 2010 and brings over 20 years of BI and data warehousing experience to his role. He's a veteran data architect who developed his skills as the BI industry evolved through multiple iterations of BI platforms including Cognos, MicroStrategy and Tableau.
Greetings everyone and welcome to the latest installment of the Senturus Knowledge Series.
Today we’ll be discussing Cognos Data Module Architectures and Use Cases. Before we get into the heart of the presentation,
a few housekeeping items. Please feel free to use the GoToWebinar control panel to make the session interactive. While we have all of the attendees muted out of courtesy to our presenter, we do encourage you to submit questions in the questions section, as you can see on the slide.
We generally try to respond to all the questions while the webinar is in progress; if not, stay tuned, as we do that at the very end.
If for some reason, however, we’re unable to cover all of the questions, or we don’t know the answer off the top of our heads, we will complete a written response document that we’ll post on
senturus.com, along with the presentation. So the first question we get early and throughout the presentation is, can I get a copy of the deck?
And the answer is an unqualified, absolutely.
It is available presently on senturus.com. Go to the resources tab, and then within the Resources library.
It’s a great place to bookmark as it has tons of great valuable content addressing a wide variety of business analytics topics.
Alternatively, the link has also been placed in the GoToWebinar control panel, and you can simply click that. Our agenda today: after some quick introductions, we'll get into the heart of the presentation, covering Cognos data module architectures.
We'll talk about dataset libraries, cover some specific use cases, and then stick around for a brief but informative Senturus overview and free additional resources before we get to the Q&A at the end.
So joining me today is Mister Pedro Ining on the Senturus backend.
He has over two decades of data warehousing experience that he brings to this role. He's a veteran data architect who developed his skills as the BI industry evolved through multiple iterations of BI platforms, including Cognos, MicroStrategy and Tableau. My name is Mike Weinhauer, and I'm a director here at Senturus.
I’ve been designing, selling and delivering BI solutions for I’ll just say in excess of 25 years, and among my many roles at Senturus is hosting our webinar knowledge series.
As usual, before we go, just before we get into the heart of the presentation, we’d like to get our finger on the pulse of our clients and the attendees of the webinar, so, to that end, I’m going to launch our poll.
Today’s question is, which version of Cognos are you using?
Select one of those: are you on the latest 11.1 release, somewhere between 11 and 11.1.6, or on version 10 or earlier?
The rationale for that is that data modules didn't exist in version 10 or earlier.
So that would tell us, you know, your relative level of sophistication with 11.1.
Let’s see, we’ve got about 70% of people voting here. Go ahead and get your votes in.
So it looks like the preponderance of you, closing in on two-thirds, are somewhere in the middle.
Good chunk. You all are on the latest and greatest, though. And then we got a small percentage on 10 or earlier.
So, interesting; a few more on 10 than expected, but good to see that that's the case. Thanks for sharing the information with us.
It's always interesting to see what our audience is doing. And with that, I'm going to hand the floor over to Pedro.
It’s all yours. All right. Thank you, everybody.
So, Cognos data module architectures and use cases. But before we get into that, let's just ground set: we've got a couple of webinars already out there that focus on Cognos data modules versus Framework Manager, and on self-service. We're going to talk about architectures today, but let's ground ourselves a little bit for those people who may be coming in cold.
What are data modules?
So, Cognos data modules came out with version 11. It's really a web-based, end-user-focused data blending, modeling and transformation tool within Cognos 11, allowing end users to do their own modeling within the Cognos toolset without having to use Framework Manager.
It’s really IBM’s response to this term of data democratization.
Other tools, like Tableau and Microsoft Power BI, now allow end users to do their own modeling, bring in their own spreadsheets, and point to a variety of different data sources. Generally, in the Cognos 10 series and below, IBM's product was more of an IT-centric type of product, where IT would have to do the modeling for you, create the FM model, and publish the package. And there was very little end users could do to enhance that model, or actually do their own.
So, with new tools like Tableau and Power BI, one of the first things those tools want you to do is point to a data source and do the modeling yourself, and that gives you a lot of power from end-user, self-service modeling techniques. So, in my opinion, Cognos 11, now through 11.1.7, is really IBM coming up to par with some of these other tools that are out there in the BI industry right now.
Release 11.1.7 is the latest release.
It came out, actually, last summer. 11.2 is coming out in the end of March, April timeframe.
But 11.1.7 has definitely, significantly closed some of the technical gaps between Framework Manager and data modules.
And I think you’re going to see a lot of continued enhancement in this particular toolset within the Cognos Analytics series.
And, another point here, all future development resources are going to be focused on data module specific enhancements.
The Framework Manager tool that was used prior for most modeling activities will still be supported. It probably won't ever be deprecated to the point where they take it out of the product.
But a lot of the new enhancements are going to be done in data modules themselves.
So, before I move on to the architectures, I'm sure there are a lot of people asking, you know, what are data modules still missing? There are probably a lot of IT folks out there who are using FM. I'm going to get this out of the way real quickly here before we get into the higher-level topic.
Just to tell you what’s not there right now.
For those of you who are using Framework Manager to do dimensionally modeled relational (DMR): I think there may be a lot of people still using it, but that particular niche of FM modeling is declining in use.
The DMR piece is not supported in data modules right now.
One other aspect: object-based security, which basically allows you to secure parts of the model, like tables and fields, based on security, is not currently supported.
Team-based modeling is not supported, but in my opinion we haven't used that a lot, and data modules are more of a user-based modeling tool.
But if you're asking whether you can do the branch-and-merge of team-based modeling that's in FM, it's currently not there.
And parameter maps: there's some advanced FM modeling that's done with multilingual packages, and maybe very complex row-level security implementations, in parameter maps.
That's not there. We're hoping to see some of this get picked up.
We’re hoping to see some of this get picked up.
Now, we did a whole webinar on FM versus data modules on the Senturus website, where we did talk about how you can somewhat simulate the FM-style namespaces and packages.
I would encourage you to go there and take a look at that as well.
The last one here is interesting, because we actually found this out on one of our projects, where we were trying to implement data modules.
The search & select prompt for reporting, which you can see on the right, is used quite a bit when you're creating a report and trying to do some prompting with a search & select. For some reason, that's currently not supported if the source is a data module.
So, if you have strict report requirements that need that, you might have to think of another type of prompting scenario to use. We found that out a couple of months ago, which was kind of interesting, as we started using these things. So, just some things to think about; I've outlined some of them here.
There’s always that whole FM versus data modules. How would you do this in data modules?
I would encourage you to take a look at the other webinar for that.
But for this webinar, what we want to talk about are some potential data module architectures.
I think a lot of the evolution of data modules has been about how we use them: what do they do? It's a fairly new concept.
And how do they compare to FM? What are all the different features? Those discussions about data modules get very specific technically.
But what we'd really like to talk about today are some potential key architectures that you can use to implement data modules.
As we've worked through the marketplace and through some of our clients' use cases, we've come up with basically three categories where you might use these types of architectures: what we're calling the IT-driven enterprise model, the hybrid model, and a pure end-user-driven model. Because Cognos basically has a lot of legacy.
Right? It’s been around since the nineties.
It’s gone through Cognos Report Net diagnose a Cognos 10 fabric model, different architectures that have been implemented.
And now we have this new, did a modular architecture. And what we had a lot of legacy out there.
And a lot of folks have a lot of FM packages and things like that, they’re still out there.
So as we drive through the hour, let’s talk about the classic FM model for just the baseline.
Generally, IT will gather the report requirements from the end users, and they'll develop the Framework Manager model
in a silo. Sometimes they'll go away for a couple of months and create the model based on some of those user requirements.
Then IT publishes the package from the Framework Manager model and creates canned reports from the package.
Then, of course, your users can create their own ad hoc analyses and reports from the package. But that whole layer of Cognos package and Framework Manager model development is typically hidden from the users; it's kind of a black box, OK?
So, what we’re trying to get to is, we can kind of still kind of use that model, in a sense with data modules.
I think there’s still a lot of value in having this kind of model apply in a data modular framework.
So, yes, we can replace the use of FM with data modules.
Because we all wrestle with this subject: should I replace my FM models with data modules?
But we want to parameterize that a little bit more. For new modeling projects against new data sources,
yes, we can use data modules in that respect, but driven more by business requirements, versus modeling the entire database or the entire ERP and creating one massive FM model, or one massive data module, against that source, OK?
So, they're meant to be done in smaller subject-area bites, if we use that concept.
So for existing legacy FM models, the question becomes: should I reverse engineer my entire FM model into data modules?
And generally, I think the answer is basically no. You have a lot of FM models out there, and they're working fine for you.
Maybe they've got hundreds of reports against them. But there could be subject areas in those FM models that really need to be redesigned, because a lot of these FM models have been around for 10 to 15 years. They've grown; there are namespaces in there where the current developers have no real understanding of why they are the way they are.
And users use these things, maybe from a self-service perspective, and get very lost in what they are.
So, if you look at it from that perspective, you could select business relevant subject areas in that FM model that could be redesigned for better self-service.
You could take that approach, and I would recommend taking that approach, versus doing a full reverse engineering of something like that.
So, when we do this, we create that data module. For example, IT will do it, in this particular IT-driven enterprise mode.
We make that data module read only.
And it’s controlled by IT, So, for those of you who are concerned about, you know, data governance, and people are creating data models directly against data sources, and you have data joins, and complicated data relationships. That you don’t want to go crazy.
We could create the module for the users.
Make it read only in a sense, and have a controlled by I T.
Then end-user modules can link to this module, or create datasets for their own modeling purposes, like what I'm showing over here. So you have users over here in the top
who will maybe write reports or do self-service against this,
or maybe further model the data module.
But it's IT-controlled. They can't change the relationships; everything is controlled by IT.
They can also do a linked module, and this is where I want to break out and show you a little bit of what that could potentially look like.
So let’s go over to our cognizant environment over here.
I’m going to go to my content. I have some modules already created. Let’s look at this one here. This is a starting point.
This is, by the way, release seven.
And what we have here in the data module editor, is maybe the starting point of what IT will have.
They usually have to bring in these tables, and this is your typical GO Sales data source.
What IT will probably have to do is clean this up a bit to make it easier for end users to work with. One thing we have here is a header and detail relationship over here, and we would maybe want to create, similar to FM, a merged query subject.
So, in this concept here, we have the creation of custom tables, which are views, and we create this view, called Sales Fact, for example.
OK, so, now I have the sales fact over here.
This is what I want users to really work with. The relationships are already kind of embedded over here:
that view is going to inherit the relationship that we have over here, and I don't want people to see the physical layer when they're writing reports off of this.
So, what we can do then is go ahead and hide this from users. In essence, I've created this namespace, the presentation model namespace. Then IT will continue to model this for end users and create other views. For example, this whole third-normal-form product line relationship should be denormalized into a product dimension, and we'll add more tables.
Here, IT, or the central modeler, will bring in, for example, the time dimension.
And we have to create role-playing dimensions off of this in order to satisfy the relationships, so that we can slice and dice by ship date or order date, things like that.
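To make the role-playing idea concrete, here is a minimal sketch in Python with SQLite standing in for the warehouse. The table and column names are illustrative, not the actual GO Sales schema; the point is that one physical time dimension is exposed twice as views, and the fact table joins to each role independently, which is what the data module relationships do for you.

```python
import sqlite3

# SQLite stands in for the warehouse; names are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE time_dimension (date_key INTEGER PRIMARY KEY, full_date TEXT, year INTEGER);
CREATE TABLE sales_fact (order_date_key INTEGER, ship_date_key INTEGER, quantity INTEGER);

-- One physical time dimension, exposed twice as role-playing views,
-- so users can slice by order date and ship date independently.
CREATE VIEW order_date_dim AS SELECT date_key, full_date, year FROM time_dimension;
CREATE VIEW ship_date_dim  AS SELECT date_key, full_date, year FROM time_dimension;
""")
con.executemany("INSERT INTO time_dimension VALUES (?, ?, ?)",
                [(20210101, "2021-01-01", 2021), (20210105, "2021-01-05", 2021)])
con.execute("INSERT INTO sales_fact VALUES (20210101, 20210105, 10)")

# Join the fact to each role separately, as the module relationships would.
row = con.execute("""
  SELECT o.full_date, s.full_date, f.quantity
  FROM sales_fact f
  JOIN order_date_dim o ON f.order_date_key = o.date_key
  JOIN ship_date_dim  s ON f.ship_date_key  = s.date_key
""").fetchone()
print(row)  # ('2021-01-01', '2021-01-05', 10)
```

Both views read the same physical table, so a fix to the time dimension flows to every role automatically, which is the same maintenance benefit the data module views give you.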
So, let me close this out and, in the interest of time, bring in something that is already pre-done from a presentation model perspective.
I’m going to go over here, and I’m opening up this data module, which can be represented as the presentation by IT.
That, as you just kind of look at, this is maintained by IT, OK, so, they’ve kind of built this out even more.
Now, let's look at it from the relationships perspective here.
They've actually created a ship date dimension for you, the order date dimension, the product dimension. And with data modules, we can see how those are built out. So let's go to this custom tables tab, which I believe was added in release 5 or 6.
And we can actually see that these are views. And if we look at this, for example, ship date dimension.
It’s based off the time dimension table.
The time Dimension table is in the physical layer.
And it's the same thing with order date. And the product dimension is now basically a view, or, to stay with the FM concept, a merged query subject across the four tables, OK.
So, now we’ve got this physical layer hidden, we’ve actually created this presentation layer nicely, for you over here.
So, users can go ahead and create reports and things off of that.
And it’s controlled by IT in a central model.
Now, what users can also do is create a linked data module off of this, because we want to be able to add things to it in our own way.
Um, so, I could actually go over here and say, Create Data Module.
And as you can see here, now, this is my data module, but everything is kind of linked.
It's shown in green, OK? I can still see these tables, but any changes that IT makes over here will then flow their way into the linked module that I have.
So, I’m going to go over here and save this module.
We’re going to save it as into here.
My area here for architectures here and tap, we’re going to call this link data module.
OK, now I have my linked data module.
Over here, and my temp folder.
And I could actually create a report off of this.
And when I look at it, as you can see, it looks just like that physical IT module.
So, if I create a report off of this, let’s go ahead and create a report list.
And we’ll go, sales fact, and we’ll go Quantity, and we’ll go Product Line.
And write a report.
Let’s go ahead and run that.
I basically have a report off my module.
That’s been linked.
OK, I’m going to go ahead and save that report.
Into Link Report.
OK, Now, the question is, I’ve got now my link module on my link report.
What if IT makes changes to that IT module?
Let's go over here into the IT area module.
And IT decides to go ahead and rename this to, say, QTY.
And they also want to add a new table, so they go over here and drag this table over to make it available in the presentation layer.
There we go: Branch, OK?
So, then IT is going to go ahead and save that.
Now, I’m going to go over here and go to my temp area.
I’m going to go to my link report.
And let’s see if we run this report.
It runs fine; nothing has changed yet, as we've got a bit of a caching issue here.
Eventually, that QTY label will change to reflect the rename.
So, if we look at the linked data module over here, you can see that it has inherited the QTY.
And if I go over here, let's try that linked report again.
There, you see that QTY. It has inherited that change. It didn't break the report.
It inherited the change, because this is really just a label.
So, that’s just really showing you a regression change that happened at the module level from IT, Going up into the link data module, and then into the report itself.
And, I can go in and go to my own link data module.
And see what has been added.
So, if I go over here and go to my sources: there's that Branch table that was added by IT. And there's a new feature here, which I believe came in release 7, that shows you unused items.
It highlights them for you: you have everything that the base IT model has, except Branch.
So, now I could bring that in and link to it as well, make my changes over here into this module.
So, let me go back to my presentation.
What I was trying to show there is that IT is basically the owner of this data module.
OK, and they can do all the changes over here. Users can then write reports directly against this without worrying about how to do all the joins, or trying to recreate the module from scratch.
OK, and maybe this is a representation of an FM model: a subject area that used to be an FM package and has now been made into a data module.
And then the users can also create their own data module that’s linked to here.
It will still allow users to inherit changes here from this.
And then add other things to this link module.
Which brings us to another area we can look at. So, that was the IT-driven enterprise model. Let's look at another model.
I call this one hybrid, in a sense, because what we're doing over here is leveraging existing legacy IT-maintained FM packages, this yellow box over here.
So this tries to address the question: I've got 100 FM packages over there that have been developed over time.
The relationships are still good; there's still good metadata in there from a data model perspective. Do I really have to reinvent the wheel on this thing? Parts of those packages can be integrated into a user or IT data module. We can leverage that.
The legacy package here can also be used to create new datasets to be brought into a data module.
Then we can create these other data modules, which are a hybrid of modeling directly,
leveraging FM packages that are IT-maintained,
and also bringing your own uploaded spreadsheet files into the data module itself.
So let’s do a little quick demo on that.
I’m going to go back here to My Home.
I’m going to go here, and I’m going to open up, this, what I’m calling my data module, Hybrid, which is basically, the one that you talked about, where it was kind of modeled by I T.
But I have a scenario, or I own this model module, for example, but I want to actually bring in data from a package. So, how would that work?
One scenario is basically, we’re going to add a new source.
We’re going to go over here to the samples folder.
Here we go, samples.
And we’re going to go over two models where we have our Go sales query, package, and the famous Go sales query package.
So what it shows you now is basically, kind of a reference pointers green, because you linked your LinkedIn us go sales query.
If you expand this, you could see the tables.
This ability to expand and see the tables in a package wasn't there originally in 11.1; you couldn't really see this unless you actually linked to it.
I believe it was release 4 or 5 where they allowed you to expand the namespace folders and actually see the tables.
But you don't really see those tables over here in the relationships view. And maybe in this particular use case, I don't need all this stuff.
But I do need, for example, this Retailers table. How would I represent that and bring it into the data module?
Well, one method is to simply create, through custom tables, a link to it: a view, OK? So let's go to the top here and say Create Table.
From that GO Sales query package, find the table that I'm looking for.
There is retailers and say, finished.
What it does is create this view, and I need to rename it for clarity.
Now, if we just type in "Retailers", it will bark at us a little and say, well, you've already got a Retailers table, because it's already referenced in that package.
So we can name it whatever we want; you could call it Retailer View, or Retailer.
I'll just call it Retailer and say OK. Now I have a Retailer view into the package. And I'm going to move the GO Sales query
package into the physical layer so people don't see all that mess.
OK, and then, since I have the retailer name or code, I can create a relationship from here to, for example, the sales fact, and join the two together.
So, what's happening there?
You've got a hybrid environment; you've got tables joining across sources. So that's one way to do it.
Let’s go ahead and save that.
And another way, which I think, is kind of leading into datasets.
Go ahead and save it.
I will note that I sometimes see this issue where we say Save and it doesn't say it's saved; I have to click around to save.
And you might see that little busy indicator.
If you've been using this product a bit, you may have noticed that; I've seen it across different implementations, so I think it might depend on the version, release, etc. Just a little side comment on that.
So what I want to do is actually go back and actually create a dataset, OK. I want to create a dataset, and then bring that in.
So, if I go over here, let’s go back to the package. Remember, we went to this package over here called Ghost Sales Query. Now, what I did was I brought the package in LinkedIn into the module.
But, leading up to where we're talking about datasets: maybe it's a very small table, OK?
And what might be better, potentially, is creating a dataset off of it.
Because all I want to bring in, in this particular example, is maybe the branch.
So, over here.
I have my branch table.
Actually, there's Products; we'll leave that one, and take the Branch table over here.
So I simply bring this Branch table in. And I will note that in this particular release, 11.1.7,
the dataset editor has been significantly changed, in that you can actually go in here and drop into the query painter.
What that means is that before, the dataset editor only basically allowed you to drag in tables or columns and add filters.
Now, I can actually create multiple queries. I can create joins and unions, things like that,
and have very complex dataset extraction rules, which I can now bring into my data modules.
They basically took the query painter out of the reporting tool and dropped it into the dataset editor, which is very powerful. Before, we only had the ability to go in here, drag things, and create some filters.
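Conceptually, what the query-painter-based dataset editor lets you do is define a dataset as a full query (joins, unions, multiple queries) and then materialize the result once. Here is a minimal sketch of that idea using SQLite in Python; the tables (`branch_na`, `branch_eu`, `region`) are invented for illustration and are not the GO Sales schema.

```python
import sqlite3

# Sketch of a "dataset" as a materialized extract whose definition is a
# full query (union + join), in the spirit of the query-painter-based
# dataset editor. Schema names are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE branch_na (branch_code TEXT, city TEXT);
CREATE TABLE branch_eu (branch_code TEXT, city TEXT);
CREATE TABLE region (city TEXT, region TEXT);
INSERT INTO branch_na VALUES ('B1', 'Boston');
INSERT INTO branch_eu VALUES ('B2', 'Paris');
INSERT INTO region VALUES ('Boston', 'Americas'), ('Paris', 'EMEA');

-- The dataset definition: a union of two sources joined to reference
-- data, materialized once so reports read the extract, not the sources.
CREATE TABLE branch_dataset AS
  SELECT b.branch_code, b.city, r.region
  FROM (SELECT * FROM branch_na UNION ALL SELECT * FROM branch_eu) b
  JOIN region r ON r.city = b.city;
""")
rows = con.execute(
    "SELECT branch_code, region FROM branch_dataset ORDER BY branch_code").fetchall()
print(rows)  # [('B1', 'Americas'), ('B2', 'EMEA')]
```

The earlier editor was the equivalent of a single `SELECT` with filters; allowing unions and joins in the definition is what makes complex extraction rules possible before the data ever lands in a data module.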
So this isn’t very much improved. So basically, what I’m doing here is I want to go ahead and create this dataset. I’m going to say Save and load data.
And I’m going to go over to place it in that temp folder ahead. This is called branch, for example.
We’re going to call it dataset.
You can see it's loading over here. Another feature of release 7 is the loading time.
This is a very small dataset, so it's not a perfect example, but the loading time of datasets, the creation of them, has been very much optimized.
And I know of a real-world scenario where a dataset that maybe took three minutes now takes only a little over a minute. They've taken a lot of that processing away from the report service process and put it more on the query service, so you have a lot more bandwidth on that.
So the creation of these datasets is much quicker.
So, now that I’ve created that dataset, I’m going to go here to my content.
OK, I have the dataset over here.
And I want to add that in. So I'm going to go to my linked module over here.
Sorry, I need to go to the one over here, the data module hybrid one.
OK, so here's our Retailer that I got from GO Sales Query.
I want to go ahead and add the dataset, so I go to Add Sources.
I'm going to go to the architectures webinar folder, temp; there's my branch dataset, and I bring it in.
So, now I have my dataset here.
I have the branch dataset, and then I can actually connect that over here to my sales fact and join them together.
OK, I’m going to go back to my diagram, PowerPoint.
So that allows us to leverage a couple of things here. We've got a nice hybrid model, where maybe this is my end-user data module, or it could be a data module that was created by somebody else.
And I’ve pointed to an FM package, I brought that in.
I’ve used the same FM package to create a dataset and brought that in as well, allowing end users to have much more self-service capabilities than they were used to before.
Finally, end-user driven is really people taking the modeling aspect of Cognos into their own hands for complete power and control. IT maintains the source databases, or the data warehouses, over here.
Users then utilize the Cognos self-service modeling components to model the database for their specific requirements.
Now, there’s a lot of people out there that I’ve worked with where maybe that’s not necessarily an ERP, but in large corporations are probably a lot of application databases out there.
That people in the business units are actually connecting directly and writing SQL queries against it.
Because, they know those tables really well.
They know those tables better than IT, but because Cognos has to be modeled by IT, they don’t want to go and get into a project with them.
They just point directly to the databases themselves, or they’ve taken Power BI and Tableau and pointed directly to the database themselves.
So this is a very well-suited scenario for a deployment of data modules: IT, just give me the connection, and I'll take care of the rest, because I know my data, OK. Then they can extract subsets of data from the source databases via the dataset technology that I was telling you about, and integrate them into their data modules.
And then, users can then go ahead and upload their own privately maintained data in the form of spreadsheets into that data module.
I will say that one of the use cases, an offshoot of this from an end-user perspective, is actually having data modules that consist entirely of uploaded Excel files.
Because there are a lot of use cases out there where end users, business units have created very complex processes that extract data into spreadsheets. And they have macros or other post-processing that create these beautiful Excel files with good data that’s already been kind of prepped.
What they do with them, eventually, is use them in Excel to create graphs.
Maybe they bring them into Tableau. Well, with this, we've seen use cases where these uploaded Excel files can be the primary source of data in the data module, where we add even more cleansing to it.
And then that data module becomes a source for dashboards and reports, and very good, useful information, because the data warehouse might not have that data.
This is, again, a proprietary process that’s in your organization that people have spent many years creating to produce these Excel files.
Well, they can now just bring that in, from an end user perspective, into a data module, and create dashboards themselves.
So what I want to talk about real quick is the concept of data set libraries.
So we did it create some datasets here in the presentation, but we can start thinking of datasets in terms of a library perspective.
For use, end user are IT defined data modules.
So there’s a lot of data out there typically subject specific dimensions or referential data, or even summarize fact data that are in databases.
But we can actually leverage the, the dataset, technology, and create ly raise Common Data Service library, stored, in the team content store.
Again, it’s not meant to store large amounts of data.
Typically, maybe a dataset is good for like 10 to 15 million rows max, OK?
But there can be different subsets of data.
You can partition by different dimensions, slices of a dimension such as region, that we take out of the databases and store in team content for better performance, because then these queries don’t have to run against the databases every time.
And this could really increase your performance on certain reports, and dashboards obviously run a lot quicker.
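The slicing idea above can be sketched outside Cognos: partition a large dimension into per-region subsets, each small enough to live as its own dataset in the library. The table, the `region` column, and the row ceiling here are all assumptions for illustration.

```python
import pandas as pd

MAX_DATASET_ROWS = 15_000_000  # rough practical ceiling discussed above

def slice_dimension(dim: pd.DataFrame, by: str) -> dict:
    """Return one frame per slice value, each a candidate library dataset."""
    slices = {}
    for key, part in dim.groupby(by):
        if len(part) > MAX_DATASET_ROWS:
            raise ValueError(f"slice {key!r} is too large for one dataset")
        slices[str(key)] = part.reset_index(drop=True)
    return slices

products = pd.DataFrame({
    "product_id": [1, 2, 3, 4],
    "region": ["West", "West", "East", "East"],
})
library = slice_dimension(products, "region")  # one entry per region
```

Each slice would then be saved into the matching team content folder, so regional reports hit the small local subset instead of querying the warehouse.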
We can store them in common folder structure.
So for example, over here on the home folder, I’ve created a concept of Dataset Library, OK?
So this is a simplistic one, but maybe in team content we have different subject areas.
We can create a dimensions folder, and this is where I have my branch dataset, facts, et cetera.
So what we can do is simply go in. We could even use our IT data module and create a dataset off of it.
We could leverage maybe the product dimension, just drop the product dimension over here, and say Save and load data.
And we want to put it in our dataset library, and dimensions, and we’re going to call it the product dimension.
So it’s loading the data.
OK, and that could be a large table, so we’d have to do some analysis on our own.
And over here in our Dataset Library, under dimensions, we now have our product dimension. We could then maybe even take out the references to the product dimension in that data module I showed you and just use this instead.
And then, if we look at the properties of a dataset, we can create our refresh schedules and see when it’s going to get refreshed.
Over here in the advanced area, we have things, like the number of rows and time to refresh, et cetera.
OK, so, I just wanted to finish with that, as part of the presentation.
And, again, the whole thought here is, I think a lot of people are really focused on what data modules are and all the different features of data modules, which we have covered in our webinars out there. We have a webinar called Framework Manager versus Data Modules, one on relative time in data modules, and a webinar on self-service that touches on some of these concepts.
Um, but these are definitely not the only architectures out there that you could use. It’s really just trying to spawn some thought, to get away maybe from the weeds of all the different things that data modules are able to do, and pull it back a little bit.
And really think about, if you’re trying to get to self-service, how you could use these newer technologies to get there.
And not completely rely on trying to reverse engineer all your FM models.
And try to create a better environment for self-service reporting, giving more power and data democratization to your end users.
I’ll take it back to Mike.
Thanks, Pedro. And, yeah, we got a lot of great questions. So stick around for the Q and A, and get your questions in.
If you’re looking to make the shift to Cognos self-service, and if you want to bring that slide up, Pedro, whether it’s the gamut from architectural changes all the way through to user adoption and training, we can help you.
This is what we do every day, and we can help you develop a roadmap to self-service success.
You can contact us, either via the chat or email us at email@example.com.
A couple of quick slides about Senturus before we get into the Q&A. We are the authority in business intelligence. We concentrate our expertise solely on business intelligence, with depth of knowledge across the entire BI stack.
We are known for providing clarity from the chaos of complex business requirements, disparate data sources, constantly changing targets, and changing regulatory environments. We’ve made a name for ourselves because of our strength in bridging the gap between IT and business users.
We deliver solutions that give you access to reliable, analysis-ready data across your organization, so you can quickly and easily understand the impact of the decisions you make and the actions you take.
Our consultants, like Pedro, are leading experts in the field of analytics.
With years of pragmatic, real-world expertise and experience advancing the state-of-the-art, we’re so confident in our team, and our methodology, that we back our projects.
That is unique in the industry.
We’ve been doing this for quite a while; we’ve been focused exclusively on business intelligence for over a decade.
We work across the spectrum from the Fortune 500 to the mid-market, in functional areas including the office of finance, sales and marketing, manufacturing, operations, HR, and IT.
Our team is both large enough to meet all of your business analytics needs, yet small enough to provide personalized attention.
If you’re interested in joining the Senturus team, we are hiring talented and experienced professionals.
We’re looking specifically for Cognos BI Architect Project Manager types, as well as Senior Microsoft BI Architects and Senior Tableau Report developers, as well as ETL developers.
You can see the job descriptions at the link below, and send your resume to jobs@Senturus.com if you’re interested.
You can find more resources on the Senturus website at…
senturus.com/senturus-resources/ and bookmark that, because that’s got all kinds of great information, including upcoming events like the two that I’ll show you here.
We’ve got the secrets of high performing reports, and this is really technology agnostic.
That’s coming up in a couple of weeks. Actually, I take that back: it’s at an unusual time.
It’s at 1 PM pacific time, because our presenter is from Australia. If you’re down under, you can join us on Friday at 8 AM.
And on March 11th, we’re featuring Snowflake versus Synapse, where we’ll be comparing the Snowflake and Azure Synapse Analytics data clouds. And check back soon, because in April, at a time TBD, we’ll cover the new Cognos release Pedro mentioned, with a Cognos product manager going through the new features of that exciting and long-awaited release.
Then, just a couple more here before we get into the Q&A. I’d be remiss if we didn’t bring up our complete training capabilities across the three major platforms we support: Power BI, Tableau, and Cognos.
We are ideal for organizations that run multiple of those platforms, or are moving from one to the other, and we feature all of the modes of training.
We offer group sessions, mentoring, instructor-led online courses, and self-paced e-learning, and we can mix and match those modes to suit the needs of your particular user community.
And finally, before we get to the Q&A here, again, last chance to get your questions into the Question pane, and we’ll jump to those real soon.
We provide hundreds of free resources, like this webinar and our blog, on our website. We’ve been committed to sharing our expertise for over a decade at this point.
And with that, we’ll jump over to the Q and A We have quite a few questions, Pedro.
Um, let’s see.
There was a question about your thoughts on dealing with multiple fact tables connected to the same dimension table, or having multiple dimension tables.
So I think he’s saying, let’s say I have five fact tables and three dimension tables, and several of the fact tables connect to the same table. So I think he’s asking how you would handle more complex modeling scenarios.
Yeah, the short answer is we can definitely do that.
Also, just as an aside, IBM has come up with a new data modeling guide, especially focused on data modules, which talks about some of those questions.
And in our other webinar, FM versus DM, I do talk about multiple fact tables. And one of the issues there is, of course, granularity.
One might be at the day level, one might be at the month level. There is a new feature in data modules called column dependencies, which is basically determinants. In that webinar I go into that area: where you have determinants and how you handle that.
That particular feature has been ported to data modules, so multiple fact tables and multiple dimension tables, basically modeling a star schema, are supported in data modules. That can definitely be done.
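The granularity issue with mixed-grain fact tables can be shown in a minimal sketch (this is plain pandas, not Cognos itself; all table and column names are hypothetical). Rolling the day-level fact up to the month grain before combining is effectively what determinants, or column dependencies, tell the engine to do.

```python
import pandas as pd

def to_month_grain(daily: pd.DataFrame) -> pd.DataFrame:
    """Roll a day-level fact up to the month grain."""
    out = daily.copy()
    out["month"] = out["date"].dt.to_period("M").astype(str)
    return out.groupby("month", as_index=False)["sales"].sum()

daily_sales = pd.DataFrame({
    "date": pd.to_datetime(["2021-01-05", "2021-01-20", "2021-02-03"]),
    "sales": [100, 50, 75],
})
monthly_targets = pd.DataFrame({
    "month": ["2021-01", "2021-02"],
    "target": [120, 80],
})

# Summing to the shared grain first avoids repeating the monthly target
# against every daily row (the classic double-counting trap).
combined = to_month_grain(daily_sales).merge(monthly_targets, on="month")
```

Joining the daily rows straight to the monthly table would repeat each target three or more times; aggregating first keeps both facts at a common grain.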
I would say, though, and correct me if I’m wrong, Pedro, that in general data modules are a market response to the Tableau and Power BI engines.
And typically those datasets, those models, tend to be a little less complex and comprehensive.
It’s kind of a move away from the very complex and comprehensive Framework Manager models into things that are more bite-sized or use-case specific.
Like I mentioned in the webinar, I personally wouldn’t recommend taking an FM model that has been around for a long time, with, you know, 500 tables and complex namespaces and joins, and trying to mimic that whole thing in a data module.
Think about what your business use cases are, and where you can leverage some components of that and make it better in data modules, because you can add new features in data modules that don’t exist in Framework Manager. For example, relative time, things that Transformer did very well.
You can now integrate that into a data module and have year, over year, year to date, and month to date.
With less complexity. Framework Manager never really supported that well natively, and they won’t put it in there, for sure; they put it in data modules. So you want to leverage that new technology.
You want to leverage other things that are in there, like data grouping, all these things that I talked about in the other webinar.
But then you look at the, your FM model. That’s very complex.
Take a chunk of it, or take a subject area that some people would like to use, and look at it that way.
Yeah, it’s definitely interesting, because Power BI provides some fairly robust transformation modeling capabilities.
But Tableau only, in the last, say, six months, added relationships.
Where, instead of having to model all your data as a single flat table, even if you’re combining multiple tables, you’re now able to model an abstraction layer there, which is really kind of interesting.
It just speaks to the differences there.
Someone was asking: when pointing to data, users in their environment can see Framework Manager models, but within data modules Cognos cannot see tables in the data warehouse.
Is there something that they’re missing there? I think there’s a setting where you have to allow the exposure of those tables to data module modeling, correct?
So, we pointed a data module at an FM package.
It’s like I showed you: the package comes in, and you can see the whole thing expanded on the left, but you don’t see it in the diagram. As for the question about not seeing all the database tables in the data warehouse:
That could be more of a settings item, like you mentioned. You basically want to create a data server connection to the data warehouse.
Part of that setting is the reason they don’t see them: you do have to refresh the metadata and pick the tables you want in that data server connection in order for data modules to be able to see the tables in the data warehouse.
So sometimes people will create the data server connection to the data warehouse but don’t do that next step, refreshing the metadata, and then they don’t see the tables.
Yeah, so there’s that last step that people forget about, So. Yeah.
Yeah, which is interesting, that you’d have to do that manually. It’s kind of buried in there and you have to go find it, but refresh the metadata and hopefully you’ll be able to see them.
There’s a scenario here where a person is asking: if the user has read-only access, and they copy the data module to My Folders and make changes to the joins, is there an ability to limit the copying, or otherwise secure that?
Well, ultimately, users can do whatever they want, in a sense. They can break those links, and then they no longer have any links back to the read-only data module, right.
And people can still copy; they can copy the source data module over there. You can try to keep things read-only in a sense, but it’s not completely foolproof, because if they do have a connection back to the database, they could recreate it themselves, too.
So I would say the read-only aspect of it kind of stays there. I’d have to double-check whether, if you actually copy it back to My Folders, that actually changes.
Yeah, that’s fine.
There’s a question about how many FM packages can be combined in a data module.
That’s kind of an esoteric question.
Well, yeah, so you can continually keep adding a new FM package as a source.
I’ve done four or five; I’m sure you can keep doing it, right.
But I think the key thing is how you kind of expose that, those packages in a very logical manner in the data module.
So I showed you a concept where I added a package, and I just created a view to that and put it in kind of like a presentation layer.
So I don’t think you’d want to just bring in packages and expose the whole thing, because then you’d basically mimic the mess of the package.
So yes, you could bring them in, but there’s some work that has to be done in the presentation layer to make it useful. Exactly, that’s the key point: they have to be usable.
When you’re combining an FM package into a data module, does the FM package need to be built in DQM, or can it be CQM?
I believe it has to be DQM.
CQM is really going away. I think everything around data modules is DQM, so I believe those packages have to be DQM.
If you create a hybrid data module with query subjects that have object-based security, will that security be preserved?
Object-based security in a hybrid module. So whatever you bring in from FM is going to carry that.
Yeah, if, if the user can’t see that module based on security, yes, it should inherit that as well.
So it will check that; it will inherit the FM security properties.
Ultimately, it’s going to run through FM, is what it’s going to do.
It’s going to run through the package.
Yeah, that’s what I figured. Makes sense.
So, on dataset libraries: how would you save different datasets, say, for different departments? Datasets saved under the same name, i.e., the security is based on department?
Sorry, I’m kind of reading the question here, OK.
Um, so, ultimately, datasets are basically files, you know; they’re files stored in Cognos.
And if you don’t want certain people to see certain datasets at the Team Content Folder Structure level, that’s where you kind of maintain your security, right.
So, if I’m creating a regional product dimension, and I create it in that particular region’s folder, the dataset library for that region, you put the security there, so they can’t see it, from an object-based perspective.
So that’s all really around team content folder security.
Do we know if data modules are supported using CAFE?
I haven’t tried that.
No, I don’t know. CAFE, interesting. Yeah.
There’s a question about what kind of RAM hit the server takes when using datasets. I imagine that has a lot to do with the size of the dataset.
Yes, it does. The way datasets work is round robin, in the sense that the first person who hits the dataset is kind of hitting the file, but then that dataset gets loaded into memory.
And it stays in memory until memory is full and another dataset comes in and needs the space.
It will depend on the size, right?
There are service settings which allow you to adjust that; you have to do it in the admin console.
I think the default is eight gigs or maybe four gigs.
But then you can increase it. Say you have a large server with a lot of memory: you can increase the amount of memory that’s used for datasets in the admin settings. I don’t have the setting off the top of my head, but you can control that and allow more memory on your servers for datasets.
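The eviction behavior described above can be modeled with a toy least-recently-used cache. This is purely an illustration of the idea; the actual Cognos query service internals and its eviction policy may differ, and all dataset names and sizes here are made up.

```python
from collections import OrderedDict

class DatasetCache:
    """Toy model: datasets occupy a fixed memory budget; the least
    recently used one is evicted when a newcomer needs the space."""

    def __init__(self, budget_bytes: int):
        self.budget = budget_bytes
        self.used = 0
        self.resident = OrderedDict()  # dataset name -> size in bytes

    def load(self, name: str, size: int) -> None:
        if name in self.resident:          # already in memory: just touch it
            self.resident.move_to_end(name)
            return
        # Evict oldest entries until the new dataset fits in the budget.
        while self.used + size > self.budget and self.resident:
            _, freed = self.resident.popitem(last=False)
            self.used -= freed
        self.resident[name] = size
        self.used += size

cache = DatasetCache(budget_bytes=10)
cache.load("products", 6)
cache.load("branches", 3)
cache.load("sales_summary", 4)   # "products" is evicted to make room
```

This is why the size of your datasets matters: a few very large datasets churning in and out of a small budget means frequent re-reads from disk, which is what the admin memory setting lets you avoid.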
Good to know.
We have an attendee saying that they have already been using data modules, where they combine various sources to create data modules and then create datasets out of those to improve performance. Yes. And then bringing that back into the data module. Yes, that’s great.
From a self-service perspective, have they been seeing issues when trying to combine packages and datasets into data modules, or other shortcomings that you’re aware of, about using packages in data modules?
So the …, like I showed you before, I’ve actually joined across; you know, I might have a date dimension or some dimension that’s in the package, going to a different database, and joined it to a dataset that’s in my data module.
I’m sure there are probably some scenarios out there, there might be some issues.
But for what I’ve seen so far and use, I have not seen too many.
I mean, if you’ve got a very big table that you’re linking to in a package, you’re limited by the performance of the database. You’re still going basically through FM packages to the query service, back to the database, waiting for the data to come back, and then joining that maybe to a dataset in Cognos that’s going into the module.
So your lowest common denominator is that connection back to the database.
You know, adding to the comment the person made about dataset libraries: there’s the scenario where we had to create a data module just to create a dataset for the library and then bring the dataset back in, which is convoluted, because you’ve created relationships in the module in order to create the dataset you want, and then you put that dataset back into the same data module.
I believe that particular issue will be cured now with the new Release 7 dataset editor.
I showed it a little bit.
But because you can now control the queries during the creation of the dataset, you can go against the package and, like I said, create your unions, joins, filters, complex filtering.
Which maybe is what you were using the data module for, what I’d call staging data modules, to create the datasets you really want.
That’s now embedded into the dataset editor, which makes it a lot easier to use and, lastly, less of a headache.
So that whole staging data module, intermediate step, can now be potentially replaced by the dataset editor completely.
Interesting. Yeah, then coming back to CAFE really quick.
That’s been deprecated; it’s obsolete now, and it’s only included in Planning Analytics, where it’s called Planning Analytics for Excel, or PAx. So, just so you’re aware of that.
Now, it is past the top of the hour, so I want to be respectful to everybody here.
And there are a bunch of questions that we didn’t get to, so we will complete a response document and post that to our website, right there along with the recording, which will show up in a few weeks, and the deck that’s already up there.
Now, we thank you all for your time today. Pedro, Thank you for another excellent presentation.
And, if you want to go to the last slide there, Pedro: please do reach out to us if you have any analytics needs. If you still use a phone, we’ve got a toll-free 888 number. Go to our website, or you can always email us at firstname.lastname@example.org. So thank you very much for your time today, everyone.
And we look forward to seeing you on a future Senturus event.