Users are demanding self-service reporting and data discovery. Your Framework Manager model does not support fast ad hoc reporting—but it has a ton of metadata and business logic in it. You heard data modules and data sets in Cognos Analytics are IBM’s response to Tableau and Power BI. Which way to turn?
A hybrid approach! Use data modules and data sets to leverage your existing Framework Manager model structures. The result is a nimble ad hoc reporting environment where users can drag and drop the measures they need and reports run fast. And your company continues to benefit from all that hard work you did to build out FM.
In this on-demand webinar, learn about a real-life application that used Cognos data modules and data sets to leverage existing Framework Manager models. Senturus consultant Pedro Ining discusses work he did for a client who wanted to create a high-performance ad hoc reporting environment. He’ll show how a hybrid DM and FM architecture freed users from having to remodel data in Excel.
See demos to learn
- The business use case and resulting data module architecture chosen
- How Cognos data sets were designed and created
- The modeling techniques and challenges
- Various data module features that were used
- Relative Time
- Customizing the Relative Time calcs
- Creating custom calcs like COUNT DISTINCTS, CUMES
- Managing the metadata naming conventions
- Use of COLUMN DEPENDENCIES in data modules
- Issues and problems discovered
Principal BI Analytics Architect
Pedro is a veteran data architect. His skills have kept pace with the industry through multiple iterations of BI platforms, including Cognos, MicroStrategy and Tableau.
Greetings everyone and welcome to the latest installment of the Senturus Knowledge series. Today, we’re pleased to be presenting to you on the topic of Cognos Data Modules and Framework Manager, how to combine those for high performance reporting.
Before we get into the presentation, a few housekeeping items.
Please feel free to use the GoToWebinar control panel to make this session interactive while we have the microphones muted out of courtesy to our presenters.
We strongly encourage you to submit questions, as indicated on the slide here, in the question panel. While we’re generally able to respond to questions while the webinar is in progress, we usually wait until the end of the webinar, so please stick around for that; it’s always invaluable. And if, for some reason, we’re not able to answer all questions due to time constraints, we will post the answers on Senturus.com along with the presentation.
So, the next slide here: people always ask us, can I get a copy of the presentation? The answer is an unqualified yes.
It’s available on Senturus.com, if you select the Resources tab and then go to the Knowledge Center.
Alternatively, you can click the link that has been already posted in the GoToWebinar control panel.
And while you’re over there, make sure you bookmark the Knowledge Center as it has tons of valuable content addressing a wide variety of business analytics topics.
Our agenda today, after some brief introductions, we’ll get into a data module overview, and some use cases.
We’ll look at the hybrid model and a real-life use case that we implemented here at Senturus for a client, and look at the requirements as well as the design components.
And then Pedro will be doing a demo of the features of that hybrid model use case.
And again, stick around after we do a real quick Senturus overview, for those of you who may not be familiar with what we do here at Senturus, and some great additional, pretty much entirely free resources.
And then we get to the aforementioned Q&A. So, introductions: today, I’m pleased to be joined by my compadre, Pedro Ining.
Pedro has been with Senturus since 2010 and has over 20 years of BI and data warehousing experience.
He has been instrumental in implementing data warehousing systems from scratch and has experienced the evolution of the BI industry through several iterations of BI products, including Cognos, MicroStrategy, and Tableau.
My name is Michael Weinhauer, I’m a director here at Senturus, and among my roles, I have the pleasure of emceeing our Knowledge series events.
And, in the tradition of our Knowledge Series events, we always like to get an idea of what’s going on with our audience. In that vein, I’m going to put up our first poll: what BI platforms do you use for self-service ad hoc reporting and dashboarding? Please select all that apply: are you using Cognos, Tableau, Power BI, or something else?
Or, you’re not doing self-service reporting, so, go ahead. Democracy in action.
Get your votes in here.
You must have had your coffee this morning, because you’re getting those votes in pretty quickly. We’re already at about three quarters.
A few more seconds to get those in.
All right, I’m going to close this out and share the results. So, the preponderance here is using Cognos, but about a third of you are using Tableau and Power BI, about 10% are doing something else, and a very small subset are not doing self-service.
So, it’s great to see that there seems to be a lot of self-service out there, and interesting that there’s a lot of Cognos.
Hopefully, what we show you today will be something that allows you to further expand self-service using Cognos in your organization.
So, our second poll; this is a single select: how widely employed are data modules within your organization? The answers are: widely, somewhat, not used at all, or you don’t know.
So if you don’t know, it’s OK.
A few more seconds. We’ve got about three quarters of you voting.
OK, I’ll show that back.
Good bell curve there. A small percentage widely, about half somewhat, a third not used at all, and 6% don’t know. All right, thank you.
We always find it interesting to get the pulse of the folks attending our webinars. And with that, I’m going to hand the floor and the microphone over to Mr. Pedro Ining. Pedro, the floor’s yours.
All right. Thank you, Mike.
Well, I’ve done a lot of data module type webinars, and we’ve also been able to apply some of that knowledge with some of our clients, but, for those of you who don’t know anything about data modules and datasets within the Cognos environment, let’s go through a quick definition of what these things are.
Especially, for our folks out there, just really using Cognos Analytics or using Cognos 10, for just reporting with Framework Manager packages and just doing reports and distributing reports.
So, the data module and dataset features and functionality debuted in Cognos 11 as a web-based, end-user-focused data blending, modeling, and transformation tool.
And I think, when this first came out, a lot of questions are like, oh, does it replace Framework Manager? Maybe, maybe not.
What are the differences between the two, so a lot of these questions started coming out, and we’ve done several webinars on these things.
And this particular functionality within the tool suite now is really IBM’s response to having more data democratization, much like other tools like Tableau and Power BI.
Those tools say: give us the data, and you model it. In fact, if you open Tableau from scratch, it’s going to ask you to point it to the data, and Power BI wants a data source too. All the previous versions of Cognos let IT do the modeling work; IT would serve up the data, and then you could write reports against it.
Cognos had to really respond to the way the new tools support data democratization.
And, with the release of 11.1 and its several updates, it started closing some of the technical gaps between Framework Manager and data modules.
And with 11.2 out, IBM continues that with several bug fixes and the interface improvements within the actual 11.2 release.
Another thing to make note of in terms of Framework Manager versus data modules: all future development resources at IBM will be focused on the data module component of Cognos Analytics for enhancements. Framework Manager will still be there; it’ll be like Cognos Transformer, and will not be deprecated.
And, we’re going to show you actually in this webinar how we can leverage some of that within this particular use case.
So, datasets are basically a feature within data modules that allow us to create summarized data subsets that are stored in Cognos servers. And that’s a key component of what we want to show today.
So, we’ve been doing several similar projects over the last year at our client sites, and just from my personal experience, there’s been a lot of interest now that the 11 series of products has been out for three years.
And we’ve seen a lot of different use cases.
And people really are trying to figure out what to do with this stuff. Some of the things we’ve seen: people want to create fast dashboards because their reports are slow. We’ve seen cases where they quickly created datasets, put them into data modules, and were able to create fast dashboards. Even reports: we’ve seen use cases where slow reports that hit the back-end databases were extracted, all the data for the reports, into a data module and a dataset, making those reports run faster. Kind of one-off scenarios.
We actually did a project where we did a POC as a replacement for Transformer; we replicated about 75% of the functionality of Transformer for the client, who was mulling over the benefits of doing that, and we were able to mimic some of the features of the Transformer world. I think one of the biggest issues with Transformer right now in the Cognos world is that people still love the Analysis Studio interface instead of the reporting interface. But soon that will have to be transferred over into the new interface in Cognos 11, so we wanted a POC for that.
We’ve also seen a lot of downstream manual Excel processing, where people use Cognos to extract data and create multiple files in Excel. They’ve been able to push that back up into Cognos: instead of extracting report files or text files, they extract datasets and do some of that manual Excel processing within the data modules.
But one of the more interesting projects that we’ve done recently is for a client who wanted to leverage existing FM model metadata to create datasets and summarized, subject-specific analytic layers for better ad hoc reporting and dashboard creation.
In this one, the client itself had a very good FM model implementation of their data warehouse.
So, we found this use case to be interesting. One of the webinars I did talked about different architectural use cases for data modules and datasets. There was one where the data module pointed right back to the database, and you did all your modeling raw against the database tables.
And there was one I call a hybrid model, where the FM model was actually pretty good, was against the data warehouse. All the business logic was there. All the naming of the fields were there, all the joins were there.
But in this case, unlike a lot of FM models, the FM model was very granular.
In this particular case, it’s order line by day by SKU.
So much of the reporting in that data package was good for operational reporting.
But they actually wanted a simpler, faster performing layer for ad hoc reporting and dashboard creation.
So, if they had to go against the package, the biggest complaints were that it was slow and had too much detail; and even a better package might not be well structured for ad hoc performance. And a lot of the required calculations were maybe not stored in there, so it was a great use case.
If we look at how we created the hybrid model in this case, this diagram shows what we did for this particular use case. On the upper left here, we have our databases; it could be a data warehouse, it could be the OLTP transactional system. Generally, what happens over time is you accumulate a lot of FM packages. For the webinar, we’re using a dataset from the Microsoft Wide World Importers data warehouse.
These packages were modeled against that data warehouse or even an OLTP system and they’re generally maintained by IT.
You can call them legacy, but it has really solid metadata. All the data mappings are already done there.
You don’t have to re-invent the wheel, you don’t have to point the data module back to the databases and try to figure things out.
OK, so what you can do in this model is extract datasets from those packages and leverage that metadata; these could be dimensional datasets, like I’m showing here, and also fact datasets. Then you can put those datasets into subject-area-specific data modules.
Then, what we can do is also integrate and take the uploaded files of our analysts, things like third party data, budget data, Excel files, and also integrate that into our data module.
These subject-area-specific data modules could be for sales, for supply chain analysis, for any different area, and they’ll perform well because they run off datasets, giving people a higher-performing data module area to query against.
So, the common use case for this hybrid model: it satisfies specific requirements from the end users of large, legacy Cognos organizations with a lot of packages. The packages are very stable, they have a lot of data integrity, and they have the blessing of the central IT organization.
For this use case, we wanted to create a couple of self-service analytic layers. They wanted an analytics layer for supply chain, so they had data like supply, sales forecasts, and budget, maybe at the SKU level. They wanted all those facts, essentially, in the same area, to be able to drag and drop and create dashboards and reports: a very easy-to-use self-service experience.
They also wanted an analytic layer for just sales, but maybe at a higher level: at the month level, or maybe at the day level but only at the customer ship-to. So varying grains of data, not mimicking the very granular data back in the package, was the definition of these analytic layers. And of course, we were going to utilize that FM model as a source for the datasets and integrate everything, including external files, in the data module itself.
Part of the use case was something they really needed to have done, which is kind of difficult to do off a straight FM model, even in reporting: year-over-year comparisons. They needed to do things like year to date versus last year, month to date, and quarter to date for the current month. The current quarter versus the prior month, all these kinds of comparisons, were required. A lot of you out there who have had to write reports against FM models know you have to do a lot of coding within the report to get to this. So, they wanted that built in as a requirement when we created these models, and we’re going to show you how we did that.
We leverage the relative time calculations that come with the product.
But, one of the requirements they needed was to be able to compare, not just prior year,
but they wanted to compare two years prior.
And this is something that doesn’t come out of the box, but we were able to do what the customer wanted with Cognos. We provided them not only two years, but three years and four years ago, with these relative time comparisons.
In today’s analytics, comparing sales only to last year is not always a good comparison; they might want to compare things to 2019. So, although Cognos has done a really good job with relative time, IBM can’t think of all these things upfront. We’re going to show you how to actually change the relative time calcs for that. They also wanted calculations like cumulative running totals built into the data module.
So, the short story here is they wanted as many calculations as possible done in the data module, leveraging the summarized datasets coming off the Framework Manager model, so that the analysts only have to drag and drop rather than code or build calculations off of it.
And for this specific use case, here are some of the design components we came to. When you design datasets for an analytic or summarized environment, you need to really think about what you’re doing and narrow your dataset definition, because we’re not going to bring all the rows back from the operational system.
In this example, the FM package had order detail, which contained maybe 50 million rows of data, but what they really wanted was to create several datasets summarized to specific levels of granularity.
For example, they wanted sales order data for the last two years, plus the current year.
But they only needed it at the month level and the SKU level, and this was done for supply chain analysts, who tend to live at the SKU level.
We were able to summarize that into a dataset of about 1.3 million rows; it’s only nine megabytes in size, in terms of disk or memory footprint, for the dataset.
Dataset number two: they wanted two years plus the current year, but this one was going to be at the day level and at the product line level, so they took away SKU. This was for analytics, not necessarily supply chain, but for the people in the sales and marketing area, who didn’t need to know what a particular SKU did. For the last couple of years, we were able to shrink that down to about one million rows, too.
And then there were the analytics they wanted to do across five years, maybe at an even higher level: month, customer, product line. That came to 1.8 million rows, 16.2 megs. So those numbers are not very large, and I think you might be surprised.
I’ve worked with some customers with very large back-end data warehouses. After you went through the analytics requirements phase, came up with what you thought you needed for analytics, and actually did your dataset building, you might be surprised at how much those numbers shrink down for what’s needed. Most people don’t need the billion rows in the data warehouse; they need a subset of it. And this is a great way to shrink that stuff down.
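To make that summarization concrete, here is a minimal pandas sketch of the same idea: collapsing order-line detail down to the month-by-SKU grain. The column names and values are made up for illustration; in the real project this aggregation was defined in a Cognos dataset, not in Python.

```python
import pandas as pd

# Hypothetical order-line detail: one row per order line per day per SKU.
detail = pd.DataFrame({
    "order_date": pd.to_datetime(["2016-01-05", "2016-01-07", "2016-02-11"]),
    "sku": ["A1", "A1", "B2"],
    "sales_amount": [100.0, 150.0, 80.0],
})

# Dataset 1 grain: month by SKU, for supply chain analysis.
by_month_sku = (
    detail
    .assign(month=detail["order_date"].dt.to_period("M").astype(str))
    .groupby(["month", "sku"], as_index=False)["sales_amount"]
    .sum()
)

# Two summarized rows instead of three detail rows; at scale, this is how
# tens of millions of detail rows shrink to a dataset of a million or so.
print(by_month_sku)
```

The same pattern, run at a coarser grain (day by product line, or month by customer), yields the other two datasets described above.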
Like I mentioned, in terms of the design components, we did have to customize the out-of-the-box relative time to include four prior years. We also had to create custom calculations on top of the relative time calculations; we’ll talk about that.
And this is always evolving in terms of data module best practices and how to structure things. I don’t think there’s even a cohesive, comprehensive document that recommends an approach. There’s some great documentation from IBM on how to do data modeling with data modules, but actual implementation patterns, how you store your datasets and how you bring things together, are still evolving. One of the things we used, something I’m seeing in the industry, is what I call a dimensional data module library: we created a separate data module just for dimensions. We’ll show how we integrated that into another data module.
So, for a demo, for this webinar, I’m going to show you some aspects of how this was implemented.
Now one of them, we’re going to show a dashboard that was created against the analytics layer to show you how easy it is now to actually do your analytics.
We’re going to review the practical use of a dimensional library data module.
And then we’re going to do a high-level overview of the customization of the relative time functionality, because that one has a little bit of code in it. I’m writing a blog post with much more detail, but this will give you a flavor of how that particular piece works.
I’m going to tab over to our Cognos environment. What you’re seeing here, by the way, is the new Cognos 11.2 instance. And I’ll tell you right off the bat, I like it. I think it’s a good implementation; it’s much snappier. The icons are better; they’re based off IBM’s Carbon design philosophy, which maps to all the other tools they’re building on the web. And a bunch of other things have been added to it. I think we have another webinar on 11.2 out there, but my personal take is that I’ve gotten used to it already, and I think it’s a definite improvement over the 11.1 interface. There’ll be some training involved, but I don’t think it’ll be that much of a shift. OK, so I just wanted to let you know what environment we’re in.
I’m going to go ahead and run this dashboard real quick, that I’ve created.
One of the things you’ll see right away, as you can see right here, is that dashboards load much faster. This one doesn’t have a lot of visualizations, but I’ve had dashboards with a lot of visualizations, and the render time is definitely 50% faster. As you can see, that was pretty quick. And of course, this dashboard is going against that analytics data module we created with datasets, so nothing is going against the database. I really recommend, if you’re going to create dashboards, create them against datasets and data modules if you can; that would be my first question: can we do this? Datasets and data modules give proper render times, especially when you interact with the dashboard. One of the things I’m showing here at the top are those little cards, basically KPIs; it’s actually using the KPI widget in the dashboard tool. And this is what they really wanted to see, and they wanted to be able to get it easily. They wanted to see, for example, month-to-date year-over-year sales, given the current date in the month. If it’s August 12th, how am I doing?
That’s August 1st through the 12th compared to the same period of last year. And you can see here they’re up about 11%. They wanted to see that right away.
Quarter to date for the current quarter, year-over-year sales: they wanted to see that particular KPI, and there we’re down about 0.2%. There’s also year-to-date sales, prior-year full sales, and prior-year sales from two years ago.
It looks trivial up there, but if you had to create that off of just a simple structure that only has a sales amount, and then hand it to analysts to calculate, they’d probably scratch their heads and take it down to Excel to do something different.
So, the other thing I’m showing over here is another graph, which is showing monthly sales for the last four years.
And if we expand that, we can get a better look at it.
It shows you current year, prior year, two years ago, and three years ago, at a monthly perspective.
The green one here is the current year, and the data ended in May for the current year; it’s showing you the monthly sales. You can see how it compares against the prior year’s sales, showing you where you are on a month-to-month basis.
And we also have two years ago and three years ago.
And as we went through the requirements, they also wanted to be able to show this from a cumulative perspective.
We wanted to see, for February, January plus February; for March, it should be January, February, and March.
And they wanted that to be also drag and drop.
So, if you look at the cumulative graph over here for the last four years, now we see the cumulative sales for the current year, the prior year, two years ago, and three years ago. You can see the cumulative sales for the current year flatline here because we haven’t hit June. Then here is your prior year, two years ago, and three years ago, kind of correlating with the current and last year. But this is what they wanted to see.
And we wanted to be able to create this easily, OK?
So that covers some of the requirements. And this one over here is really just showing me that I’m tying things out: it’s showing the full-year sales for 2013.
If you look here, this data is actually showing current year, 2016.
So that was just a way for me to kind of tie things out.
Let’s go ahead and edit this dashboard and show the actual data module that it’s based on.
For the purposes of this webinar, we kept things kind of simple.
We’re just slicing things by dates, locations, and products, but if you look at the sales folder, where I have the actual sales fact table, we have other calculations in here. We have the sales table, which has your basic measures. Generally, from an FM package perspective, people are just presented with a particular amount to slice by dimensions, but they want to be able to get to these more complicated kinds of measures, right?
Well, with relative time, when you embed relative time into the fact table and expand the measure here, you are now presented with the slices that IBM provides.
OK, and then you have things like the prior year sales, which we can drag here into the graph.
But then what we did, and for this webinar, we added prior relative time for two years ago and three years ago, so that the graph can be easily made.
So what does that look like from an interface perspective?
Let’s go to the tab and we’ll go here to visualizations.
We’re going to drag the line graph over here.
And we’re going to recreate that cumulative sales graph.
So, they wanted it by month on the X axis, and they wanted to see current year, prior year, et cetera, sales from a cumulative perspective.
So, what we did was create some cumulative calcs. These cumulative calcs are based on those relative time measures. So, if we drag current year in here, you can see it being created. We’ll drag in another measure: that’s your prior year, then prior two years ago and prior three years ago. We’re basically done, beyond just formatting this graph.
Adding or changing the format of the measures, maybe to currency, which you can do over here: we’re going to make the currency US dollars.
Then we’re done, right?
Quick, easy, drag and drop. This is what they want to do.
They want these measures available, OK.
So, how do we do this in the data module? We’re going to go ahead and drop into that right now.
I go back to my content.
I’ll make a comment: the icons are much better, the dashboards are a lot tighter, and I think the rendering is quick. They really tightened up some of the visuals. So, just an FYI as you start kicking the tires on 11.2. I’m going to go back to my content, and over here I have the data module that the dashboard is built on; again, it’s a very simple data module.
And I’ll make a comment, too: not everything IBM does is perfect, right?
One thing that I noticed about 11.2, I think on the last fix pack for release 7: before, when you organized these objects and saved your data module, they would stay in this particular location. It looks like with the latest release, it’s not staying; that was a bug they had fixed before. So, I’m hoping they fix that; just an FYI on this particular piece here.
And here we have sales, locations, products, and dates. OK, very simple. But if you look at sales, like I showed you, there is our relative time over here on the sales measure.
OK, and the reason why we have that is because on the metric here, on the Properties, I have a lookup reference back to dates over here.
Again, we have a webinar which kind of goes over this particular implementation, how to do this in depth.
But the actual calc is very simple if you look at it. If you edit this calculation, it’s just a running total. Really simple, but we’re leveraging the relative time slice, the measure that was embedded automatically after you implemented relative time.
So all I had to do here was, say, if I want the running total for the current year, sales amount, I just go and drag over current year sales into here.
OK. Then it’s the same thing with prior year: if you look at the edit calculation, it’s the same thing, but I’m going over here to sales amount and dragging in my prior year. The same for prior two years and prior three years; we’re leveraging that for each particular calculation.
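The running-total pattern described here is easy to picture outside of Cognos. Below is a minimal pandas sketch, assuming a hypothetical monthly series that has already been sliced into a "current year sales" measure (names invented for illustration; the actual calc lives in the data module):

```python
import pandas as pd

# Hypothetical monthly sales, already sliced by relative time into a
# "current year sales" measure (what the data module calc starts from).
monthly = pd.DataFrame({
    "month": [1, 2, 3, 4],
    "current_year_sales": [100.0, 120.0, 90.0, 110.0],
})

# The data module calc is essentially running-total(Current Year sales);
# in pandas, that's a cumulative sum ordered by month.
monthly = monthly.sort_values("month")
monthly["cumulative_current_year"] = monthly["current_year_sales"].cumsum()

print(monthly["cumulative_current_year"].tolist())
# -> [100.0, 220.0, 310.0, 420.0]
```

The same one-line pattern repeats for prior year, two years ago, and three years ago, which is exactly why the drag-and-drop calc in the data module stays so simple.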
So, those are some of the calcs. The other aspect of this implementation: over here, we’ve got our sales fact dataset. Again, this is the dataset; we have all our datasets created. I’m going to go back to content over here.
So, we created a location dimension data set, a product dimension dataset. The sales fact dataset.
But if you look at this data module, the sources here show you mainly one fact dataset.
And what I’ve done is linked back to a dimensions library, which is a data module.
So, over here, I brought in dimensions from a dimensional library and brought them into this data module.
And you can see by this little green icon that it’s linked back over here. What’s the benefit of that?
So, if you are always creating data module, different data modules for different subject areas, there’s probably a good chance that some of the dimensionality is always going to be re-used. You might be creating a supply chain data module, but you’re always going to do it by products you’re going to do by locations.
If you always bring in that same dataset and then change the names and everything back to the way your data module for sales was, it’s a lot of repetitive work. As people in IT, we want to leverage things that we’ve already done.
So with this technique here, we added a dimensional library data module, which already has the dimensions in place and drag them over here. And any changes that happen over here will be reflected over here.
I’m going to close this one down and show you that dimensional library, Here’s my dimensions library.
And here’s where we actually bring in the datasets. I’ve renamed them; for instance, I’ve renamed locations. And now I’ll go ahead and rename this particular field, for example, to State.
And then save that.
Then bring in my other data module.
As you can see, it inherited that.
And also, as an aside: when you work with data modules and you start renaming things, you generally don’t have to worry about it, unlike Framework Manager, where if you rename a column, you break reports.
Because if you look at the name here, unit price, for example, under properties, there’s a label.
And if you go under advanced, there’s an identifier.
You could change this label many, many times, and it won’t break your reports, because what the reports or dashboards are really referencing is the identifier.
If you change the identifier, then you will break reports.
So, they kind of have a two-layer approach. As part of the best practice, when you create your data module, make sure your identifiers are consistent.
The benefit of doing that is when you actually look at SQL from reports against data modules, you’ll know what it’s referencing. So, back to my original point.
Now that I’ve leveraged the dimensions library, with a different fact I could bring in other facts, add the dimensions library, and bring the new dimensions over here if I need to. And this could maybe be an IT-maintained dimensional library, where IT controls how it looks; they could put filters in there and things like that.
OK, that’s one of the use cases that we evolved our best practices from. We used it because we were building not just one data module but two, three, four data modules, all leveraging the same dimensions. So, we learned that from one of our clients. Finally, let me talk about the relative time piece.
Here, under date, they have all these different filters as well.
Besides the measures, we added these two things, right?
And let me go back to my dimensional library.
For those of you who have never implemented relative time, we have a webinar on relative time, and this is going to build on that.
So I suggest you look at that if you get a little lost in what I’m showing you here. The dimensions library over here has dates, and it has something called the Gregorian
calendar; this is what drives the relative time piece. We stay here in our dates dimension.
We are going to do a reference lookup here to our Gregorian calendar.
This is what Cognos provides you in your content, under Team Content.
If everything is installed the way your administrators typically do it, out of the box there is a Calendars folder.
And we have data modules for Gregorian calendars, for fiscal calendars.
They give you source files, tools, to create your own calendar.
Everything is in here.
And the way this works, if you look at the Gregorian calendar here.
Everything is kind of hidden, but the key source is this data module, and it’s using the same technology we use when we create other data modules.
It’s just importing a source, a Gregorian calendar CSV file.
And if you actually report off of it and bring it down to Excel, like I’m doing here,
It looks like this.
It’s just a bunch of rows.
And you can take this Excel file. The first column is called Date.
Then there’s the previous day relative to the date, the next day relative to the date, and the year of the date, anchored at the first day of the year, plus the prior year, and so on. You take this file down and manipulate it. What we did was add two columns to this spreadsheet: prior two years and prior three years.
All you have to do is create a simple Excel formula that computes the date two years ago relative to each date, populate it all the way down, and do the same for the other column.
As you can see, if the current year is 1990, two years ago is 1988, and so on. You create this file.
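The extra calendar columns described above can be generated with a small script instead of an Excel formula. This is a sketch under assumptions: the column names (`Date`, `TwoYearsAgo`, `ThreeYearsAgo`) are hypothetical, not IBM’s exact CSV headers.

```python
from datetime import date

def add_prior_year_columns(rows):
    """rows: list of dicts, each with an ISO-format 'Date' value.
    Adds hypothetical 'TwoYearsAgo' / 'ThreeYearsAgo' columns."""
    for row in rows:
        d = date.fromisoformat(row["Date"])
        def years_ago(n):
            try:
                return d.replace(year=d.year - n)
            except ValueError:  # Feb 29 mapped into a non-leap year
                return d.replace(year=d.year - n, day=28)
        row["TwoYearsAgo"] = years_ago(2).isoformat()
        row["ThreeYearsAgo"] = years_ago(3).isoformat()
    return rows

rows = add_prior_year_columns([{"Date": "1990-06-15"}])
print(rows[0]["TwoYearsAgo"], rows[0]["ThreeYearsAgo"])
# 1988-06-15 1987-06-15
```

The same function can be run over every row of the calendar CSV before uploading it back to Cognos.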
And then I’m going to go back to my content over here.
And upload it to your Cognos environment.
Here’s that file.
And then what we can do is take a copy of the Gregorian calendar they gave you and create your own.
And I created one here called Gregorian calendar demo.
This particular data module actually sources from that new CSV file that I uploaded.
And if you look at the data on the grid, it shows you the two-years-ago and three-years-ago columns. Now you have the columns it needs to create the calculations for relative time.
The final step, which I would say is a little trickier, is to copy what they did over here. Remember, they didn’t have the two-year and three-year slices, but they had prior year.
If you edit this filter, it shows you, in a macro substitution expression language, how they did it.
And if you figure this out, you can copy this and create your own filter.
Now, this will take a little bit of work in terms of trial and error.
They are using a queryValue function; if you’ve been knee-deep in Framework Manager, you know this as a macro substitution type of function. There are some expression substitutions here.
But we’re going to write a blog on this, too.
I could get into the details, but as you can see here, it’s referencing that two-years-ago column the same way prior year does, creating a dynamic BETWEEN in the SQL syntax as it builds those slices.
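The logic the macro expresses can be sketched in plain Python rather than the Cognos expression language (the calendar lookup and the full-year slice shape are assumptions for illustration; `queryValue` and the real syntax live on the Cognos side):

```python
from datetime import date

# Hypothetical stand-in for the calendar lookup the macro performs
# (in Cognos this comes from the Gregorian calendar data module).
calendar = {date(1990, 6, 15): {"TwoYearsAgo": date(1988, 6, 15)}}

def two_years_ago_filter(as_of):
    # Look up the anchor date's "two years ago" value, then build a
    # BETWEEN over that whole year, the same shape the prior-year slice uses.
    target = calendar[as_of]["TwoYearsAgo"]
    start = target.replace(month=1, day=1)
    end = target.replace(month=12, day=31)
    return f"[Date] BETWEEN '{start}' AND '{end}'"

print(two_years_ago_filter(date(1990, 6, 15)))
# [Date] BETWEEN '1988-01-01' AND '1988-12-31'
```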
OK, now that I’ve done that, I’ve created my own
Gregorian calendar data module based on the one provided by IBM, then incorporated that into my dimensions library, so it’s always there for use.
Then I incorporated that into the data module that was used
for the dashboard I showed. OK, so it’s a use case, a series of best practices, a series of things that are evolving.
But, ultimately, you want your end users to be able to create something like this fairly easily.
This is kind of a theme we’ve been developing for the last year with our other Cognos data module webinars. We’ve talked a little bit about relative time, about different architectures, and about the differences between Framework Manager and data modules and whether you should replace it. And this is a very interesting use case, because you’re leveraging all the work you did in Framework Manager, taking a piece of it, and then creating new functionality with data sets and data modules to produce this much more easily.
OK, so let me go back to my slide deck here and see where we are.
So, in summary, there are a lot of features in the new 11.1.x releases, and customers are really starting to comprehend what they are
and how to use them. The use cases and best practices with data modules and data sets are really evolving over time.
One of the things we’ve learned is that features like relative time can be customized, we can build better architectures with the data module library concept, and we can always still leverage existing Framework Manager infrastructure. That’s an important piece.
Because early on when data modules came out, the question was: do we want to replace all our FM models? That’s a lot of work, and there are a lot of good FM models out there. But we can leverage them. We can create a new analytics architecture
and then layer on data modules and data sets.
We’re creating micro marts. There are data marts in the database world, and these are little micro marts. Tableau and Power BI can do data extracts as well.
But now you can create these data modules in Cognos, and they’re like micro versions of your data warehouse. They leverage the in-memory processing of Cognos instead of spreadmarts, instead of people taking things down to a spreadsheet and basically creating the same thing.
And relative time is functionality that can be customized.
So Senturus, as a company whose consulting services have evolved over time with Cognos, can definitely help you discover new ways to leverage your Cognos Analytics investment.
If you don’t want to get into the complexities of customizing relative time beyond the out-of-the-box calculations, or want a well-implemented data module with data sets, we’ve done several projects on that and have developed evolving best practices, so you can take your investment beyond simple reporting.
Cognos obviously does simple reporting very well; it has always produced great, pixel-perfect PDF reports. But this is new functionality
That has evolved over time and is, I think really starting to gain some traction.
So, that’s the webinar. I’ll turn it back over to Mike.
This functionality has gotten pretty mature, and as you can see, you can really move the needle on self-service analytics.
And, frankly, I’m a little surprised that Cognos and IBM haven’t made more of that exact feature:
the automatic creation of relative time buckets that’s easily customizable. That’s the first thing you need to do, it’s so powerful to be able to do those comparisons, and it’s very difficult
if you don’t have something to help you do it. So it’s really powerful functionality. Thanks for that great information. Stick around, everyone, for the Q&A afterwards. Just a couple of quick slides about Senturus here.
We’ve been doing these Knowledge Series events for well over a decade, and on our website we’ve got all kinds of other great technical tips:
our blog, product demos, upcoming events that you can register for, like this one, and more in-depth information on all things business analytics.
And, specifically, if you go to the next slide, we have a couple of links here for specific Cognos data module resources.
You can click on those links, or you can search for those titles: data module architecture and use cases, successful self-service analytics, and a comparison of Framework Manager versus data modules.
And I’m sure you did at least one, if not all of those, Pedro, so you can hear his voice even more.
So as Pedro mentioned, our expertise focuses solely on modern BI with a depth of knowledge across the entire stack.
So we can help you accelerate the adoption and implementation of self-service analytics at an enterprise scale.
And our clients know us, on the next slide, for providing clarity from the chaos of complex business requirements, disparate data sources, and constantly moving targets like changing regulatory environments.
And we made a name for ourselves based on our strength: bridging the gap between IT and the business.
We deliver solutions that give you access to reliable, analysis-ready data across your organization, so you can quickly and easily get answers at the point of impact, in the form of the decisions you make and the actions you take.
As mentioned a couple of slides earlier, we offer a full spectrum of BI services. Our consultants are leading experts in the field of analytics, with years of pragmatic, real-world experience, including inventing the state of the art.
I’ve been doing business analytics for over 25 years, and that’s not unusual at all here at Senturus.
In fact, we’re so confident in our team and our methodology that we back our projects with a 100% money-back guarantee, which is unique in the industry.
We offer a complete spectrum of BI training in all the modes you would expect: tailored, customized group sessions; small-group mentoring; instructor-led online courses; and self-paced learning.
We’re particularly well suited for organizations that are running multiple platforms or moving from one to the other, since folks like Pedro speak all the different languages of Cognos, Power BI, and Tableau.
And we can customize
the training so it meets the needs of your specific organization.
And we’ve been doing this for a while: we have been focused solely on business analytics for over 20 years.
At this point, we’ve worked across the spectrum from Fortune 500 down to mid-market companies, solving business problems across virtually every industry.
and many functional areas, including the office of finance, sales and marketing, manufacturing, operations, HR, and IT. Our team is large enough to meet all of your needs, yet small enough to provide personalized attention.
If this sounds appealing to you and you think you might fit the mold, we are hiring.
You can see the titles there: everything from project managers to ETL developers and everything in between.
So if you’re interested, look at the job descriptions at the link there, or send your resume to [email protected].
And with that, we come to the questions.
So I don’t know if you had a chance to look at it, Pedro, but there’s a question about whether there’s a way to check which reports use a particular data module,
like the option in Cognos Framework Manager to show object dependencies.
I would say, you’d probably have to look maybe more at the content audit database.
We’ve actually done some work in that area, where we decode the audit data and extend it a little bit. Mike, don’t we have a product like that
that we market, or something like that? But yeah, I would use the audit database to find the relationships: data module to package to report.
Yeah, the audit database information. And we do have a product that helps with migrations; it catalogs all that information, combines it with audit database information, and helps you figure things out. It’s like the decoder ring for all your content if you’re doing migrations, but that’s a little different; this sounds more like an operational type of thing.
Someone was asking about the running total of the sales fact.
You need to ensure that the data is sorted ascending by date, so how do you specify that sort order?
So, in the graph, when I added the month, the month was already sorted, because in the data module, for an attribute,
I can say, for example, that a label like Month should be sorted by a different column, which is a month number.
So when I put the two together in the graph, you saw it run January through December properly, and it took the data and added it up that way in the graph. That’s how the sales fact came out sorted correctly in that particular visualization.
So you sorted it not at the visualization level but at the data module level; the Month label is sorted in the data module.
So every time you drag in the Month label, it’s always going to go January through December, not alphabetically, because behind the scenes, as a property of an attribute, you can tell it how you want it sorted, and that can be a different field. In our date dimension there is an attribute called Month Number.
We sorted by that, and when you put the two together in that visualization, that’s how it accumulated correctly.
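The "sort one attribute by another" idea described above can be sketched outside Cognos. The field names here are illustrative; the point is that the label sorts by a companion numeric column rather than alphabetically:

```python
months = [
    {"label": "April", "number": 4},
    {"label": "August", "number": 8},
    {"label": "February", "number": 2},
    {"label": "January", "number": 1},
]

# Alphabetical order is wrong for a time axis:
alpha = sorted(m["label"] for m in months)  # April, August, February, January

# Sorting the label by its companion numeric column gives calendar order,
# which is what the Month Number attribute accomplishes in the data module:
calendar_order = [m["label"] for m in sorted(months, key=lambda m: m["number"])]
print(calendar_order)  # ['January', 'February', 'April', 'August']
```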
Great question: how do you keep the calendar current?
Generally, the CSV calendar is created one time, because the default Gregorian calendar from Cognos has rows from, I think, 1950 to 2040.
So you’re covered way out there. I think the question is maybe about the relative time periods: tomorrow, your time periods change. That works because it uses that macro language; it runs against the calendar and calculates at query time.
Yeah, and there’s one thing I didn’t really touch on in the creation of relative time.
I got the question: relative to what? There’s a parameter built into Cognos called the as-of date, and by default it’s generally the system date.
So, today’s date, so it’s anchoring against that.
Now, you can change that behavior by exposing the as-of-date parameter and then dynamically changing it in the interface.
So you can change the anchor of the date by user, a user can do their own thing if they wanted to do that.
Interesting. And then there’s a question about the Gregorian calendar that you used in this situation.
Did it come from the samples? Yes, it did.
What I did was use the Gregorian calendar and save it as my own, with a different name.
Then I modified that Gregorian calendar to link to a different CSV spreadsheet, which is the one I modified with the additional columns.
And there’s a question about whether you’ve created rolling period and/or bi-annum calculations for relative dates.
Can you expand on bi-annum? That means, like, biennial, maybe?
I don’t know if that’s every other millennium; I’d have to Google that. I’ll
just say one thing. From what I’ve seen in there, if you’re good at coding and can figure out how they’ve done things: the initial rollout of relative time didn’t have week slices, like this week versus last week.
Then in the next rev, they put the week in.
They added a column for week, and all they did was the same thing I did, and they called it a new release.
So to me, if you know what the requirements are, you might be able to continually keep adding, and creating your own custom relative time scenarios once you get the basics down of how they do it.
And basically, they have this huge spreadsheet with all the dates and different slices of them, starting with the date and then the prior date,
the next date, last year. If you figure that out, I think you can really customize it a lot.
Sounds good. You can play off the ones that are there and just tweak that syntax. So, somebody asked about providing the link for the webinar reference that covers the creation of the data module, so they can pull that back down.
And I think it was one of those three that we shared a few slides prior, right, Pedro? Where you kind of go into how those are created?
Are you talking about a different presentation?
The different presentation. Yeah, I think that’s one of those.
Yeah, so go ahead and you can look.
Either get that link or just do a search on the Resources tab of our site for data modules, and that should narrow it down sufficiently so you can find it.
How are data modules and the files within them refreshed?
So, as an attribute of a data set, you can create a schedule.
Good question, because once you create a data set, the data extracted from the database is stale right away.
But generally, if you’re coming from a data warehouse, the ETL runs every day, or once a week, depending on your refresh cycles, so you can sync that up. And you can have a schedule for each data set to refresh itself, say, every morning at 6:00 AM.
You can also use triggers, which is a little more advanced.
But out of the box, you can just put each data set on a schedule and have it run by itself, so you don’t have to worry about refreshing it.
The other question here is: is relative time available in 11.1.7? And the answer is yes, it is. There’s also a question about the name of the webinar that shows how to use relative time.
I’m not sure of the name of that one. Isn’t it the data module architectures one, or how to successfully implement self-service? It might not be one of those particular links.
I think there’s another one out there for relative time. If you go to the Senturus resources page, you can search for all the Cognos content. I’m pretty sure it’s still out there.
We’re out of time.
That one actually goes into how we actually build it, step by step.
Let me see if I’m just doing a quick search here.
Yeah, so, it’s called Using the Cognos KPI Capability and Relative Time structures.
Just search for relative time, and it will pop right up; you can poke around there.
Let’s see: how are data sets refreshed? We already answered that question, I think.
Do we need to have a JDBC connection for the Framework Manager data sources in order to leverage data modules?
So, when you, when I build the datasets, I’m coming off a package.
So theoretically, if you’ve got an old CQM package,
It should work, because I’m just querying the package.
Most packages nowadays have a JDBC connection with DQM, but whatever the package is set up for, I just point to it.
I point to my package and I say, create data set.
And it will basically create the data set. Once you’re in the data module world with data sets, you’re completely in Dynamic Query Mode.
And you don’t have to worry about any of that stuff, because at that point you’ve basically disconnected from the database.
Compare creating the summary metrics in the data module versus in database views.
Well, I don’t think there’s any comparison, at least from what I’ve seen. If you go against the database view,
you’re crossing the latency of a network into the database, running a query, waiting for Oracle or SQL Server to return the results back across the network, and then displaying them in Cognos.
When you’re using data sets and data modules, you don’t have that latency anymore; you’re accessing a file on the Cognos server.
And the cool thing about data sets is that once accessed, they go into memory.
Once in memory, you’re no longer doing disk access. So I don’t really think there’s a comparison.
OK, is there any limit on how much data you can cache in a data set for refresh? Yeah.
So, I always start off by saying, it’s not meant to bring over the billion-row table.
So, you definitely need to figure out some limitations there. What is the data you really need?
Don’t bring everything over. People bring over all the columns, and that’s going to eat up your memory.
Figure out what you need.
Now, from a sizing perspective, I’ve, I’ve brought over tens of millions of rows in a dataset.
You do need to worry about a little about sizing on the memory.
If you’ve got a one-gig Cognos machine, memory-wise, that’s not going to work too well. The more memory, the better.
Tune your data cache settings for data sets; you can create a bigger memory footprint so they stay in memory longer.
Those are all things you have to consider, but the biggest thing is: don’t bring over a billion rows. It’s not meant to do that; it’s meant to solve a specific, high-level summary analytical question. If you need granular detail, you can drill back to the operational source.
Is there a way to store data sets in another location that’s off the Cognos server?
Data sets by default are stored in the content store database, which, as your usage grows, you generally don’t want to do.
In Cognos Configuration, you can set it so that data sets are stored on a file system on disk, and that could be anywhere.
But you do want to take into account any network latency as you save the data set files to a particular server location.
And then, when is a dataset released from memory, after the report runs, or after a user session?
It’s round-robin in nature, depending on the footprint.
I access a data set, and it goes into memory.
As people access more data sets, memory fills up, and older data sets age out as newer ones come in.
That’s how it generally works.
Well, that is the last of our questions. And it is one minute after 12. So I think we can put a bow on this and call it a day.
I’m going to go over the last slide. First of all, thank you to Pedro for another great session.
And thank you to all of you for taking time out of your busy schedules to spend a little time with us here at Senturus.
If we can be of any assistance with anything business analytics related, including any of the stuff we talked about, feel free to reach out to us at 888-601-6010, if you actually still use a phone. Otherwise, you can e-mail us at [email protected] or hit us up on the chat here. We look forward to hearing from you and seeing you at one of our next Knowledge Series events. Thanks a lot, and have a great rest of your day. Bye now.