How to Successfully Implement Self-Service Analytics
Agile, governed self-service BI with a focus on Cognos Analytics
Learn what it takes to achieve the powerful reality of agile, governed self-service analytics with any BI tool, and with Cognos Analytics specifically. Expensive self-service BI implementations often serve as nothing more than a simple data extract tool that eventually feeds downstream processes into Excel. Whether you’re running Cognos, Power BI, Tableau or a combination, watch this on-demand webinar to get valuable information for achieving well-adopted self-service that delivers exceptional ROI.
Self-service analytics topics we address
- What to expect if self-service analytics lives up to its promise
- Best practices for achieving the goal of self-service
- Self-service requirements: discovery vs. gathering
- The value of the semantic layer/business logic to enabling self-service
- Pros and cons of the three main types of Cognos architecture
- IT-driven enterprise model
- End-user driven model
- Hybrid model
- Cognos self-service components
- Cognos Data Modules
- Cognos Framework Manager
Pedro Ining, Principal BI Analytics Architect
Pedro joined Senturus in 2010 and brings over 20 years of BI and data warehousing experience to his role. He has been instrumental in implementing data warehousing systems from scratch and has experienced the evolution of the BI industry through several iterations of BI products including Cognos, MicroStrategy and Tableau.
Greetings and welcome to the latest installment of the Senturus Knowledge Series. Today we’ll be discussing how to successfully implement self-service analytics.
Before we get into the core of the presentation, some quick housekeeping items.
Please feel free to use the GoToWebinar control panel to help make the session interactive. We have the microphones muted out of consideration for our presenter.
We encourage you to enter questions through the GoToWebinar control panel. While we’re generally able to respond to your questions while the webinar is in progress, we save them for the end, so stick around for that. If we’re unable to cover a question during the Q&A session, we will provide a written response document that we’ll post on senturus.com,
which leads us logically into the next slide.
And the question we get early and often throughout all these presentations: can I get a copy of today’s presentation? The answer is an unqualified yes.
It is presently available on senturus.com. Go to the Resources tab and look in the Resources Library.
Alternatively, you can click the link that has been posted in the GoToWebinar control panel here.
And while you’re there, be sure to bookmark the Resource Library, as it has tons of valuable content, including many other webinars like this one and other interesting information addressing a wide variety of business analytics topics.
Our agenda today: we’ll do some quick introductions and then get into the core of the presentation on self-service analytics.
We’ll define self-service analytics, go over some misconceptions, discuss some best practices, review requirements at a high level, and discuss Cognos architectures that support self-service analytics.
After that, please be sure to stick around through the very brief Senturus overview for some additional valuable and generally entirely free resources, and the aforementioned Q&A. I’m pleased to be joined today by my colleague, Pedro Ining, Principal BI Analytics Architect here at Senturus.
Pedro has been with us since 2010. He has over 20 years of BI and data warehousing experience. He’s been instrumental in implementing data warehousing systems from scratch and has experienced the evolution of the BI industry through several iterations of BI products, including Cognos, MicroStrategy and Tableau.
As usual, we try to take the pulse of our audience through our polls.
Our first poll today is: with which BI platforms does your organization do self-service reporting and dashboarding? This is a multi-select, so you can select all that apply.
Do you use Cognos for self-service reporting and dashboarding? Tableau? Power BI? Or something else? Or not at all, you haven’t adopted self-service reporting.
We’ll give you guys a little time to answer.
You must be well caffeinated today; we’re nearly at 80%, barely 30 seconds in.
Go ahead and check those boxes.
And, I’m going to go ahead and close that out and share the results back with you.
So, the preponderance of this audience, and this is a little surprising to me honestly, 85% are using Cognos for that. Then a third or so, closing in on 40%, evenly split between Tableau and Power BI.
Interestingly, 16% other, and only 5% not doing it, so, that’s interesting.
I guess it’s not surprising that self-service analytics is definitely the hot topic today.
The second question I have here is: what percent of your Cognos platform is used solely for traditional, canned reporting?
Please select one: a quarter, half, three quarters, or all of it. Or you don’t have Cognos, or you don’t know.
We’ve got about three quarters in here. I’ll give people a few more seconds to answer.
All right, great.
So, about half are using it 75% for traditional canned reporting.
Almost a quarter are using it 100% for canned reporting, and then a smaller percentage are using it half or less for canned reporting.
So those would be predominantly self-service, which flies a little bit in the face of the last question, or maybe I’m misinterpreting that.
But anyway, very interesting. So, thank you for sharing that.
And with that, I’m going to hand the microphone over to Pedro. Pedro, the floor’s yours.
We did a version of this webinar last year, and a high percentage of Cognos implementations are still doing a lot of canned reporting. I think that’s indicative of the legacy of Cognos, where it came from. This webinar, obviously, is on self-service analytics.
And there’s been explosive growth over the last 5 to 10 years in products that really promote self-service analytics, self-service BI: do it yourself, your own way.
Products like Tableau and Power BI are making quite a lot of inroads. Qlik and all these products are more around giving users access to the data: do it your own way, create your own visualizations.
Even IBM Cognos, for many years now, has been called Cognos Analytics, throwing the Analytics word in there too, because they have also morphed their product away from simply canned, IT-driven reporting with very structured, centrally maintained metadata toward more of a self-service model. For those of you who have been using Cognos for quite a while and have not really explored the analytics portions of the product, a lot has changed; they’ve had to keep up with the Tableaus and Power BIs to make their product more self-service capable.
And we’ll go into that a little bit.
The focus of the first part of this presentation is tool-agnostic: pulling back from specific BI tools to talk about self-service analytics in general.
Let’s look at the question: what is self-service analytics? There are a variety of definitions out there.
I’m going to offer one. You might have your own definition, but let’s look at the one I have here.
Self-service analytics allows business users to access more data sources on their own, potentially model their own data, and then create reports or dashboard visualizations with very little help from IT.
In the old paradigm, you typically have to gather requirements, have IT model the data, and give them samples of reports. They go away for a few months, come back with version one of the reports, you ask them to change things, and you’re tied to the way the model works. Self-service analytics pushes some of those modeling capabilities back into the users’ hands.
And IT then doesn’t have to be that report factory, that army of developers that has to create 10 or 15 reports for each business unit.
The outcome is that this can lead to faster and more agile data analytics compared with your traditional BI development SDLC.
So that’s one of the goals, and that’s the definition of self-service analytics.
And there’s definitely a promise in what these products, and this whole paradigm, are trying to offer you in terms of self-service: giving business users direct access to data and reporting tools will remove the reporting burden from IT. That’s one of the promises. We’re going to give these tools out to users, and then IT will not have to be that report factory anymore.
Analytical decisions can happen more quickly; the whole point is to drive business growth. We don’t have a lot of time to create reports; we need answers to some very important business questions, which will make our decisions faster and more reliable so we can be a more profitable business.
The new modern BI tools will provide more visual, insightful, automated analysis. The Tableaus of the world are very visual, cool graphs; we can get a lot of insight out of that, and it’s faster to create.
These are some of the promises of self-service, but then we have to ask the question: why do so many self-service BI efforts go wrong? This term, self-service analytics, has been around for quite a while. Initially it was all about modeling the metadata, putting Cognos on top of it, and giving users Query Studio, Report Studio, whichever studio.
At first it was really just about running manual queries against a metadata layer, and now we have these newer, more visual, insightful tools.
But implementations are still hampered, and things definitely go wrong.
So there are some common myths and misconceptions around these tools and implementations that we need to go through.
Well, one of them is the idea that we’ll just install it.
We’ll build the platform out; we’ve already got, maybe, a data warehouse. We’ll put Tableau on the desktops of 200 users, stand Tableau Server up, and we’re done: give it to the users and off they go. No. There’s a whole host of problems that people encounter from that misconception.
The other one is that we’ll eliminate the need for IT. The misconception is, again, we’ll just put these tools out there.
But in reality, no: data is complex, and complex data still needs to be modeled.
The BI tool is a doorway to access the data in the organization.
And sometimes, if you just give them complete raw access without a good semantic layer and governed data, they still won’t get the right answers, number one.
They’ll still need to come back to IT, and they may just throw their hands up and say, IT, can you just create the report for me?
Another misconception: a modern self-service BI tool will automatically be a successful project from a management perspective.
I’m going to spend a lot of money on these new tools, and it’s going to be successful, because of all the things we’ve talked about.
But if your foundational data layer, your data architecture, is not strong, if your governance is not strong, then putting those tools out there for users will not automatically lead you to a successful project.
Another misconception: users will automatically understand how to use the tool. There are a bunch of smart folks out there right now; the democratization of analytical tools and the whole data science field prove that.
But there are also people who are just trying to get their job done, and assuming that your typical financial analyst can be plopped into a tool and understand how to use it with minimal training is short-sighted. So change management and user training always have to be addressed; they’re critical. These are just some of the misconceptions we have to clear up before we start our project, before we start implementing self-service analytics.
So let’s focus on a couple of areas. If you want to implement a self-service analytics project, a self-service BI tool, platform and infrastructure, there are best practices we’re going to run through that can definitely help you with that.
And the whole requirements area, we’ll talk a little more about that; I call it requirements discovery.
That means sessions working with your end users, your business users, your business sponsors, not just to implement the technical aspects of the tool, but to really understand what it means when they want access to the data and answers to their business questions. This is something you need to focus on as well.
Then we’ll pivot and talk specifically about some Cognos self-service architectures that are in play, that you may not know about or may have heard about, and how we can implement self-service architectures on Cognos.
So, best practices. Number one, obviously, continued data governance is very important. Data warehouses are not going away.
A lot of time was spent in the nineties and 2000s establishing very robust, rigorously governed data warehouses, where we finally nailed how to make a customer dimension and a product dimension usable for the entire organization.
This curation of key enterprise data, and governance, is going to be a critical component for continued data quality.
Senturus actually has a webinar on this called Why Bother with Data Governance, and I suggest you look at that as a complete topic on its own.
But we can’t ignore that. We can’t ignore that at all.
We can’t have people just plop in tools, bypass data warehouses that have been thoroughly QA’d, go directly against the ERP, for example, and try to create their own custom dimensions.
We have to intercept that process early on, and make sure that they’re pointing to the right area, because there’s been a lot of work in that area.
We’re going to talk about requirements gathering techniques that have to go beyond the typical questions IT likes to ask end users, like, what data do you need? We need more of a discovery process.
Drilling into that, the semantic metadata layer is very important.
From the perspective of self-service analytics, we need a layer that doesn’t overburden your users. We have to make the effort to build a semantic layer for users with commonly used metrics, things we’re going to find out in the discovery sessions. Simply serving up a fact table and dimensions and thinking you’ve created a semantic layer usable by your end users will not cut it.
So we need to work in tandem with the business units to find out what that semantic layer should be in order to help them out.
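To make the idea concrete, here is a minimal, hypothetical sketch of what a semantic layer does, independent of any specific BI tool. Every table, column, and metric name below is invented for illustration; the point is that users ask for business terms like "Total Sales" while the layer owns the physical SQL:

```python
# Illustrative only: a tiny "semantic layer" that maps business-friendly
# names onto physical SQL so analysts never touch raw fact/dimension columns.
# All table, column, and metric names are invented for this example.

SEMANTIC_LAYER = {
    "Total Sales":        "SUM(f.sales_amount)",
    "Units Sold":         "SUM(f.quantity)",
    "Average Unit Price": "SUM(f.sales_amount) / NULLIF(SUM(f.quantity), 0)",
}

DIMENSIONS = {
    "Product":  "d_product.product_name",
    "Location": "d_location.city",
    "Year":     "d_date.calendar_year",
}

def build_query(metrics, group_by):
    """Translate business-friendly metric/dimension names into one SQL statement."""
    cols = [f'{DIMENSIONS[g]} AS "{g}"' for g in group_by]
    cols += [f'{SEMANTIC_LAYER[m]} AS "{m}"' for m in metrics]
    return (
        "SELECT " + ", ".join(cols)
        + " FROM f_sales f"
        + " JOIN d_product ON f.product_key = d_product.product_key"
        + " JOIN d_location ON f.location_key = d_location.location_key"
        + " JOIN d_date ON f.invoice_date_key = d_date.date_key"
        + " GROUP BY " + ", ".join(DIMENSIONS[g] for g in group_by)
    )

print(build_query(["Total Sales"], ["Year"]))
```

Notice that the user only ever supplies names like "Total Sales" and "Year"; the joins, keys, and the NULL-safe division live in the layer, which is exactly the work IT does up front so end users don't have to.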
Of course, training: capability training. There are different types of users, so you need different types of training for different types of users. This is critical.
A blanket training approach might satisfy some groups, but some groups will need very specific training. Maybe your early consumers will gradually graduate and become producers of more elaborate BI objects and dashboards as their capabilities increase.
So, we need to be able to bring the users along with good capability training.
Performance tuning: I have it up there because you could have the best BI tool in the world,
and if they drag a measure onto a dashboard or report and it takes 45 seconds to render, you’re going to lose folks.
They’re going to say, this is too slow.
It’s too complicated.
They’re going to want to reach for something else, to go somewhere else. And out in Cognos land, you probably hear a lot of that: it’s too slow.
For whatever reason, that report I run is too slow, so I’m going to schedule it, dump it to Excel, and continue my analytics in Excel, or extract the data out of the BI platform and put it into Power BI.
So a lot of companies struggle with this, because a lot of people still use the Cognos platform as an extract tool. But even in Power BI, I’ve seen dashboards that do not perform well, and you start losing people.
My last bullet is really a recognition that self-service will handle a certain percentage of your organizational BI and reporting needs.
But you have to recognize that not all areas are suited for self-service. An example is creating very specific report layouts for regulatory filings, where you have to send something to the government, or customers want an 8.5 by 11 PDF with all this data on it at certain font sizes.
Sometimes you’ve got to send it back to IT: maybe this is beyond my capabilities, and I don’t want to learn how to author a PDF report that looks exactly like this, so bring it back to IT development. There are going to be areas like that; some reports are never really going to go away.
So we have to know that self-service will not answer all our needs. Sometimes we have to fall back to traditional reporting.
So, requirements: discovery versus gathering.
Usually we get into a BI project and want to gather user requirements for BI, even for self-service, and it’s an inherently difficult task. As data and BI professionals, we typically start requirements sessions by asking typical questions like, what kind of reports do you need?
Do you have samples of the reports that you can show me?
What data do you need? What measures will you need, and what hierarchies?
Do you need dashboards? These are all important questions, and they usually let a BI professional jump into a more comfortable discussion of tables and fields and reports.
And it basically stems from the base question. We are the order takers: what do you want?
Or what do you want me to do for you?
Really, the users don’t know what they don’t know. They don’t know what they want from an analytics system.
That in turn generates a lot of churn: you come up with reports, then you have to redo them, and all those kinds of things. So we need to rethink what we’re doing.
We want to discover requirements.
We want to determine and understand the client’s goals; we want to think of it from a global perspective.
We want to understand the current business process or workflow the user is doing, before we even get to the point of what measures or reports they need. We need to discuss existing pain points.
Those existing pain points might really highlight a lot of things that are important for a self-service tool.
Then we want to consolidate it into a user story; we want to bring all these things together from a requirements perspective.
Determining and understanding the client’s goals is usually a discussion that is independent of the BI tool’s features.
We want to understand what the goals are.
And oftentimes an end user’s goal is in lockstep with their job responsibilities. For example, a C-level executive may want to see a current snapshot of FTE headcount
and how it compares with last year. A staffing manager may just want to see headcount by location and organization, and which areas have experienced the most growth, because they need to plan and forecast future hiring. These are the kinds of discussions you’re going to have, maybe at a director level.
Management would like to analyze their current workforce, see exempt versus nonexempt, and describe what parts of the organization are experiencing the most attrition.
You know, these are some of the goals that different types of users in your organization may want to get to through the analytics tool that you’re going to be implementing.
Understanding the business process or workflow.
So usually, a user’s workflow starts with some sort of triggering event.
An event that requires them to seek more information.
Then it ends with a set of tasks they need to do.
We need to analyze these workflows and find out the details of going from the triggering event, through the tasks and the workflow, to the goal.
Maybe the triggering event is, they have to get a regulatory filing out.
Maybe they’re looking at a metric on a dashboard, or report that suddenly goes beyond a threshold.
You know, does a user review the report on a daily or weekly basis, and what are those steps that contribute to the user’s decision making?
Here we might find where the friction is: which steps cause the most friction on the way to the actual goal? Is the current process too reliant on canned reports?
As you go through this, you might find that one of the friction points is that the current self-service tool only gets them to a certain point: I have to extract that data, bring it down to Excel,
and put things together to ultimately produce the analytics object that is my goal.
So walking through that, not talking too much about the BI tool but understanding their business process or workflow, will uncover quite a lot.
So understand your users’ issues with their current workflows, because that’s important. Then, once you get to a certain point, create that user story based on your findings.
After you go through all that, this is a typical story that might come out of your findings. John is an HR analyst, and he has been asked to analyze workforce FTE headcount.
The company reviews the current workforce and wants to see how FTE headcount compares to the same period last year. So there you go: here’s a comparison metric he has to produce. Management would like to see the tenure of the workforce, and they’re also interested in the breakdown by organization and location.
Then he gets to a certain point in this narrative. Say we’ve given him a self-service package in Cognos, the headcount package.
He sat down with some of the people from IT to understand the components of the package and how to query it.
He knows enough to extract data to Excel, because one of his sticking points was that he has never been able to calculate the change in FTE headcount year over year in Cognos, and he finds that task easier if he just extracts slices of the data to Excel for final reporting.
In addition, John has to integrate other datasets into his final reporting
that are not currently available in the headcount package.
Maybe that Excel file he gets from PeopleSoft has service dates, which relate back to figuring out the tenure of the workforce.
And that date field is not in the headcount package. He doesn’t want to go back to IT to figure out how to do it, but he can get an extract from PeopleSoft, and he merges those things together in Excel to come up with that final report.
That final report is then primarily made up of Excel sheets in a workbook.
So what we find here is that John, as an analyst, is doing FTE headcount reporting with year-over-year metrics, but his current workflow uses Cognos,
the self-service package, as an extract tool.
And, he completes reporting in Excel.
And, he’s got a variety of process pain points.
Among the things missing are key calculations that are not available in the package, and multiple extracts or downloads to Excel are needed because, for example, offline data has to be integrated into the final report.
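John’s sticking point, the year-over-year headcount change he resorts to Excel for, is a small calculation once the data is in one place. Here’s a hedged sketch in Python with pandas, not how any particular BI tool does it; the column names and figures are invented for the example:

```python
import pandas as pd

# Hypothetical monthly FTE headcount extract; all column names and values
# are invented for this illustration.
headcount = pd.DataFrame({
    "month": pd.to_datetime(
        ["2020-01-01", "2020-02-01", "2021-01-01", "2021-02-01"]),
    "fte": [100, 104, 112, 118],
})

# Compare each month with the same month one year earlier:
# shift the prior year's rows forward by one year and join on (year, month).
headcount["month_of_year"] = headcount["month"].dt.month
headcount["year"] = headcount["month"].dt.year
prior = (headcount.assign(year=headcount["year"] + 1)
                  .rename(columns={"fte": "fte_prior_year"}))
yoy = headcount.merge(
    prior[["year", "month_of_year", "fte_prior_year"]],
    on=["year", "month_of_year"], how="left")

yoy["yoy_change"] = yoy["fte"] - yoy["fte_prior_year"]
yoy["yoy_pct"] = yoy["yoy_change"] / yoy["fte_prior_year"] * 100
print(yoy[["month", "fte", "fte_prior_year", "yoy_change"]])
```

The point of the user story is that this logic belongs in the governed semantic layer (a calculated year-over-year measure), so every analyst gets the same number instead of each rebuilding it in a spreadsheet.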
So that’s an agnostic way of looking at things, in terms of trying to establish requirements and understand the pain points: best practices of self-service analytics.
We’re going to go now into a little bit of the technology, specifically how we can build some self-service semantic layer architectures in Cognos.
So, Cognos 11. We are now at 11.2; the Cognos 11 series started with 11.0, then 11.1.
The last release of 11.1 was 11.1.7, and 11.2 was just recently released. I’m pretty sure a lot of folks haven’t moved that out to production yet, but every time they release something, new things evolve and are added to the toolset.
Some of the key Cognos 11 self-service enablers I’ve listed here. The very first bullet is FM packages. Everybody knows FM modeling; it’s the classic modeling tool that most Cognos 10 users know, and in 11 it’s considered classic.
I say classic, but it’s still useful, because we can still use those FM models and packages as a base to help enable self-service.
Even in the newer technologies, you’ve done a lot of work modeling those FM packages;
you’ve done a lot of work modeling your data.
We don’t really want to just throw all that away; it can be used to help you create self-service environments using the self-service-enabled aspects of the 11 series product, such as the second bullet, data modules, a very end-user-centric modeling tool.
Data modules allow users to avoid Framework Manager, but we can use Framework packages as a source for data modules. Then there are datasets,
which give users the ability to extract subsets of data and store them in Cognos without having to constantly query the database.
That technology basically also exists in Tableau, in Power BI with terminologies like data extracts and datasets, right?
And dashboards: self-service, user-defined dashboards. Users creating their own dashboards is getting easier with each release.
And, of course, reporting. Basic reporting is the workhorse of most analytical toolsets.
I would argue that the reporting tool, since it’s been around for so long and has gone through so many iterations, is much better than the Tableau and Power BI versions, which really focus on visualizations. Reporting has been Cognos’ bread and butter for many years; it’s a self-service tool in its own right, and it’s so good because it’s been around for so long.
So what I want to do now is go through a couple of potential self-service models in Cognos terms. The first one is the classic IT-driven enterprise model.
And there can be a lot of great cases for keeping this model in your environment. It’s IT-centric modeling, slow on the SDLC cycle, slow to make incremental changes. But if you have a very important set of models, it has to be done in a very controlled, very governed way.
It’s the classic model where you have Framework Manager modeling against source databases, generally done by IT, publishing packages and reports that end users consume.
Sometimes at the report development layer it’s IT building those reports with a minimal number of end users, or maybe there are end users doing self-service against those packages and creating their own reports. So it’s still an option for that particular case.
Now we have a newer one: still IT-driven, but with Cognos data modules.
This would basically be a data module modeled directly against your data warehouse or your online systems. IT still controls this.
But now IT can make use of a lot of the benefits of data modules, such as the relative time features, the built-in data cleansing, and data integration with Excel files. They can also publish this IT-controlled data module as a source that end users can link to from their own data modules.
So this basically kind of takes the FM modeling out of the loop and goes direct for that.
And this is actually a nice feature: say I’ve got a brand-new application database, we’ve not done an FM model on it, and it’s a new project. You can start right away with a data module going through a data server connection against those tables directly, make it read-only and IT-controlled, and you still have that kind of governance around it.
On the far right of the spectrum is the end user driven model.
This allows your self-service analytics end users to take full control of the modeling.
You have a data module here going against a direct data server connection, pulling in tables from the data warehouse directly. Or it could be an application database, with the end users doing the modeling themselves, independent of IT; IT is in charge of the databases, making sure they’re up and the application is running. I’ve seen use cases like this in organizations or business units where somebody has developed an application database and knows those tables very well.
In fact, they know those database tables better than IT.
Maybe what they’ve been doing until now is running manual SQL queries against this database, or pointing Excel directly at it with no control, doing their analytics and bypassing any BI tool.
What we can do is have Cognos point to these databases, expose the data module capabilities to the end users, and let them do the modeling.
Then they can take advantage of creating datasets, which can be imported into other data modules, and they can also add their own offline data files to the data module.
This is fully end-user-driven modeling in this particular model.
Now, the hybrid model, as I like to call it.
This leverages the central, IT-maintained FM packages. We still have a need for these.
And instead of just throwing these away, we might create data modules off of subject areas of the FM model over here.
We’ve got a lot of work invested here: we can link to the FM packages, bring those tables directly into the data modules just by linking to them, and leverage even the security in the FM package.
They might have some very complicated row-level security that we need to reuse, and then we also integrate other sources of data in the data module through data server connections back to the database.
So if you’re doing end-user modeling at the business unit level, the end users would leverage these IT-maintained packages in creating their own data modules.
We could also use these FM packages to extract datasets.
This is the concept where maybe the FM packages have already modeled a very clean, QA'd product dimension, for example. We can create a dataset for that, offline, that gets refreshed every day; the end user can then bring it into their own data module for self-service analysis. And at the bottom end, users maintain their own uploaded files.
They bring those into data modules, join in those product dimensions, and do their own modeling using their own data.
So, those are some of the models we can use in Cognos. I'm going to go over to the Cognos environment right now, just to visualize some of the things I've been talking about.
I'm going to give you an example of a potential data module metadata layer and expand upon that. I'm going to go over here to My Content, where I have a data module, and I'm going to create a dashboard.
OK, so this is a self-service example of a metadata layer, and it's very simple for the purposes of demonstration. Here are my sales.
I've got measures in here, and I can slice them by the invoice date, or slice them by product.
I could slice it by sales locations, for example.
And if I drag in my sales total excluding tax, I get total sales, for example.
And I could also then slice it by invoice date over here, and maybe what I’m trying to do is find out for a particular calendar year.
Look at 2016, and I'm done. So I've got, basically, the measures.
I need to be able to do this kind of analysis. But then what happens is, users will say, OK, that's cool, but now how do I do, for example, year-over-year analysis? I want to be able to compare current month's sales to the same month last year, or year to date versus year to date last year.
So what we've done is expose the base measures over here, a very simple star schema with base measure metrics. But then end users have to go ahead and try to figure out how to calculate that.
So one of the things we want to do in terms of a good semantic layer, like I alluded to before, is to really work with the end users to find out if they need to do those kinds of calculations and slices.
With Cognos, what we can do is implement the relative time features built into data modules. If I expand this particular metric, I now expose those slices to the end users: there is my year to date.
There are my current month sales.
Here are my same month sales last year.
It’s already built in.
Then what a user could do is this: I'm going to go ahead and delete this.
And I’m going to bring in a new visualization here called the KPI Visualization.
Now I'm able to compare these two measures together.
So I’m going to bring in current month sales as a base value.
And then I’m going to compare that to same month last year as a target.
And right away, I can see that comparison, and I can go ahead and clean this up.
And this is like current month sales year over year.
And I’ve basically got what I want to display.
Now, the point of that is that it was just a couple of drags, right? These are already built in here. I didn't have to calculate anything.
And this is just a small example of how we can make the metadata layer, the semantic layer, easier for people to use by implementing simple features like that, which is fairly easy to do now in Cognos data modules.
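To make concrete what those built-in relative time slices compute, here is a minimal Python sketch using pandas. The table and numbers are invented for illustration; Cognos derives these measures internally from the data module, not from code like this.

```python
import pandas as pd

# Hypothetical invoice-level sales, a stand-in for the star schema's fact table
sales = pd.DataFrame({
    "invoice_date": pd.to_datetime([
        "2015-06-10", "2015-06-20", "2016-06-05", "2016-06-15", "2016-05-30"
    ]),
    "total_excluding_tax": [100.0, 150.0, 120.0, 180.0, 90.0],
})

def month_total(df, year, month):
    """Sum sales for one calendar month (a 'relative time' slice)."""
    mask = (df["invoice_date"].dt.year == year) & (df["invoice_date"].dt.month == month)
    return df.loc[mask, "total_excluding_tax"].sum()

current = month_total(sales, 2016, 6)  # current month sales
prior = month_total(sales, 2015, 6)    # same month last year
print(current, prior)                  # 300.0 250.0
```

This is exactly the pair of values the KPI visualization compares: the base value against the same-month-last-year target.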
The other aspect of this was allowing users to bring in spreadsheets and offline data. So, I'm going to go ahead and minimize this a little bit.
I'll get out of my presentation here. Typically, we have a spreadsheet; I'm going to open up this spreadsheet on my desktop.
This is a spreadsheet that could be used by users, created by a very defined process. And to be frank, a lot of these Excel processes have been created over the years, and they're not going to go away.
People get the sales goals from the organization via e-mail or via different planning systems. They put this together, and they need to be able to compare the sales goals to the sales.
Right, and I have this spreadsheet here on my desktop.
Before, in older systems, trying to get this data up into the data warehouse would take a long time.
So, one of the self-service aspects of Cognos is that we can now simply drop that spreadsheet into Cognos. I'm going to take this, and I'm going to close this dashboard.
I'm going to take this and drop it on top of my Cognos environment. You can see it's analyzing the sales goals.
OK, it brings the sales goals in, and where does it put them? In My Content, at the root folder over here: Sales Goals. Then all I'm going to do is go to that data module I had up.
And I’m going to bring that in to my data module.
So, I'm going to say Add new sources, and I'm going to go to My Content.
Let me find where it is. Hold on a second.
My Content, and here’s my Sales Goals.
OK, and Add new sources.
For some reason, it's not showing up. Hold on a second, guys.
I'm going to go ahead and bring it in. Create.
Let me try that again with the sales goals. Also, by the way, with these spreadsheets here, you could just directly query them with a dashboard. Let's take a look at that and make sure it works.
There’s my data on a dashboard, and I could do queries against that. And let me go ahead and close that out.
And go back into My Content, self-service webinar, Data Module, and say Add.
For some reason, it's not bringing it in.
I’m going to have to close this one out again. Live demos. These things happen.
Let me do this again. We'll try this one.
Add new sources... there we go. Hm, I'll have to think about that one, OK.
I'm in the data module, bringing the sales goals in. Yeah, OK.
There is my sales goal now, right. So, I’ve integrated spreadsheet data into my data module. I’m going to go ahead and save this.
OK, I’m going to go back to my content, and let’s do a dashboard off of that.
There are my sales goals, and there's my sales total excluding tax.
Let's go ahead and bring in the KPI widget. And remember, this is a spreadsheet now.
I'm going to select that one.
And then compare it against the sales goal as the target value.
There we have it, right? So, I brought in my spreadsheet.
Then I can actually join these against the different parts of the other data module and do my analysis.
So I've augmented the end user's data module with offline spreadsheet data and enhanced it. The end user is now able to do this from a self-service perspective, versus extracting the data from the data module, bringing the spreadsheet data into another spreadsheet and combining the data there, or trying to get those sales goals back into the data warehouse itself. I'm not sure why it didn't work the first time; I think it was a rights issue or a buffering issue, but that's a live demo.
But basically, that's what I'm trying to show you here: that aspect of the hybrid environment, where you can actually bring spreadsheets in.
So, let me go back to my presentation, though.
Let’s see where we are.
We are here.
OK, so in this example, those are the spreadsheets I brought in. I brought them into a data module for self-service analytics and also combined them with data in another data module.
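The comparison the demo builds, warehouse sales against uploaded goals, amounts to a simple key-based join. Here is a hedged sketch in pandas, with invented region-level numbers; in Cognos the data module performs this join, not user code.

```python
import pandas as pd

# Sales as modeled in the data module, and goals from the uploaded
# spreadsheet; the region key and figures are illustrative only.
sales = pd.DataFrame({"region": ["East", "West"], "sales": [500.0, 700.0]})
goals = pd.DataFrame({"region": ["East", "West"], "goal": [450.0, 800.0]})

# Join uploaded data to modeled data on the shared key
combined = sales.merge(goals, on="region")

# The KPI-style comparison: actuals as a fraction of goal
combined["pct_of_goal"] = combined["sales"] / combined["goal"]
print(combined)
```

This is the same analysis the user previously did by exporting data and combining spreadsheets by hand; moving the join into the governed layer is what makes it self-service rather than shadow IT.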
So, in summary, self-service BI implementations really require a tight partnership with your business sponsors. If you involve your business sponsors early in the game, bring them in through all the requirements gathering, and understand their goals, workflows and processes, you will have a real understanding of what their needs are.
And remember: a snazzy BI tool does not mean automatic success.
A good tool, implemented properly, will be a good foundation.
If you have the foundation there, then you will have a very good start on a self-service analytics platform.
So we have the technology side and we have the business partnership. Work the two together, not in isolation or in silos, and you'll have at least a good start on a successful foundation for self-service analytics.
I’m going to turn it back over to Mike.
Good stuff, Pedro. Thanks for throwing that little glitch in there; just so everybody knows, that was a real glitch. That's right.
Great, so thanks for that content. Stick around, we have some good questions in the Questions Pane.
Take a look at those if you can multitask; if not, we can certainly tee those up after.
If you have other questions, please get those in while we go through this other quick information.
At Senturus, this is what we do all day, every day. We get out of bed, put our pants on, or really our jammies these days, I guess, and we do analytics. We help people realize self-service analytics.
To that end, we have a lot of different resources, some of which are related to this: a couple of webinars on Framework Manager versus data modules, a blog on Framework Manager versus data modules, and one on data module architectures and use cases.
There's also one on data modules' new capabilities, and that's just the tip of the iceberg.
There are hundreds of webinars at this point, upwards of 200 or 300.
So, definitely head on over to our Resource Library and check those links out.
Likewise, on the next slide: how can we help you with this stuff? A lot of times the basics of this can be straightforward.
But really getting to self-service is a multi-faceted, complex process, and this is something we help people do all day, every day.
So if you want to discuss how Senturus might be able to help you, please check out that link, where we discuss some of those things in our framework. Or if you want to talk to one of us, you can reach out at [email protected]
or call 888-601-6010.
At Senturus, we concentrate our expertise on modern business intelligence with a depth of knowledge across the entire BI stack.
Our clients know us for providing clarity from the chaos of complex business requirements, disparate data sources, and constantly moving targets.
We have made a name for ourselves, because of our strength, bridging the gap between IT and business users, delivering solutions that give our clients access to reliable analysis ready data across their organizations, so they can quickly and easily get answers at the point of impact in the form of the decisions they make, and the actions they take.
Our consultants are leading experts in the field of analytics.
Folks like Pedro, with years and years of pragmatic, real-world experience, and experience advancing the state of the art. We're so confident in both our team and our methodology that we back our projects with an industry-unique 100% money-back guarantee.
Likewise, we’ve been doing this for a long time, over 20 years, now, 3500 plus clients, and 3000 successful projects.
We've worked across the spectrum from the Fortune 500 to the mid-market. No doubt you'll recognize nearly all of the logos on the slide there.
We solve business problems across virtually every industry and functional area, including the office of finance, sales and marketing, manufacturing, operations, HR, and IT.
Our team is both large enough to meet all of your business analytics needs, but small enough to provide personalized attention.
If you like what you hear, and you think you might be cut from the same cloth, you can look into joining the Senturus team. We're currently hiring talented and experienced professionals; you can see the job titles there.
You can drill into those a little more at senturus.com, at the link below, and send your resume to [email protected]
Again, another invitation to expand your knowledge: we have hundreds of free resources on our website, from our webinars on all things BI to our fabulous, up-to-the-minute, easily consumable blog.
You can find that again over at Senturus.com/senturus-resources
Our next upcoming event is Power BI Enterprise Deployment.
You can, again, go over to senturus.com to our events page and register for that. That will be on Thursday, June 24th, so in just a couple of weeks here at the usual time and channel.
And then, finally, I’d be remiss if we didn’t talk about our complete BI training offerings across the three major platforms.
We support Microsoft, Tableau and IBM Cognos.
We offer all the modalities, from tailored instructor-led group sessions to small-group mentoring, instructor-led online courses, and self-paced e-learning. We're ideal for organizations that are running multiple of these platforms, or who are moving from one to another.
We can provide training in these different modes, and can mix and match those, to meet the needs of your organization.
And the last slide before we get to the Q and A: we provide hundreds of free resources on our website, as I mentioned a little earlier, and we've been committed to sharing our BI expertise for over a decade.
With that, we're going to jump over to the Q and A, if you want to go to that slide there, Pedro. I don't know if you've had a chance to look at any of the questions, but there are a lot really pertaining to data modules.
First of all, how does the security work for data modules in terms of object security, and who will control the security for those data modules?
Right, OK. So that's several questions rolled into one.
Basically, within data modules you can secure relational data sources, for example.
If you want to secure a dimension and have certain groups only see certain regions, there are ways to do that through the Cognos namespace security groups. I have to say, though, that you don't have the complete, full-fledged row-level security feature set that Framework Manager has.
In Framework Manager, you could have security tables and enable very complex security requirements through macro substitutions and parameter maps. That kind of capability is not quite there yet in data modules; in fact, when we did the webinar on data modules versus Framework Manager, it was not there.
At a very minimum, you can secure the data module itself so it is not seen by other people. Inside the data module, you can secure levels of a dimension through Cognos security groups, and that is all there for relational sources.
One thing that is missing within the data module technology right now is object security. For example, I've got a product dimension table and I don't want a certain group to see it. I cannot secure that table in the data module so that when group A logs in they see it and when group B logs in they don't. That's not there yet, so we're hoping some of those things get changed; I'm sure enhancements to the more elaborate security features are already in the works.
But if you use a Framework Manager package as a source, one that has embedded row-level security, and you link a table in a data module to that package, it will respect the security of the Framework package. That's one example of why you would use a Framework package as the source of a data module: you want it to respect the security of that package.
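Conceptually, the row-level security Pedro describes is a group-to-rows mapping applied at query time. A minimal sketch in Python, assuming an invented group-to-region mapping and sample rows; in Cognos this logic is configured in the FM package or data module, not written as code:

```python
# Hypothetical mapping from security groups to the regions they may see
ROW_SECURITY = {
    "east_sales_group": {"East"},
    "all_regions_group": {"East", "West"},
}

# Sample fact rows (illustrative only)
rows = [
    {"region": "East", "sales": 500.0},
    {"region": "West", "sales": 700.0},
]

def visible_rows(rows, user_groups):
    """Return only the rows whose region the user's groups may see."""
    allowed = set()
    for group in user_groups:
        allowed |= ROW_SECURITY.get(group, set())
    return [r for r in rows if r["region"] in allowed]

print(visible_rows(rows, ["east_sales_group"]))  # only the East row
```

Linking a data module table to a secured FM package means this filtering happens in the package layer, so every downstream query inherits it automatically.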
Thanks, Pedro. There's a lot to unpack in that one.
Along those lines, when you bring an FM package into a data module, is it a reference or a copy?
It's a link, a reference to it. When you bring it in, it looks like one package, and then you can expand it and link to the actual table. So it's not a copy; we're not copying data into the data module. It's a reference, and you can expose it in a data module and run a query against that table.
It's going to run the query through the FM package, and then back to the database. It's going to do it that way.
Can you speak to the level of skills or expertise required, from the end user's point of view, to handle data modules? What does it take to create and maintain those?
Yeah, it varies. I've seen use cases where they don't even go to databases, because there are a lot of spreadsheets that contain a lot of good data. I saw a use case where they were using a spreadsheet, creating graphs in the spreadsheet, and e-mailing the workbook around.
This is a case where the user didn't need to have a lot of knowledge.
But they were able to, like I showed you, copy that spreadsheet into Cognos and create a data module off of that spreadsheet.
They simply exposed the spreadsheet, because it had all the metrics and dimensionality already there, and the user was able to create very nice dashboards off that very simple data module, which contains only one spreadsheet.
As capabilities and training grow, they can take that data module and add a dimension that isn't there. The spreadsheet has a product ID, for example, but doesn't have all the other dimensionality.
Well, then you could link that data module to your data warehouse, bring in the product table, and join it inside the data module to get that kind of roll-up capability.
So, definitely training.
There are some simple use cases, and as the end user's knowledge grows, they'll keep expanding upon that.
And new data modules can feed off other data modules. We didn't talk a lot about datasets, but data modules can also feed off datasets, which can be created by end users. So it can be as narrow or as broad as the end user wants in terms of using all the capabilities.
Yeah, and data modules were, I think it's safe to say, definitely born as a response to the market presence of products like Tableau, Qlik and Power BI that really put modeling in the hands of end users. So it's designed to make the sort of basic work Pedro was talking about pretty easy, and it's certainly significantly easier than, say, Framework Manager. It's meant to be easy to use for basic tasks, but training is definitely needed to take advantage of the more sophisticated capabilities of the product, which are evolving rapidly over time.
Can you bring in single or multiple packages and join them with data modules and/or with Excel spreadsheets? You obviously demonstrated you can bring in an Excel spreadsheet, but what about multiples of other things?
You can, and that leads into another discussion.
I think we have a couple of webinars on that. I really like datasets here, which we can leverage instead. You have to look at the use case: maybe the data in the FM package is very large, but you've done your analytics and you only need a slice of the fact table or dimension embedded in that FM package, and that slice is relatively small. For example, you might have a billion-row invoice line table, but you only need the last six months, or the last year at a higher level.
You can create a dataset off that FM package, which takes the data offline; bring that into your data module, and you'll get very good performance off that dataset.
Use the FM package as the source for a dataset that has been narrowed and shrunk, because you don't need access to all that data.
You’re not creating a very large report, you’re doing analytics, right?
I would recommend creating a dataset off that package, maybe off that dimension, and bringing it into your data module. Then you'll be able to leverage the high-performing nature of datasets and the in-memory capabilities of Cognos.
That's one example. But yes, you can bring multiple packages in and link to them. Just remember that the lowest common denominator for performance is how fast each package performs.
If it's performing poorly with reports because it's going against very large tables, it's not going to magically perform faster in the data module.
You have to think about your analytical use case. Like I said, maybe you don't need all that data. Are the reports you're doing really filtering things down? If you don't need access to all the data, just take a slice: a very good candidate for a dataset to be put into a data module.
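The dataset extraction described here is essentially "filter to a recent window, then roll up to a higher grain." A hedged pandas sketch, with a tiny invented sample standing in for the billion-row invoice line table; Cognos builds datasets through its own editor, not code:

```python
import pandas as pd

# A small sample standing in for a very large invoice-line fact table
invoice_lines = pd.DataFrame({
    "invoice_date": pd.to_datetime([
        "2020-01-15", "2020-11-03", "2020-12-20", "2020-12-28"
    ]),
    "product": ["A", "A", "B", "A"],
    "amount": [10.0, 20.0, 30.0, 40.0],
})

def extract_dataset(df, cutoff, level="product"):
    """Keep only recent rows, then roll up to a higher grain,
    mimicking a narrowed, summarized dataset."""
    recent = df[df["invoice_date"] >= cutoff]
    return recent.groupby(level, as_index=False)["amount"].sum()

# Only the last few months, summarized by product
dataset = extract_dataset(invoice_lines, pd.Timestamp("2020-10-01"))
print(dataset)
```

The result is both narrower (fewer rows) and coarser (higher grain), which is why a dataset like this performs well in memory even when the source package does not.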
I think you answered the next question, which was about what the performance is like.
The performance of the package is really going to manifest itself in the data module, and if you start adding things to it, you can probably expect some degradation, depending on how well you model that source.
Someone asked about webinars about datasets.
I actually don't think we've done any webinars specifically on datasets, but again, you can go over to the Resource Library.
And that’s very searchable, check it out. There’s a lot of good information.
We will put that on our radar as something to do down the road.
Good question. We did talk about datasets a little bit in some of the data module webinars, like Data Modules versus Framework Manager.
The last release, 11.1 release 7, made a great change to the dataset editor, which could be a potential candidate for a webinar.
So, good, good topic.
Well, we are at the top of the hour, so I want to be respectful of everyone’s time here.
First of all, Pedro, if you want to advance to the last slide: I want to thank our presenter, Mr. Ining, for a fabulous presentation on enabling self-service, and thank you, our audience, for taking an hour of your valuable time out of your day to join us here.
If there's anything we can help you with in business analytics, please feel free to reach us at the e-mail you see at the bottom there, [email protected] You can still pick up a phone; there's the 888 number.
I know mister Felten put a link there in the Chat if you want to speak to him directly about anything you might have.
And thank you for joining us today. We look forward to seeing you at the next Senturus Knowledge Series event. Thanks, and have a great rest of your day.