Using Cognos Analytics Explorations
The Hidden Gem for Surfacing Key Business Performance Drivers
Cognos Analytics Explorations is one of the best features in the Cognos platform and it’s an underused gem. Explorations lets you interact with and explore your data to uncover hidden relationships, find trends and identify business performance drivers.
We discuss and demo how to use the features and functionality of Cognos Analytics Explorations.
In this on-demand webinar, we take you on a test drive. We
- Introduce the Explorations environment
- Show what drives a sample data set
- Examine predictive strength of fields in data
- Show how to use comparisons to handle what-if scenarios
- Work with the Exploration Assistant to ask questions of your data
Armed with the insights Explorations helps surface, you can be proactive and predictive. With it, you can act more quickly to mitigate risk, drive strategy for innovation and improve your customer experience.
Trainer and Consultant
Patrick has 21 years of experience in training, business intelligence and data analytics. He is a Cognos expert whose product experience goes back to version 6. He is also versed in Tableau, Power BI, Actuate, Hyperion and Business Objects. His certifications include multiple programming languages, including Java and C++, and database certification (MS SQL).
Greetings everyone, and welcome to this latest installment of the Senturus Knowledge Series. Today, we’ll be discussing the topic of using Cognos Analytics Explorations.
Before we get into the core of the presentation, we invite you to use the GoToWebinar control panel to help make the session interactive.
We generally try to respond to questions while the webinar is in progress, or we will cover them in the Q&A section at the end.
We encourage you to submit questions via the control panel there. We do mute all the attendees out of consideration for our presenters.
A question we always get is, can I get a copy of the presentation? And the answer is an unqualified yes.
It is available on senturus.com if you go to the Resources tab and then head to the Resources library or you can use the link that was just posted in the GoToWebinar control panel.
Be sure to bookmark that while you’re there as it has tons of valuable content addressing a wide variety of business analytics topics.
Our agenda today will cover some quick introductions and then our main topic of using Cognos Analytics Explorations. We’ll do a brief overview, then additional resources, and please stick around at the end for the aforementioned Q&A. Diving a little deeper into what we’ll be covering today:
We’re doing an overview of Explorations and then, using Explorations to see data drivers,
looking at predictive strength, leveraging comparisons, working with the assistant and exporting Explorations to dashboards and reports. By way of introductions, we are so disarmingly attractive that we left our pictures out today so you could focus on the content. Our main presenter is Patrick Powers.
He is a trainer and consultant with over 20 years of experience in Business Intelligence and Data Analytics. He is a Tableau Certified Associate and Cognos report expert or Cognos expert, generally, whose product experience goes all the way back to version six.
My name is Mike Weinhauer, director here at Senturus and also play the role of MC for our knowledge series.
We always like to get our finger on the pulse of our audience, those of you who’ve been here before know how this works. So I’m going to launch our poll today which is simply how often do you use Explorations? Pretty straightforward, never, occasionally.
Or, frequently, please choose one of the above.
I’ll give you just a few seconds to answer. That should be pretty quick.
I’m going to close it out here and share the results.
So, the majority of you haven’t used it, a full 80%.
A few of you have done so occasionally.
That’s interesting. I would have thought that the occasionally number would have been higher and the never number wouldn’t be so large. So, good.
You’re in the right place to learn a little bit more about Explorations, and with that, I’m going to hand the floor and the microphone over to Patrick Powers. The floor is yours. OK, thank you, Mike. And that poll is actually quite interesting, and it’s something that I really want to talk about today.
You know, one of the goals with these webinars is to try to change how you’re doing things with Cognos.
Like Mike mentioned, even though it’s only March, this year marks 24 years that I’ve been using the Cognos product. The product itself is over 50 years old.
People don’t often realize that.
And, for a long, long time, you’ve all been using it the same way.
I would throw another poll out there, which would be interesting: how many of you just use Cognos essentially as a data dump tool?
You’re still using it the way it was under ReportNet, the way it was under version eight.
You’re still using it like it’s nothing but a report product, and there’s so much more in there today, especially now that we’re up to 11.1.7.
Things have been rapidly improving over the last five years in the product.
Yeah, when you’re talking about a 50-year-old product, five years is nothing in the lifespan, but remember, version 11 came out technically over five years ago.
Now, granted, it was Christmas, so really, we’ve just crossed that five-year point.
But five years and Explorations is one of those things where they have continued to add features and functionality.
Our goal with Cognos 11.1 is to really start getting people to self-service. No more writing SQL and reports. No more using it with transactional data sources as data dumps.
We want to provide a way for people to use it.
Like the name says, IBM Cognos Analytics, and Explorations is the first step to that: to actually analyzing data, not just being a passive recipient of data, not just having a scheduled Excel report that’s emailed to you.
No, going in there and really diving deep, finding trends, finding patterns, looking at things that are going on, and starting to understand correlations in the data.
No, correlation is not causation.
Of course, not.
But, if we can at least start to uncover our correlations, think about all the things your users are doing right now with Cognos.
I would wager that the majority of you are giving folks Excel report data dumps. You’re running a report, and they’re taking it off into another tool, whether that tool is Excel or Tableau, or something else.
And they’re going in there to do their analysis.
And the question that begs is, why? I’ve got a tool right here.
I’ve got a tool that I’m already paying for, that I’m already using.
Look at this.
So, hey, let’s see what we can do to actually use this for more than that.
Now, Trevor, you can actually download the samples from IBM.
I want to hit the question that just came up, noting that there need to be more examples of the new functions from 11. You’re absolutely right.
I think that the more webinars we do like this, the more people get to see it in the real world.
But, you can get a lot of good samples and a lot of good stuff even directly from the IBM Cognos portal and, the samples have expanded.
Now, before I get into the official demo, let me just show something along those lines for you, Trevor.
There are a lot more samples out there now than there were in the past.
There’s more than just GO Sales (query), GO Sales (analysis), and GO Data Warehouse. There are now samples by business function.
There are data module samples.
There’s a lot more in the newer versions of the samples that come up.
And that is in line with what you’re saying, yeah. And that will go a long way toward helping people not use it for data dumps, right?
So, to that end, let’s go ahead and let’s take a look.
Let’s take a look and thank you for that, Joe. You’re absolutely right.
The Accelerator Catalog has a lot of videos. We also have a YouTube channel, gang; shameless plug time, go to our YouTube channel.
And we show a lot of these things.
I personally have done a few videos out there on different stuff, not just Cognos but other things as well, where you can see some of this in action.
So, yeah, right here.
The Accelerator catalog, the samples, the how to videos.
Unfortunately, for a lot of organizations, getting to this point is a cultural shift and a business shift.
It’s not just about technology, and that obviously can be frustrating for all of you. I know in some ways I’m preaching to the choir.
And that can be part of the challenge, too: getting your organization to understand, getting folks past what we all know is the dirtiest, most offensive phrase in the English language.
That’s the way we’ve always done it.
Our YouTube channel, Mike, if you wouldn’t mind, throw into our YouTube channel URL, into the chat window for everybody.
That way, we know everybody can have it.
So, let’s take a tour. Let’s take a three-hour tour.
The weather started getting rough, the tiny ship was tossed.
Down here, I’m at the Welcome. I’m going to start an Exploration.
And I am going to pick a data source. As a matter of fact, I think I’ll do it this way; it’ll be faster. So, I’m going to be using some call center data.
And this is data that came from the samples, and I’m going to create an Exploration.
Where do we want to start?
OK, I can start by either typing in what I’m looking for.
I could pick one of these.
These are the fields that it has found. It’s saying, hey, these are the fields that I see as the most relevant and most important here.
You can start with one of these or you can type it in.
Now, the nice thing about Explorations is that we’re using IBM Watson, the built-in functionality of that.
So, I can type total customers satisfied or average queue time, and we’re going to see that when we get to the assistant, all right?
We’re going to see that when we get into the assistant later on.
For right now, you know what? Hey, dealer’s choice, jokers wild: show me anything.
So what it’s doing, it’s looking at the statistical relationships between columns in your data.
All right, so it’s literally looking at my data.
It’s figuring out the statistical relationship.
I’ve got to move this over here. Ha!
Don’t you love it when that GoToMeeting control panel just kind of gets in the way?
Now, Alan, this is all built-in.
This is, this is stuff that IBM has been building into the product over the last few versions, OK, so there’s nothing you need to do to use this.
That’s the nice part, this is all right there.
Sorry, gang. Like I said, my control panel got in my way there, and I ended up clicking on something I didn’t want to click. So, here we are. It’s gone through.
It’s looked at my data, and it said, hey, based on the data that I see, these are the most important statistical relationships.
All of these relationships are connected to total customer satisfaction.
And, Doug, to your question: it’s the same answer as Alan’s. This is all built-in. When I say we’re using Watson, this is all the stuff, the pieces of Watson, that they have integrated into Cognos, if you will.
OK, Patrick, if you’re going to answer the questions in line, maybe you might want to repeat the question just so the rest of the audience is clear on what was being asked.
Thank you, Mike. You’re absolutely right. Sorry, I forget not everybody can see them. All right. There’s, I brought up Watson and that seems to have confused a few people.
There were questions about whether Watson is needed as a separate tool now. What I’m talking about here, gang, is the pieces of Watson that IBM as a company has slowly started to integrate into Cognos as a product, OK? So you don’t need to do anything for this. There are no additional add-ons needed to do what I’m doing today.
So, statistically speaking, total customer satisfied has the most correlations to other columns of data.
These are the other columns of data and the thickness of the line is representing the strength of that relationship.
It’s representing the strength of that relationship, and we see that agent shift is coming in at 58%, along with agent shift ID (and agent shift ID again, so it’s in there twice), and everything else is sitting around 30 to 40%.
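Cognos doesn’t publish how Explorations scores these relationships, but conceptually it resembles ranking columns by their statistical association with a target. A minimal sketch in pandas, using made-up column names and synthetic data (not the actual sample dataset):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
queue_time = rng.normal(120, 30, n)
total_calls = rng.poisson(40, n).astype(float)
# Toy assumption: satisfaction is loosely driven by queue time and call volume
satisfaction = 100 - 0.2 * queue_time - 0.3 * total_calls + rng.normal(0, 5, n)

df = pd.DataFrame({
    "queue_time": queue_time,
    "total_calls": total_calls,
    "total_customers_satisfied": satisfaction,
})

# Strength of each column's relationship to the target, analogous to the
# line thickness in the Explorations relationship diagram
target = "total_customers_satisfied"
strength = df.corr()[target].drop(target).abs().sort_values(ascending=False)
print(strength)
```

With this synthetic data, queue time comes out as the strongest driver, the same kind of ranking the diagram is showing; the slider in the demo is effectively a threshold applied to these scores.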
Now, down the bottom here, there’s a slider.
And if I adjust this slider, I’m going to take this up to 80%, and I’m going to take this up to on the other end, 60%.
Now, I don’t have anything. I lose this.
There is nothing that has a 60 to 80% impact on this.
So I have to bring this back down. I’m going to bring it back down to 40%.
OK, these are the top three things.
These are the top three things that are impacting our total customer satisfaction, the agent and the shift of the agent, and the total number of calls.
All right, total number of calls.
Logically, that would make sense.
And this is what this is about. It’s about logic, right?
Let’s click on that Total Calls.
Here’s where this control panel is, once again, in my way. When I click on Total Calls, it produces some sample visualizations for me.
Here is just a straight card showing me the total number of calls.
Here, we see Predictive strength, and here, we see one that shows talk time and total calls by abandoned calls.
Let’s dig deeper.
Now, I’m in here deeper.
I’ve got five abandoned calls in total.
I see my talk time totals for this.
And, look at this on the right.
Look at all this on the right.
The details: this is where we’re starting to see some real analysis.
We’re starting to see that real analysis. I do want to address one question. Sorry, I had to move my control panel out of the way, so I’m not seeing all the questions as much.
But I will say this about the reason there were two agent shift IDs. The question was, why are there two agent shift IDs? That field is present in multiple tables.
So if you had revenue, for example, in three different fact tables in the package you were bringing in, you would see revenue three times if it was an impactor to the data.
OK, so that’s probably the last one I can answer inline, gang, because I need the screen real estate.
Having to run at this resolution makes it tough for me.
I’m used to running at a nice 4K resolution.
So here we see our total calls as a sum.
Our total talk time is a sum, and we see that reflected against abandoned calls, but it’s the details on the side, OK? It’s the details on the side.
And what we see, and what you probably have already noticed, is that talk time looks weird, all right?
Talk time looks weird because it’s defaulting to an aggregation of sum, which is not useful.
So let’s change that.
Let’s change the properties of our talk time.
So I can change the summarization to an average.
Now we’re seeing the average talk time.
And no, I don’t know off the top of my head. I believe this is in seconds.
So we see the total number of calls.
We see how many calls were abandoned, and we see the average talk time on this.
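The sum-versus-average distinction being fixed here is easy to reproduce outside Cognos. A small pandas sketch with hypothetical call-center rows (invented values, one row per call):

```python
import pandas as pd

# Hypothetical call-center data: one row per call
calls = pd.DataFrame({
    "abandoned":      [0, 0, 1, 1, 1],
    "talk_time_secs": [180, 240, 30, 20, 10],
})

# Summing talk time across rows: the unhelpful default in the demo
total_talk = calls["talk_time_secs"].sum()

# Switching the summarization to an average, as done in Explorations
avg_talk = calls["talk_time_secs"].mean()

print(total_talk, avg_talk)  # 480 96.0
```

Same column, two summarizations; which one is meaningful depends on the business question, which is exactly why Explorations exposes the aggregation in the field properties.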
If I click on this number five down here, the five is showing me our correlations.
What does total abandoned calls correlate to?
It’s allowing me to dive deeper into this.
I can see the correlation between abandonment and total calls, average cost per call, et cetera.
This is probably more insight in 15 minutes than most of you have seen on your data in a long time, right?
This is more than just looking at it in Excel.
OK, how do these correlate?
How are they correlating together? So I’m going to look at this second one.
The correlation of abandonment percentage and the average cost per call: it’s got a 96% correlation.
So when I drill into that, I’m now looking at predictive strength.
I’m looking at the predictive strength of each field and the correlation. And we’ll get into this deeper in the next demo.
I just want you all to see what’s starting to come up here.
The average cost per call with the abandonment percentage.
What drives that?
What drives us to abandon a call? What drives us to get rid of it?
And we see a sample visualization down the bottom.
Now, the nice thing is that we can jump back to any of these.
So if I go back to Data, Relationships, I go right back to Data Relationships. These are all still open.
Coming back here.
And from here, I can start a new query.
I can ask a question.
I can edit the original whatever I want.
So if I go back to the original and I get rid of this, hey, let’s take a look at one of the ones it recommended.
Let’s take a look at queue time. What impacts queue time?
We’re going to see a new diagram with new differences in thicknesses.
And we are going to see the queue time against total calls 47%, 47%.
This is the weight of the total calls against the queue time.
Now, some of these are irrelevant, and you’re probably thinking that. Well, is there relevancy to looking at first name? Probably not.
Is there relevancy to age?
Maybe; we’d have to dig in deeper. Same thing with supervisor.
You don’t have to use all of these fields, obviously.
We don’t have to use all of them.
This is just giving us the statistical relevancy and giving us a starting point.
Speaking of that, we can also start from scratch. So we can use this like the dashboarding tool.
So I’ll pick a new card, a single new card. We’ll do comparisons in a separate demo.
I can create my own visualization here.
I can drag fields over.
So if I go to my source, hey, here it is.
Here’s my source. So this is my data module in this case.
I can expand our customer.
I’m going to drag State over to the canvas.
And I’ll end up with a map of the US.
I’m going to expand out Wireless Plan from here.
I’ve got Monthly Fee, which is currently an attribute, and so I’m going to change it to a measure.
So I’m going to make this a measure.
And I’m going to make it an average.
And I’m going to format it as currency real quick.
Good there. I’m going to pick USD just to get it the way I want it.
Now, I can drag this to the location color.
So I could use this for self-exploration.
Try that again.
There we go. Ended up in there twice.
Get rid of one.
So here I can see that my monthly averages range from $45 to $49.87.
I can zoom in on a particular state if I want to.
If I wanted to look and say, wow, why are Wyoming and Maine so high, what’s going on? But in this case, I’m going to click on California.
I’m going to right click.
I’m going to go to Show By.
Oh look, here’s the hierarchy of the state, and it’s recommending these columns.
So I can change this to city.
I can drag that over and I can zoom in.
And I can see if I have any hotspots.
And I can use this to correlate it, and compare it to my total customer satisfaction.
So this is opening up more business questions.
Is my customer satisfaction impacted by the average monthly fee?
OK, I have not been looking at the questions for a second.
So I want to take just a quick pause.
I don’t think there’s anything really there, Patrick, that you have to address right now. I don’t see anything either.
There are just some questions around what kind of configuration and preparation you had to do. Nothing.
It’s kind of important, the idea that you don’t really have to do a lot of preparation, although the insights that you’re going to get,
you know, like most things, are going to correlate to the quality of the data. Exactly.
Now, again, gang, this came from an Excel XLSX; it was turned into a data module.
But, yes, garbage in, garbage out. Right.
And that, I think, is a challenge that a lot of folks face: you’re still trying to do business intelligence against transactional data sources. But, boy, oh boy, that could be a webinar all on its own. That’s a different discussion.
And I will address one particular question. There was a question about what version of Cognos I am using. I am in 11.1.7.
So, this is our Cognos environment, and it is on 11.1.7. And the question was, is this a dimensional data mart or relational? Relational is fine; let us say this is an XLSX. I’m not using a package either; the question was what package I am using. I’m using a data module that was created from an Excel file.
OK, so there’s no Framework Manager modeling. There’s nothing here.
And that’s the beauty of this.
If you expand out and you start giving people access to data modules, if you start giving people access to upload their own Excel files.
They can start using this tool to dig into their data.
Obviously, you’re not going to give that to everybody; come on, we know that, you know that. But if we can find those one or two stewards, those one or two ambassadors.
There was a question asking how do you show a wow moment to management?
I say, find those one or two people in your organization who are data junkies, who are like the rest of us who enjoy doing this, and let them go to town.
They will build you some visualizations, and they will build things that are impressive as heck. Look at what I did in 15 to 20 minutes.
I’ve been able to show you guys something that hopefully, has at least caught your attention, maybe it’s not completely wow.
But I’ve at least gotten you paying attention.
And that’s a start.
What I showed you earlier, just a minute ago, was the predictive strength card.
This shows us a measure of how relevant our data points are to each other.
We can set that straight.
This will show us our key drivers. So we can see things like, oh, shift ID really isn’t that important.
It just so happens that, because it’s a key in our data, it’s coming up. Or, you know, it’s not as important as I thought it was; it’s only a 10% driver, forget about it.
I’m looking at the things that are 70, 80, 90% drivers.
Now, these are IBM’s words on this: Cognos Analytics uses sophisticated algorithms to deliver highly interpretable insights that are based on complex modeling.
And that means it’s doing a lot of background stuff.
It’s going out there, doing k-means, doing clustering, doing these things on the back end, to get us to what we need. You don’t have to know which statistical test to run.
Right, I’m not asking you to suddenly become an R expert and understand everything about statistical analysis.
It’s a big black box.
And that’s OK.
Because, think about the user that’s going to be using it. This is going to be Peggy in Finance trying to figure out why last month’s invoices were higher, or what was impacting fuel costs.
We don’t expect them to be a statistical expert.
We expect them to drag and drop and be able to interpret data.
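IBM doesn’t document exactly which algorithms run in that black box, but since k-means clustering was mentioned a moment ago, here is a toy one-dimensional k-means sketch, an illustration of the technique only, not Cognos’s implementation:

```python
import numpy as np

def kmeans_1d(points, k, iters=20):
    """Tiny 1-D k-means: repeatedly assign points to nearest center, then recenter."""
    # Deterministic init: spread starting centers across the data range
    centers = np.quantile(points, np.linspace(0, 1, k))
    for _ in range(iters):
        # Assign each point to its nearest center
        labels = np.abs(points[:, None] - centers[None, :]).argmin(axis=1)
        # Move each center to the mean of its assigned points
        centers = np.array([points[labels == j].mean() for j in range(k)])
    return centers, labels

# Two obvious groups of queue times (seconds): short waits and long waits
queue_times = np.array([10.0, 12.0, 11.0, 9.0, 300.0, 310.0, 295.0, 305.0])
centers, labels = kmeans_1d(queue_times, k=2)
print(np.sort(centers))
```

The point, as the webinar says, is that end users never see any of this; the tool surfaces the resulting clusters and drivers, and Peggy in Finance just drags, drops, and interprets.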
So, here, this shows us our overall satisfaction.
This was one from an airport, showing us that our overall satisfaction is made up of signage, security, additional signage, and art on the walls.
These are the things that, believe it or not, people are paying attention to; people are looking at them and rating them and saying, hey.
How would you rate our airport?
As silly as it is: art, security, signage.
If you’ve ever been to an international airport, you know that signage is huge, because, gosh, I need to know where the heck I’m going.
We see the impact, and we see how things are rated.
Let’s take a look at that in ours right now.
So we had a card.
We had a card that showed the predictive strength.
OK, so I’m going back to my Explorations icon, and I’m going to go to the total abandoned calls card.
Again, down the left are things that are not really impacting.
They’re impacting, but they’re so low.
These are things that have little impact. Over on the right is our big, giant orange circle.
The combination of abandonment percentage and queue time, and the abandonment percentage and the supervisor.
Now that’s an interesting one there, right?
The supervisor: are there supervisors who are encouraging abandonment, or are there ones that are enforcing stricter queue times and therefore making our call satisfaction go down because of that?
And supervisor start date: are they a newer supervisor who is still very stringent and following the rules? Or are they somebody who’s been here for 20 years and is a little more flexible about it?
Yeah, if you’ve got a five-minute call, you’ve got a five-minute call, versus the new kid on the block who insists all calls must be kept short.
All right, these are things we’d want to dig deeper into. When I click on that orange circle, my heat map updates.
We’ll pretend it’s updating quicker than this. There we go, and we see our biggest square here, at this abandonment percentage to this queue time.
And we’ve got some circles down here, this lets us go through different views of this data, we had a heat map.
Here, we see a bubble.
We’re going through different views of that data.
I’m going to go all the way to the very last one.
And on this, this is my abandonment percentage by total abandoned calls, colored by age and sized by abandoned calls.
My color is age.
Who’s willing to sit on the phone, and who isn’t?
And I can dig deeper into that.
If I click on one of those, I get into a much deeper look at it. I see my total abandoned calls.
And again, I see additional correlations to this.
So this is starting to get pretty interesting here.
Does the age impact the total number of calls?
And we can see, look at this tiny little one up here: this is somebody 57 years of age. They average 6.5 abandoned calls.
The total is two. It’s unusually high.
Unusually high. Apparently, if anybody in this class is 57 years of age, you don’t like to sit on hold.
You’re more likely to abandon a call. That’s what the data’s telling me.
There’s a high correlation, and we see that over here.
We see that in our data: age slightly drives it, at 12%, and 48 is the most frequently occurring category.
These are the people who drop the most.
At 57, you’re most likely to drop.
But the overall frequency is at 48.
Somebody’s got to admit, this is kind of cool so far, right?
We’re seeing things about data that we haven’t seen before.
Some really neat stuff going on here.
Once again, before I go into the next section, I wanted to see if there’s any questions that are appropriate to answer right now.
There are a number of questions around the dataset, and I want to address them just in general.
There’s been some questions about size of data.
This is a bigger issue.
This is a bigger issue than just this particular webinar.
I could do this with a million-row dataset.
I truly could, assuming that my data has been put into a proper dimensional star schema, that I’ve got proper indexes on that data, that I’ve got proper fact tables, OK? Trying to do this against a transactional source without referential integrity, trying to do it against something that’s got full outer joins? No.
And that’s regardless of whether I’m using Explorations or building reports, right, gang?
I would have no trouble bringing up a dataset that was well designed, that was coming from a database that had a true star schema.
OK, I want to stress that the performance of Cognos or any business intelligence tool in general is always going to be driven by the structure and the layout of your data source.
There was another question about whether this could be done from an Excel file directly or from a data module. I could use it directly as a data source. Here it’s being done as a data module, however, to handle some of the joins and to be able to add in some calculated fields and things along those lines.
So, while an Excel file can be used for all sorts of things in Cognos 11, using it as a data module gives us the ability to join multiple Excel files, to join it to an existing package, et cetera.
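Joining multiple uploaded Excel sources the way a data module does is conceptually a key-based merge followed by aggregation. A hedged pandas sketch with invented stand-in tables (not the webinar’s actual sample files):

```python
import pandas as pd

# Hypothetical stand-ins for two uploaded Excel files
customers = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "state": ["CA", "FL", "CA"],
})
plans = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "plan": ["Basic", "Plus", "Basic"],
    "monthly_fee": [45.0, 49.87, 45.0],
})

# Join the two sources on their shared key, as a data module's
# relationship would, then aggregate the joined result
joined = customers.merge(plans, on="customer_id")
avg_fee_by_state = joined.groupby("state")["monthly_fee"].mean()
print(avg_fee_by_state)
```

In Cognos the join is defined once in the data module and every Exploration, dashboard, or report built on it inherits the relationship, which is the advantage over using a lone Excel file directly.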
It sounds like I need to do a webinar next on data modules.
Or, one on the difference between a star schema and a transactional source in Cognos?
OK, but hey, we’ve still got 20 minutes. Let’s finish this one out, right?
We can also do comparisons.
This allows us to put two visualizations side-by-side.
So here in this sample, I am comparing product categories, sales by product categories.
So I’m looking at product categories of coffee versus bakery, versus other things in my business, coffee, beans, et cetera.
Well, in our data, one thing we might want to compare is that monthly fee by state. Is California an anomaly? Is Wyoming an anomaly?
How does it compare to other states?
So, we can see whether there is a difference in that monthly average. So, let’s take a look at that.
I’m going to go up here to the top, and I’m going to pick a new card. But this time, I’m going to pick a comparison.
So, essentially, I’m going to get a two-column dashboard starting point, OK?
If I was doing dashboards, I’d have two side-by-side; if I was doing reports, I’d pick templates side-by-side.
So I’m going to compare the average monthly fee.
In both of these, I’m going to pick a column.
Because I want to compare this side-by-side.
I want to compare this equally, I’m back at my source here.
I’m going to go to Customer, and I’m going to put State into the bars drop zone.
All right, and I’m going to do that for both.
And I’m going to put State under Local Filters.
So for my right hand side, I’m going to pick Florida.
On my left-hand side, I drag State over to the local filters again.
I’m going to do California.
So now this one is including California.
This one is including Florida. From Wireless Plan,
I’m going to take that monthly fee, put it into length.
Do that for both.
Look at the skill set needed here.
Look at the skill set.
Anybody in your organization should be able to do what I’m doing right now, it’s really just a matter of dragging and dropping.
You have to have a good source.
You have to have a well put together package, a well put together data module, whatever it is.
To be able to do things. So now what I’m doing is putting Plan Description onto the color, and this allows me to compare, side by side, two different states by plan.
Is there an impact, is there a difference in the average monthly fee?
And we see, you know what, we’re pretty consistent.
State by state.
We’re pretty consistent.
So if somebody is in market A or in market B, they’re paying the same amount of money.
There’s none of this “we’re charging more because they’re here, charging more because they’re there.” But let’s look at one of the ones that we had an issue with.
Let’s take a look at one of the ones we had an issue with. So I’m going to… where am I on here?
I’m going to drop this off for a second.
All right, so now I’m looking across the board without a state.
Still pretty much the same going on here.
As for the filter, I could drag it to filter both.
So if I wanted to filter by a particular plan, if I wanted to filter both of them equally, I could use that as well.
I can also sync these: I can sync the selections, I can sync the axes, et cetera.
If there are situations where things aren’t 100% the same not a problem, I can change that.
And down at the bottom, I’ve got a nice text table to quickly look at things like: what’s the min? What’s the max? What’s the average?
And what we do see is that there actually is a slight difference in the average, just a very slight difference.
Not a lot, just a little bit, and that might help us uncover more data points.
All right, I’ve got two more to go, and we’ve got 15 more minutes, give or take.
One of the big ones that I want to show you next is the assistant.
Maybe I don’t know what’s in my data.
Maybe I don’t know where to start.
The embedded Assistant helps me with this, it allows me to get quick insights.
It allows me to figure out what is impacting my data, and I don’t know, maybe I think, I know.
So, I’m going to ask it a question.
Show me my product profit. Show me my average call time.
Show me my average income, where income is less than five K.
Tell me my average hospital stay time, where length of stay is less than three days, OK?
So I can use this for filtering, I can literally type in filtering.
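Each of those assistant questions amounts to a filter followed by an aggregation. As a purely illustrative sketch (the income values below are invented, and this is not how the assistant works internally), the “average income, where income is less than five K” question breaks down like this:

```python
from statistics import mean

# Hypothetical income values -- stand-ins for the field the assistant
# is being asked about, not the real sample data.
incomes = [3200, 4800, 5100, 2500, 4999, 7600]

# "Show me my average income, where income is less than five K":
# the filter clause is applied first, then the aggregation.
under_5k = [income for income in incomes if income < 5000]
avg_under_5k = mean(under_5k)
print(avg_under_5k)  # 3874.75
```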
I’ve got two options.
I’ve got a full panel, which gives me a bunch more text; it helps me, and it helps to navigate what we’re doing. Or the compact panel: hey, I know what I’m doing.
Let me just ask a question.
So, it depends on what I’m trying to do, what I’m trying to see.
Let’s take a look at the assistant.
Uh, so on the left here.
I’m going to go to the assistant, and this is going to open the assistant panel.
At the very bottom, the very bottom, moving my GoToWebinar control panel for a second.
We see: ask a question. OK, what impacts? And notice that it is trying to help me out here. What impacts queue time? Let’s see what it’s going to bring me back.
These are the fields that have a great influence on call time in the source, call center.
Also, hey, I found some other sources.
They have a queue time reference.
If you want to look at another source, you could.
I’ll try a different question.
Let’s try a different question.
So, agent: what’s going on with my agents?
Here, I have fields that are related to an agent.
Sorry, I had to scroll back up here. These are fields related to an agent:
Total customer satisfaction, the call date, education.
How do education and an agent play together?
Well, let’s take a look at that.
I can create a dashboard from this.
Look at that.
So, it created a dashboard, an entire dashboard, from the information about agents, with education being the primary focus.
So, here, I see my agents by name.
And I see their education level.
And over here, I see their start date.
I see the average queue time.
So if I were to pick one.
It changes these as well.
And I could save this dashboard and I could give it off to somebody else.
So, hey, let’s take a look at four different agents.
Here’s their total calls.
Here’s when they started.
Here’s their satisfaction level.
Now, that’s pretty cool.
All I had to do was ask a simple question.
Click one button.
And I got a full dashboard that I could save off and share with somebody else on the Education tab.
Notice, it created two tabs: one for Agent, one for Education.
Here’s my customer satisfaction based on my agent’s education level.
Now, they’ve added in a title.
I might pull that out.
Here’s the queue time based on education level. Here’s the talk time.
Trevor, I loved your question earlier about how to show a wow moment to management.
I’d like to think that this goes a long way toward generating that wow moment, and I hope all of you agree.
So, the last thing I want to show you.
Is that we can do this from a dashboard. We can do this from a story.
We can pin these to use in Dashboards or Stories.
So, if I wanted to, I can click on my pins.
Currently, I’ve got a couple of things pinned already.
For example, my talk time and totals, I’ve already got these pinned.
And I can go, and I can start a new dashboard.
I’m just going to keep the default one here for the purpose of this.
Go right to my pinned exploration, drag it over.
I don’t have to re-invent the wheel.
I can use this and I could use this in a story as well.
I could use any of the ones that I have pinned earlier for my explorations.
I can bring these over, and I can build my own custom dashboard to give to my users, and this would allow me to add in Filters. This would allow me to add in other sources.
This would allow me to do other things, as well.
OK, there was one question earlier that I want to address before I wrap up my portion here. There was a question that says, hey, could you show us the source?
Let me close these.
Go here to Team content.
Here’s the source file.
There was also a question, can a zip file be uploaded?
Yes, here is call center.zip.
OK, so, I could use call center.zip.
And do it right from there.
So, this is that source; it isn’t a zip file.
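For anyone who wants to try the zip upload route with their own data, here is a minimal, hypothetical sketch of packaging a CSV into a zip with Python’s standard library; the file name and columns below are made up, not the actual call center sample:

```python
import csv
import io
import zipfile

# Build a tiny, hypothetical CSV in memory -- a stand-in for real
# call center data, just to show the shape of an uploadable zip.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["agent", "queue_time", "talk_time"])
writer.writerow(["A. Smith", 34, 210])

# Pack the CSV into a compressed zip archive.
with zipfile.ZipFile("call_center_demo.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("calls.csv", buf.getvalue())

# Confirm what is inside the archive before uploading it.
with zipfile.ZipFile("call_center_demo.zip") as zf:
    print(zf.namelist())  # ['calls.csv']
```

A zip built this way can then be uploaded through the Cognos upload-files workflow like any other supported file.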
I don’t know specifically where this came from any further than that.
This is part of the samples.
It was taken in as a data module.
So, here it is as a data module.
If I were to open it up.
I can see everything that’s going on here, including the Hidden Fields, Abor ID, things along those lines, OK?
All right, it’s 2:53 for me.
That is what I have for you.
Patrick doesn’t leave us a ton of time, but we do have a lot of great questions, many of which have been answered, but many of which are still out there. So I’ll try to get through these quickly; stick around.
I think what you can glean from this is that Explorations is very powerful. It’s very powerful functionality that leverages the embedded Watson technologies to allow you to look at your data without a whole lot of preparation, run statistical analyses against it, gain insights, produce visualizations, and then, ideally, get some aha moments that you can leverage.
And those can be leveraged by anybody in your organization without a whole lot of data prep or configuration in Cognos; they don’t even have to be white-coat-type data scientists. So, if you’re interested in learning more about this, Patrick has a slide up right now: we are offering a Cognos Analytics Explorations class taught by Mister Powers.
It’s on March 10th and you can get to it via that link.
In terms of upcoming events, if you want to go to the next slide, we have Getting Started with Just Enough Data Governance.
That’s going to be at our usual time, Thursday at 11:00 AM Pacific time, 2:00 PM Eastern, on January 28th.
You can head over to, again, the Senturus website, to Events, and find that and sign up for it.
Real quickly about Senturus: we are the authority in business intelligence, concentrating our expertise solely on business intelligence for depth of knowledge across the entire BI stack.
On the next slide: we are known for providing clarity from the chaos of complex business requirements, disparate and ever-increasing data sources, and constantly moving targets in regulatory environments.
We’ve made a name for ourselves based on our strength at bridging the gap between IT and business users, delivering solutions that give you access to reliable, analysis-ready data across your organization, enabling you to quickly and easily get answers at the point of impact, in the form of the decisions made and the actions taken.
As you can see here, we have a full spectrum of BI services. Our consultants are leading experts in the field of analytics with years of pragmatic, real-world expertise, and experience advancing the state-of-the-art.
We’re so confident in our team and the Senturus methodology that we back our projects with a 100% money-back guarantee that is unique in the industry.
And we’ve been doing this for a while, upwards of two decades at this point, focused exclusively on business intelligence.
You’ll probably recognize a lot of those clients up there delivering projects across the Fortune 500 down to the mid-market across virtually every line of business functional area, including finance, sales and marketing, manufacturing, operations, HR and IT.
With over 3,000 successful projects, our team is large enough to meet all of your business analytics needs, yet small enough to provide individual attention.
We do invite you to expand your knowledge, and head over to www.senturus.com/resources.
There are hundreds of free resources there, including this webinar and others, on all things business analytics and business intelligence.
Of course, we’d be remiss if we didn’t bring up our comprehensive BI training in the three major analytics platforms: Cognos, as well as Power BI and Tableau.
It’s ideal for corporations and organizations that are running multiple of those platforms or moving from one to another, featuring tailored group sessions, one-to-one or one-to-few mentoring, instructor-led online courses, and self-paced e-learning.
We can configure that in any way to meet your organization’s training needs.
And then, the last slide here: additional resources. We have hundreds of free resources on our website and have been committed to sharing our BI expertise, as we have today, for over a decade.
So, we’ve got about three minutes here, I don’t know, Patrick, if you had a chance to kind of look through some of the remaining questions or new ones that might have popped up there.
I have. First off, I’d like to thank Trevor, who gave us the great compliment that we are providing the best experts and the best webinars, so thank you very much for that.
In general, I see that a lot of the questions are around the dataset.
Without picking a particular question: you do need to have a good, solid dataset. There was a question asking if you can create a dataset out of a complex package, and that is an excellent way to do this: take your larger data sources, like a data mart, and slice them up into smaller data modules so that your business users can work with them.
So yeah, you don’t need to use that 10,000-table data source for these things.
You can make smaller data modules. There was also a question about whether Framework Manager is still going to be around.
I don’t want to answer that because I’m not IBM. And I don’t think it’s appropriate for us to answer it.
From what we know.
Yes, it’s still going to be around.
But at the same time, I would encourage you to start looking at and leveraging data modules, not just as a replacement for Framework Manager, but as a tool for your power users, for those folks who do want to be able to do more, and especially your Excel users, OK? They’re typically already really sharp, really smart, so why not allow them to do some lightweight data module work? Why not bring them into the fold and get them as your cheerleaders?
It doesn’t hurt!
So, as far as the future of Framework Manager, I’m not going to comment, but I will say yes, you should be looking at data modules. And there was another question about data modules’ capabilities, which are constantly improving.
If you compare early versions of data modules to the 11.1 data modules, it’s night and day.
I personally have taken our Framework Manager class and tried to replicate it in data modules and I’m getting about 80 to 85% right now.
That’s a long way from where it was three years ago when I could barely get 40 or 50%.
OK, otherwise, I think that’s about all I can answer right now. There is a question on licensing which, Mike, you might be better able to handle: is a standard Analytics User license allowed for Explorations? Yeah, I saw that, and I think IBM really streamlined that.
And if you’re an Analytics user, you should have those capabilities, right?
You should have the full capabilities of the platform. But don’t quote me on that; you’d want to, you know, ask your IBM rep or whoever handles that in your organization. But I’m pretty sure that’s the way it’s set up.
And one last one I want to hit. Alan, you asked if we can show the data lineage. You know what?
Off the top of my head, I’m actually not sure if I can show the data lineage or not.
But you know what? I’ve got one here. I actually have an exploration.
Let’s just open it real quick. If anybody needs to leave, fine, feel free to leave.
Let me see if I go to my source.
I can go to properties, but I know what you mean by the data lineage, and I’m not seeing that option to show the full data lineage; it looks like I would still have to do that in reporting or in a data module itself.
I don’t have that, that standard lineage feature that you’re looking for here on this source.
Do you happen to know, Patrick? I saw there were a few questions about using TM1 or Planning Analytics. Do you know if that’s supported for Explorations? I don’t know, honestly; I don’t have that with me.
But I do know dimensional packages and relational packages are not a problem.
And there was also a question about whether it works only with data modules. No, it works with anything, so I could add another source. Just real quick here: I can add a standard GO Sales query data source in here and do analysis on this just as well. So, a standard package, standard thing going on.
I still don’t have that lineage feature that you’re looking for, though, even with a package, unfortunately. And as for Planning Analytics, I don’t know.
It looks like Explorations requires a Cognos Analytics Explorer license, so it’s not available for Viewer or Cognos Analytics User licenses. So, it may require something in addition to your standard user license.
Thanks, Carson, for chiming in on that.
So with that, and again, it is the top of the hour, if we can jump to the last slide. Actually, I want to thank my esteemed colleague Patrick for an excellent presentation there.
And thank all of you for taking time out of your busy days to join us on this Knowledge Series event. Hope you’re all staying well and safe and healthy, and we look forward to seeing you on the next installment of the Knowledge Series.
If we can help you with anything BI related, feel free to reach out to us via our website. If you actually still use a phone, there’s a number there, or you can always e-mail us at firstname.lastname@example.org.
Thank you very much. And we’ll see you next time.
And, OK, grandpa will dial the phone today, OK? Thanks, Mike.