Cognos Analytics has hundreds of configuration and setting options. Knowing their capabilities and how to fine tune them will help you to maximize the platform’s potential.
During this on-demand webinar, our in-house Cognos expert and practice lead, Todd Schuman shares lots of tips and tricks to improve the performance of your Cognos environment.
Topics covered in this high-performance webinar include:
- Hardware and server specifics
- Failover and high availability
- High and low affinity requests
- Interactive Performance Assistant
- Overview of services
- Java heap settings
- IIS configurations
- Non-Cognos related tuning
Practice Lead – Installations, Upgrades and Performance Tuning
Todd has over 20 years of business analytics experience across multiple industries. He also regularly connects with the IBM product development team and has been at the forefront of the Cognos Analytics upgrade and installation process since it debuted.
Q: How should we trim the size of the Content Store database? It seems that not all the report bits get deleted when a report is deleted using the Cognos web portal interface.
A: There are several ways to reduce the size of the Content Store database. Check out these resources:
- Creating a content removal content maintenance task – IBM Documentation
- Creating a retention rule update maintenance task – IBM Documentation
- Change the Connection Properties for the Notification Database – IBM Documentation
Q: Our dashboards run slow; do you have suggestions for how to speed them up?
A: Cognos Analytics 11.2 features dashboard loading times up to 800% faster than 11.0 and 11.1. Outside of that, data sets will be your best option to pre-load data to drive faster dashboard performance.
Q: Is there anything we can do to speed up the Cognos to Excel conversion? We see instances of reports running and completing from a SQL viewpoint but then wait a long time for Cognos to generate the Excel output.
A: There are many factors that can cause this to perform slowly. The complexity of the report output, the number of pages and rows, and the file destination can all contribute to a slow file output. If you are still having issues, please contact Senturus for help at [email protected].
Q: What’s the future of Analysis and Query Studios?
A: Both Analysis Studio and Query Studio are deprecated. They are still available in Cognos 11.2.x, but only for on-premise installations, and only when using Firefox ESR as the browser.
Q: Will Microsoft Edge ever support Query Studio?
A: This is extremely unlikely. IBM has ceased new development for Analysis Studio and Query Studio. Going forward, you should expect that Firefox ESR will be the only supported browser for these studios.
Q: Usually our reports are significantly faster than dashboards accessing same data. We are on Cognos 11.1.2. Did dashboards have performance improvements in newer versions?
A: Yes, dashboards in 11.2 are up to 800% faster.
Q: What is the future of Event Studio?
A: Unlike Query Studio and Analysis Studio, it has not been deprecated, but we are still waiting on a UI overhaul to bring it in line with the current UI of Cognos 11.
Q: How do we deal with JVM not releasing RAM?
A: It is most likely related to Garbage Collection. Please contact Senturus at [email protected] if you need help tuning this.
Q: Should Garbage Collection be generational or balanced?
A: We recommend leaving it set to generational unless you have a specific need.
Q: Will changing/increasing the Java heap settings affect performance for non-DQM reports?
A: Not really, but a poorly tuned Java heap in Cognos can have negative effects on the overall environment.
Q: How can I access IPA?
A: Under the run option in the reporting tool, there will be a slider to enable this feature.
Q: Is there any documentation for routing rules?
A: Information on routing rules can be found in Chapter 5 of the Cognos Administration and Security Guide.
Q: Reports that typically take 2-3 minutes to run take 12 hours until a “rolling restart” is done. Somehow that clears everything out, so the system is operational until later in the month when reports take a long time again. What is this report server dispatcher restart doing and is there anything that can be done to prevent the long report time?
A: This sounds like old processes are not being killed and they accumulate until the performance completely shuts down. A deeper analysis of your environment is recommended. Please contact Senturus at [email protected] for assistance.
Q: Is there any way to monitor data set size and storage?
A: Right now, viewing the properties of the data set is the only way to see this information.
Q: Can Cognos report output of Excel and PDF be stored in a separate database rather than inside the Content Store database?
A: Yes. We highly recommend setting up a Notification Database to separate your outputs and schedules/jobs.
Q: How do we copy/paste report specs and preview data in Event Studio with Edge? We could do this in IE and we miss it.
A: This is a limitation of the old Cognos UI not being supported in the newer browsers. Using IE is the only option right now.
Q: How can we put data from report queries in a dataset in 11.1.7?
A: In Cognos 11.1.7 you can copy and paste queries from your report directly into a dataset and load that data into memory. You can then update the report to point to the dataset as its source. This should remove any dependencies on the database and show an immediate performance increase.
Q: Is that 32GB RAM recommendation for a virtual or non-virtual environment?
A: That recommendation is for both.
Q: How can we tune Cognos to increase performance when using TM1 cubes?
A: TM1 tuning is a different type of tuning and mostly related to TM1/Planning Analytics and not so much Cognos Analytics. Please contact Senturus at [email protected] if you need help tuning your TM1/Planning cubes.
Q: Is it possible to install Cognos on the cloud?
A: Cognos can be installed on any server that has a supported OS.
Q: How can we resolve CCL-BIT-0005 Socket errors?
A: We need more information to assist with this error, contact us at [email protected] with the details.
Q: When the Cognos Content Store switches from primary to secondary, we have a lag/unavailability of application for 5 minutes, is this normal?
A: There is a lag during the failover. Five minutes might be on the high end of that lag. If you need any assistance, contact us at [email protected].
Q: What is the benefit of moving from CQM to DQM? What is the risk of staying in CQM?
A: DQM can take advantage of 64-bit architecture and provides better performance due to memory caching. IBM has no plans to deprecate CQM, but it has not invested in this older technology in many years.
Q: Is Report Studio a good tool to replace Analysis Studio?
A: Analysis Studio was supposed to replace PowerPlay but lacked some features and usability. It still worked well enough to interact with cubes in a similar way. Reporting can simulate a lot of the same functionality as Analysis Studio, but there is still a learning curve, and some of the functionality requires different approaches. Setting the reporting tool to run in preview mode and to show members in the metadata is a good way to help move users to this tool.
Welcome to today’s Senturus webinar on Cognos Performance: Tuning Tips and Tricks. Thank you for joining us today.
I’m happy to have everyone here. Before we get started, just a little bit of housekeeping.
First of all, on your GoToWebinar control panel, on the right side of your screen, you'll find a panel where you can submit questions. There's also a chat there, but if you have a question, please put it into the questions panel rather than the chat, and we will try to address some of the questions as the webinar is going on. We'll also have a Q&A session at the very end to go into more detail on those questions, and if we aren't able to get to your question during today's webinar, all of the responses will be posted to our website.
In order to obtain the deck from today’s session, you can go to the Senturus website, senturus.com/resources. The deck is not up there yet, but it should be there by the end of the webinar. And you’ll see a note in the chat when it is available. Just remember to go to senturus.com/resources to download a copy of the deck.
In terms of today's agenda, quickly: we'll do some quick introductions, go through some details of hardware and server specifics and architectural considerations, and then dig down a little more fine-grained into tuning of dispatchers, tuning of reports, and a handful of other useful tips related to Cognos performance.
Then, we’ll wrap up with a quick overview of Senturus and where you can get additional resources on our website, and again, we’ll have a Q&A session at the end of the webinar.
Our presenter today is Todd Schuman. He's our Practice Lead for Installations, Upgrades and Optimization here at Senturus, and he lives in Northern Virginia with his family. He's our Northern Virginia wine expert, and he has been involved in business analytics for over 20 years. He's been involved with Cognos pretty much from the beginning, so he really knows his way around installations, upgrades, and performance. He'll be delivering the meat of today's webinar.
I’m Steve Reed-Pitman, I’m Director of Enterprise Architecture and Engineering here at Senturus.
Before I turn it over to Todd, we just want to do a couple of quick polls. The first one is to get a sense of what version of Cognos everybody out there is using today. So we'll kick this off, and I'll let that run for maybe 30 seconds or so, for everybody to get in and share what version you're on today.
I'm seeing a lot of Cognos 11.1.x coming in, which is pretty common.
Now, I'm not seeing any Cognos 10, which is interesting. We periodically find a few customers who are still running Cognos 10.
Even recently, we had an inquiry from an organization that's still on a version of Cognos 8, which was pretty amazing, because it's been a long time since Cognos 8.x was in use.
So, I’m going to let this run just for a few more seconds.
It looks like we’ve got responses from most of the group.
So, I’m going to go ahead and close that out.
Let’s share the results here, real quickly.
So, again, nobody on Cognos 10, and actually very few of you are on the early versions of Cognos. The bulk of you are on 11.1 or greater, and almost 20% of you have already moved onto Cognos 11.2.
So, that’s great to hear. I know a lot of folks who have been really, really happy with the new interface.
So, let’s go on to the next poll before I hand it over to Todd. Our next poll is about the performance pain points you’re experiencing today in your Cognos environment. And this is a multiple choice poll.
So, you can choose as many of these as are applicable to your environment.
Got quite a few responses coming in already.
I see a lot of slow running reports, and quite a bit of system instability coming in, so, that’s pretty common.
And Todd is going to talk in some detail about some of the things you can do in terms of stability, particularly around memory and Cognos admin tuning settings that can help.
So, I'm going to let this run just for a couple more seconds.
Here are the results, a lot of slow running reports.
Some report failures.
Quite a bit of system instability, and "administration is confusing," which has been true from day one, and I predict that it will be true for as long as Cognos exists. So, with that, I'm going to go ahead and hide those results, and I'm going to turn it over to Todd for our presentation.
OK, thank you, Steve.
We've got a lot to cover today, so I'm going to get going through some of the material to level set on Cognos. I saw a lot of different issues people are having with Cognos.
Unfortunately, there's not necessarily a silver bullet that's going to fix everything. There are lots of different pieces to Cognos, and a single bottleneck anywhere can cause major effects.
Some of the most common software and hardware bottlenecks involve process threading: you've got the hardware to run reports, but you don't have your dispatchers tuned to allow them to be processed.
If you've got idle CPU and RAM available but people are stuck in a waiting queue while other reports run, you can up the number of threads you have, so that you have more reports running and less waiting.
The other side of the coin is you have too much going on. You’re running out of memory.
If you're using Dynamic Query Mode and the JVM, you've got Java garbage collection that's constantly being run and chewing up a lot of resources on your machine.
Outside of that, there are lots of other things that can cause it: a poorly performing database, badly designed models, or the report itself is poorly designed.
There’s just so many different areas that can be causing the issue in Cognos.
So, the best way to troubleshoot is to look through each point of failure and ask: is this the issue? If I run the SQL directly against the database, is it slow? Is Cognos running all reports slowly, or just certain reports?
So, a lot of information. I'm going to try to get to as many of these scenarios as possible, but, as Steve mentioned, we will have a Q&A session. So, if I don't get to your question, feel free to put a note in the Q&A panel and we'll try to get to it either live in the session, or we'll post the answers to the Q&A on our website.
Starting off with just hardware and server specifics: I usually like to post these, since they change with at least every major build.
So, 11.1 had higher prerequisites as far as how much CPU and RAM you need to run it than 11.0 did, and 11.2 has even higher than 11.1.
So, it's pretty easy to find this stuff on the web. Just search for "Cognos supported software environments" and they've got all the different versions, the major and minor builds, broken down by platform: Linux and Windows,
the different operating systems, all the different databases and drivers, and all that good stuff.
So, I put a link on here just in case you want to use it, but you can just search for this as well.
As far as operating systems, this is 11.2.
There are three main OS families, and there are different flavors of Linux that you can run, but essentially these are the ones that are typically used. Red Hat, and then Windows, are typically the ones that we primarily deal with. Windows Server 2019 is the latest version of that.
So, again, depending on where, what version you’re on if you’re on 2012, it might be a good time to move to 2019 at some point.
Maybe in 11.3 they're going to no longer support Windows Server 2012.
So when you do upgrade, try to get on the latest version possible, just so you don't get stuck with a supported version of Cognos but an unsupported OS. On to databases: again, a quick clarification. There are two different databases in Cognos's world. There's its own content store database, which holds information about the reports, your users, the folders, the models, the security, all that stuff.
It’s basically a database just about Cognos information, which is different than the database that you want to report off of.
So, for the databases you're reporting off of, the ones whose data you want to see in your reports, there are well over 100 different databases and versions that it supports. The databases that can contain the Cognos information are limited to just these four or five.
So you've got Db2, Informix, SQL Server, Oracle, and Azure SQL databases, which are more or less just cloud-based SQL Server instances running in the Azure portal.
Those are the ones that you're limited to when setting up a content store database. So again, not too much changing here, though they do occasionally remove some of the versions on the left side of this.
So if you're on SQL Server 2012 or Oracle 11g, those things have been around for a long time,
10-plus years now; it's time to start making plans to get off those older databases.
Then browsers. This is sort of the biggest news with 11.2; if you've seen any of our other what's-new webinars, or any of my webinars about upgrading, the big news is that Internet Explorer is no longer supported at all.
You will now need to switch to Edge, Chrome, Firefox, or Safari; those are really the only browsers that you can use.
For those of you who are still using some of the old legacy studios, like Query Studio, Firefox is your option now.
There may be some kind of extension plugins for Chrome and Edge that you can get, but nothing fully supported; you're not going to get any help from IBM on those tools. So if you haven't moved to 11.2 yet, just make sure you're prepared to lose IE support, and if you have the legacy studios turned on,
make sure you have a plan in place for that.
And then, finally, hardware, this is generic IBM sizing from that document I listed the link to.
They recommend, for a single environment,
and it doesn't matter how many users: four cores, 32 gigs of RAM, and about 15-plus gigs of disk space to install it, plus some temp space.
Again, you may be able to get by with less than this, but you're probably going to want more than this
if you're running a production environment with a lot of usage and lots of users. It's not necessarily one size fits all.
With virtual machines, some of this stuff gets hidden.
I’ll talk about VMs a little later.
You’ve got a motherboard inside the host machine, the physical machine that’s hosting that virtual machine.
And that’s got multiple sockets on there.
Each socket can have a CPU, and each CPU can have multiple cores. Each core can have threading. So it's just layers upon layers of complexity.
So when I talk about cores, we're talking about how many cores are on a CPU.
The term CPU core gets thrown around loosely, so I just put in a couple of slides to clarify that. So you want to have four cores and 32 gigs of RAM. The memory requirements have gone up significantly since 11.0.
And then disk space: if you're using any type of file uploads, you want to allocate about 500 megs or so per user. You can adjust and cap that, or just disable file uploads altogether, but if you do allow them, make sure you have plenty of space.
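To make those numbers concrete, here is a back-of-envelope sketch of the disk sizing described above: roughly 15 GB for the install plus about 500 MB of upload allotment per user when file uploads are enabled. The function name and structure are illustrative, not an IBM-published formula; treat the result as a rough floor, not a guarantee.

```python
# Rough disk sizing for a single Cognos server, using the figures
# mentioned in the talk. These constants are approximations.
INSTALL_GB = 15            # base install footprint plus temp space
UPLOAD_MB_PER_USER = 500   # suggested per-user file upload allotment

def estimate_disk_gb(users: int, uploads_enabled: bool = True) -> float:
    """Return a back-of-envelope disk estimate in GB."""
    upload_gb = users * UPLOAD_MB_PER_USER / 1024 if uploads_enabled else 0
    return INSTALL_GB + upload_gb

print(round(estimate_disk_gb(100), 1))  # 100 users with uploads enabled
```

If uploads are disabled the estimate collapses to the bare install footprint, which is why capping or disabling uploads is an easy way to keep disk growth predictable.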
Going to jump into some architecture setups.
So, with Cognos, there are infinite number of ways you can configure your environment.
There's obviously the standard approach: put everything on a single box, and that works fine.
Essentially, the main components we've got are a gateway, a dispatcher or application tier, and then a Content Manager/content tier component; those are the three main components. And as I said, you can put them all on a single box, and that works fine for a lot of people. It's very simple to set up and run Cognos that way.
As you continue to grow, you may want to introduce some load balancing and some failover.
This slide here is sort of a generic slide that's been around for as long as I've been working with Cognos. We've got different tiers.
We’ve got a presentation web tier with the gateway application tier.
We've got some dispatchers with content managers, connections to our security namespace, Active Directory or LDAP, and connections to our databases: our reporting database, and some OLAP cubes here.
So this is a very generic, basic distributed environment, where we put individual components of each Cognos installation on different servers, and they can help each other out, balance the load, and move things to different servers if things are busy.
So this is sort of a next-level, second stage of your Cognos experience: you're on a single server today, it's getting kind of busy, you've added more resources, and it's still not keeping up.
The next step would be to go to what's called a distributed environment.
And there are obviously different ways to do that, so we can introduce a thing called failover.
The Content Manager is the brains of Cognos; it works with the content store database. There's going to be one active Content Manager in any type of environment.
So, when that is active, we can add additional content managers to our environment.
And they are going to notice that there’s already one active and just sit in a stand-by mode. And they’re going to be constantly just sending small packets of information back and forth, just making sure that Content Manager is still active, and responsive.
If that server were to crash or that machine were to go down, the standby is going to become aware and then become the active Content Manager; that's called failover. So if you've got a Cognos environment with mission-critical information, or you just want better stability, this is one option: a secondary Content Manager. In that case, one can fail and you would still be up and running, with users getting access to the reports if they need to.
That isn't the same thing as high availability, which is where we have no single point of failure.
For that, we need at least two of each of these Cognos components, and we also need a web load balancer outside of our gateways.
So, in this environment I do have fail-over.
But if this web server were to go down, I don't have a way for users to log in; there is no URL for them.
And if I only had one dispatcher and it went down, I'm still not going to be able to run reports.
So, we’ve got several points of failure in this fail-over environment.
Whereas here, with high availability, I've got two gateways, I've got multiple dispatchers, and on these dispatchers I've also got Content Managers, or CM servers, one running as active and one running as standby.
So, any one of these servers can go down, and I’m still up and running. Users aren’t aware that anything has even happened. So, if you do need high availability, you’re going to want to have at least something like this.
You can get away with a minimum of three servers, putting all of the different pieces on each of them. If you're interested in keeping the server cost down, you can do it with about three.
But this is sort of an ideal architecture for that high availability configuration.
OK, getting into dispatcher tuning. This is, again, a ton of information here.
I think there was a lot of people on that poll were saying the administration is confusing, and I definitely agree.
The first time you go into these dispatcher tuning screens, there are about 87 line items.
That is just completely overwhelming.
I don’t know why they make this this huge long list.
I think it would make more sense to group them by service, or just organize them in a better way to make the list more usable.
But I'll go through it and explain it; a little bit of context helps, at least in my opinion.
Even though there are 87 different settings, the majority of these are repetitive, so you'll notice terms like high affinity, low affinity, number of processes, peak, non-peak, and then the name of the service.
You see the word batch in here; I've got metadata services, batch services down below, and I've got reporting services.
So all the main services have a high affinity setting and a low affinity setting, they have number-of-processes settings, and they have what's called peak and non-peak settings.
So you have about 8 or 10, maybe even 12, variations of the exact same groupings, just for different services and different combinations of these four.
Obviously, there’s some that don’t fall into this bucket, but if you kind of look at all the different services and the different options around them, they kind of repeat themselves.
So once you understand what these all are, it makes a lot more sense. I'll break those down now.
So, high affinity, essentially, means things that you want a sub second response time on.
A common example: if you run a report in HTML and have 20 rows on the first page, and you click page down, you want that page down to happen very quickly and move to the next page of rows.
You don't want to hit page down and just watch it spin for another 5 or 10 minutes, especially if you've got 20-plus pages you need to scroll through.
If you do see that you're spinning a lot on page down, it probably means you don't have enough high affinity connections available for the reporting service.
Other examples are page up, page down, top, bottom, running the report again, returning back to the main portal, and even sending things out, like if you want to save the report or e-mail it.
Those are all things that should be relatively quick for users, and they need to have a high affinity connection available in order to happen that way.
The low affinity requests are the ones that you would expect to take longer. I click a report, it's running, and I'm watching that wheel spin.
It's getting the data and building the report; that's report processing.
Even in the report authoring tool, when you first open the package, it has to go and scan that package and say, OK, I see you've got 10 different namespaces, and inside those you've got folders and query subjects and filters; all that stuff takes low affinity requests. Query validation, too: you test your queries or your calculations, and it has to go look at what you're doing and see what the data types are. Those are going to be things that are low affinity.
Even the administration tasks: testing your data source connections, making sure the user credentials and connection strings are correct.
Setting up folders, jobs, and schedules involves writing back to the content store database and adding new content in there.
These are all things that are low affinity. So think about it as: what are the things I would want my users to have very quick responses on, and what are the things that users would expect to take longer? Those are the low affinities.
Outside of that, we've got services. The dispatcher is going to start all of these by default, and you can turn some of them on and off in Cognos Configuration.
But you can see, here’s a long list of all these different services.
I don't know how many there are here, but 20 or 30 or so, and they occasionally add new ones to this list.
But the only ones you really need to concern yourself with for tuning are three: the batch, the report, and the query services.
And just to explain on those, batch services are essentially things that you run in the background.
So, when you have jobs or schedules, you go into a report and say, deliver the output of this report as a PDF and send it to me when you're done, and then you click OK. You're no longer actively or interactively watching the results of that report come back; it's now running in the background, and as soon as it's available, it will be sent to you. That's going to take up a batch service connection.
Other things like generating saved outputs or running triggers all happen in the background and use the batch service connections.
The reporting service handles the ones where I go into a report, I click the report, and it asks me to pick, you know, the fiscal year and the product line I want to see.
And I click OK.
And I watch that wheel spin, and then the results come back, and I'm paging up, paging down, possibly interacting with additional filters.
And if you have the interactive viewer turned on, interactions with the report: those are report services.
Then the query service is used with Dynamic Query Mode; it's similar to the report service, but now we're talking about the DQM engine.
It's also going to get into your Java heap and your JVM, which we'll touch on in a little bit.
So these are the three that I typically target with dispatcher tuning.
I want to look at the report service, the high and low affinities, and the batch and query services.
The other thing that gives you a different variation of all the tuning settings is peak versus non-peak.
Out of the box, these are the starting and ending hours of your peak period. It uses a 24-hour military time clock, so 0700, 7 AM, is your start, and 1800 hours, 6 PM, is your end.
You can tweak this to whatever you want, so if your day ends a little earlier or starts later, feel free to change these numbers. That way, any time you're deciding how many high affinity connections you want during peak for your report service, or how many you want for your batch service off-peak, those hours align with when your users are in the office, actually logging in and running reports.
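The default peak window can be pictured with a tiny sketch. Cognos applies these hours internally to decide whether the peak or non-peak connection settings are in effect; this helper just illustrates which side of the window a given hour falls on, using the out-of-the-box 0700 to 1800 values mentioned above.

```python
# Default peak window from the talk: 0700 (7 AM) through 1800 (6 PM)
# on a 24-hour clock. Adjust the constants to match your own hours.
PEAK_START = 7   # 0700 hours, default peak start
PEAK_END = 18    # 1800 hours, default peak end

def is_peak(hour: int) -> bool:
    """True if the hour (0-23) falls inside the peak window."""
    return PEAK_START <= hour < PEAK_END

print(is_peak(9), is_peak(22))  # mid-morning is peak, late evening is not
```

If your office day runs later, shifting `PEAK_END` to 20 (8 PM) in Cognos Administration keeps the interactive-priority settings active while people are still running reports.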
The general recommended way to set this up: during peak hours, give priority to the people who are actually running reports, interacting with the reporting service. Don't make them wait there, staring at the screen, waiting impatiently.
If you're giving all the priority to the schedules running in the background instead, those interactive users are going to get frustrated.
Typically, if you're not waiting for something, you're sending e-mails or doing other work, and when an e-mail from Cognos comes through five minutes later than expected, people are going to be fine; they probably weren't staring at their inbox.
So you want to give the report service the priority during the peak hours. Then, once those peak hours are done and users are leaving for the day and at home,
give all, or the majority, of the processing power to the batch service. Let those things run overnight, and for those early-morning reports that people want to see when they first get to work, make sure there are enough connections and it's tuned properly to allow that to happen, so that things aren't all backed up when people get into work
and you have no slots available to get anything done.
Another thing we can do is called routing rules.
This is a very underutilized Cognos feature that I like to always introduce.
You can set this up in multiple different ways. Essentially, if you have a distributed environment, you can have dedicated servers for specific things.
One of the initial ways I've seen it used is by people who want to convert to DQM and turn on 64-bit report execution mode. That means you have to have everything converted to DQM.
So if you've got any Transformer PowerCubes, or you've just got some reports or packages that haven't been converted, you can't turn on 64-bit execution mode unless you've got everything running on DQM.
If you don't, you can spin up another dispatcher, have that run in 32-bit mode, and have all packages that are still CQM get routed to that second server.
So it's one way to have traffic go to the servers that can handle that specific type of reporting engine, whether it's DQM or CQM.
The other commonly used way to set this up is if you've got sort of a mixed licensing agreement, which I've seen from time to time. For example, a company that's got 10 executives who really want a fast, high-powered Cognos environment.
They've got eight cores, they've got, you know, 128 gigs of RAM, they paid for 10 named user licenses, and they want to run on that fast server.
They've also got thousands of store owners across the country that they don't want to pay individual licenses for.
So they're going to use an unlimited-style licensing setup, where you're paying for the hardware, not the users, and they're going to scale that server down; it's going to be, you know, four cores.
And that can be routed based on AD groups, so that the executives get that blazing fast speed, while the people who just run the daily sales report at the different shops across the country
go to the smaller server. They don't care as much as the people doing all the analytics and slicing and dicing the data.
Another good way to do it is just breaking up the report and batch services.
If you're sending out schedules all day long and have schedules running forever, you can break up the batch and report services instead of keeping them on the same server: have a dedicated reporting server and a dedicated batch server, and free up
all that space on the reporting server. That way, the people who are actually interactively running reports aren't being bogged down or losing resources, and you have a dedicated server,
outside of that, strictly running schedules all day long.
So, again, those are just a couple of examples of routing rules, but they're really powerful, very easy to set up and configure, and there are lots of ways to benefit from them.
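To make the routing idea concrete, here's a minimal sketch of the matching logic. This is illustrative only — real routing rules are configured in the Cognos Administration console, not in code — and the group and server-group names below are made up for the example.

```python
# Hypothetical sketch of routing-rule matching: the first rule whose
# conditions match decides which server group handles the request.
# None acts as a wildcard, like leaving a field blank in the console.
RULES = [
    {"package_type": "CQM", "group": None,         "server_group": "cqm32bit"},
    {"package_type": None,  "group": "Executives", "server_group": "fastServers"},
    {"package_type": None,  "group": None,         "server_group": "default"},
]

def route(package_type: str, group: str) -> str:
    """Return the server group for a request, first matching rule wins."""
    for rule in RULES:
        if rule["package_type"] not in (None, package_type):
            continue
        if rule["group"] not in (None, group):
            continue
        return rule["server_group"]
    return "default"

print(route("CQM", "StoreOwners"))   # legacy package -> 32-bit server
print(route("DQM", "Executives"))    # executive AD group -> fast servers
```

The two scenarios from the talk map directly onto the rule list: CQM content goes to the 32-bit dispatcher regardless of user, and the executive AD group gets the high-powered hardware.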
Another topic that's useful for sizing is the concurrent user count. This is a very generic rule.
It may or may not be exactly right, but it's what IBM, and a lot of companies, are going to use when they ask how many concurrent users you have. Essentially, it works off groups of 100:
if you have 100 named users, consider ten of them active at one time, and one of them actively, concurrently using the environment right now — clicking on that report and running it.
So, for example, if you had 4,000 named users, you'd expect to have about 40 concurrent users.
It can go up and down depending on the time of day and how active your users are. It's a very generic, not perfect rule, but it's helpful for sizing. So figuring out your concurrent user number is something you're going to want to have for sizing, for sure.
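The 100 → 10 → 1 rule of thumb above can be sketched as a tiny calculation (a rough sizing heuristic, not an official IBM formula):

```python
# Generic sizing rule of thumb: of the named users, ~10% are active at any
# one time, and ~10% of those are actively running a request right now.
def sizing_estimate(named_users: int) -> dict:
    active = named_users * 0.10        # logged in / active at one time
    concurrent = active * 0.10         # clicking and running right now
    return {"active": round(active), "concurrent": round(concurrent)}

print(sizing_estimate(4000))   # {'active': 400, 'concurrent': 40}
```

This reproduces the example from the talk: 4,000 named users works out to roughly 40 concurrent users.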
And then, finally, you take all those different concepts — high and low affinity, which service you're tuning, peak versus non-peak, report versus batch — and you've got these different groupings.
So here's one for peak report service: you've got high affinity connections, low affinity connections, and the number of processes for that service during that period.
These just repeat for all the different services across the peak and non-peak hours, so you find all of these among those 87 or however many dispatcher settings it was, and change them to what you need.
So in this sample, I've got two high affinity connections, eight low affinity connections, and two processes for that service.
Again, I don't know why they do it this way — it's confusing to me.
But essentially, you take the number of processes and multiply it by each of the connection counts, and that's roughly how many threads you'll be running at one time. So if you were fully burdened and allowed this to happen with no batch, you'd be using about 20 total threads or connections to support this service.
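The arithmetic behind that example can be written out as a one-liner (a sketch of the rule of thumb described above, assuming each process gets its own pool of high- and low-affinity connections):

```python
# Worst-case simultaneous threads for one service:
# each process carries its own high- and low-affinity connection pools.
def max_threads(processes: int, high_affinity: int, low_affinity: int) -> int:
    return processes * (high_affinity + low_affinity)

# The sample settings from the slide: 2 high, 8 low, 2 processes.
print(max_threads(processes=2, high_affinity=2, low_affinity=8))  # 20
```

That 20 is the "fully burdened" number quoted in the talk, before any batch service threads are counted.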
I've been involved in lots of tuning exercises where people will jack up these numbers really high without really knowing what's going on.
Typically what happens, unless you've got a really beefy machine, is the environment gets very unstable. The CPU is hitting 100%, it's chewing up all your memory, and basically no one's getting what they want. And that's because people see, hey, everyone's waiting for reports —
I don't want to make anybody wait, let's add more connections and more slots. They open them up and let everything run, and the server just can't keep up. So it's much better to
have people wait a couple of seconds — or minutes, hopefully not — and have things at least get moved through the queue and processed before moving on to the next person in line, versus just saying, hey, have at it, everyone run reports whenever they want, and let Cognos figure it out. It's not going to handle that very well.
So you want to make sure you slowly bump up these numbers if you're moving away from the default values that come with Cognos, and don't make significant changes until you've had time to test them.
Just a slide here — again, this is underutilized versus overutilized.
So, if you're seeing this type of CPU — 2% or very low CPU utilization — and you've got reports sitting in a waiting status, then you're a good candidate to bump up those numbers: increase
the number of connections, possibly increase the number of high and low affinities, and see how that responds. Versus here — too much is happening, the server isn't able to get anything done, and it's most likely going to come crashing down pretty soon.
So you want to get somewhere in the middle.
You don't want to be at 100%, but upper eighties is fine, as long as you're not flatlining at 100% CPU utilization — you're taking full advantage of the server. So get somewhere in between and monitor it; there are tools for that beyond just watching it live.
Monitor your settings over time and see what the sweet spot is.
Getting into DQM and the Java heap — this could be a whole separate webinar; there's a lot here.
Essentially, there are settings at the very bottom of the dispatcher for tuning the Java heap.
There are default settings, and I usually recommend bumping these up. The current default initial setting, I think, is 1,024 megabytes.
I always bump it up to about four gigs. That gives you a much bigger window to start with, instead of the heap having to ramp up on the fly. So as Cognos is starting up, it allocates about four gigs for the JVM that runs the DQM query service.
And then there's a limit that you can set.
Again, that default is, I believe, eight gigs; I usually double that as well, unless you've got major limitations on how much RAM you have. Again, 32 gigs of RAM is sort of a starting point for Cognos — if you're using Dynamic Query Mode, you're going to want to bump up the RAM and bump up the Java heap limit to at least this.
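As a rough illustration, the change described above looks something like this. The field labels and default values are approximate — check the query service settings in your own Cognos Administration console for the exact names in your version:

```
# Cognos Administration > dispatcher > QueryService (values in MB)
Initial JVM heap size for the query service:   4096    # default ~1024
JVM heap size limit for the query service:    16384    # default ~8192
```

The idea is simply a larger starting allocation (so the JVM isn't growing the heap on the fly at startup) and a doubled ceiling, assuming the server has RAM to spare.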
There's a whole other topic I could get into — I'll probably do a blog post or a separate webinar on it at some point:
JVM tuning and garbage collection. Essentially, you can tune that as well.
There are ways to monitor your Java dump files and a lot of the JVM activity, and figure out what your garbage collection looks like.
Essentially, when garbage collection occurs, you want it to be very quick, and you don't want it happening frequently. So hopefully it happens every couple of hours and takes 10 to 15 seconds. If it's happening every couple of minutes, and taking a couple of minutes at a time, that's going to cause significant issues in your environment.
So, again, I'll probably do something more on that; due to time constraints I'm not going to go into it in depth today.
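The rule of thumb above — pauses should be short and infrequent — can be sketched as a quick check you might run over averaged GC log statistics. The thresholds here are illustrative only, taken from the talk's examples, not official IBM guidance:

```python
# Classify GC behavior from averaged stats pulled out of GC logs.
# Thresholds mirror the talk's rule of thumb and are illustrative.
def gc_health(avg_interval_s: float, avg_pause_s: float) -> str:
    """avg_interval_s: seconds between collections; avg_pause_s: pause length."""
    if avg_interval_s >= 3600 and avg_pause_s <= 15:
        return "healthy"     # e.g. every couple of hours, 10-15 s each
    if avg_interval_s <= 300 and avg_pause_s >= 60:
        return "critical"    # every few minutes, minutes at a time
    return "watch"           # somewhere in between: keep monitoring

print(gc_health(7200, 12))   # healthy
print(gc_health(120, 90))    # critical
```

Anything in the "watch" band is worth trending over time before touching heap or collector settings.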
I did put a couple of links out here if you're interested in learning more. So here's the DQM Tuning Guide.
It's about 100 pages of very light reading,
but it's got a lot of good information in there as well.
If anyone on the webinar today is using Dynamic Cubes, that's going to be a bit of a different sizing activity — there's an IBM Redbook link here for that.
Dynamic Cubes are a different animal; they are obviously memory intensive.
So you'll be looking at different sizing around those if you're using them.
Another good way to monitor a lot of this Java activity, if you're on Windows, is the Task Manager.
The standard Task Manager doesn't give you much information. If you look at it and try to figure out what's going on, you'll see there's some Java stuff running, but you won't know what the heck it's doing — it's difficult to figure that out.
So if you go into the Details tab of the Task Manager and right-click on the column headings, you can choose Select Columns, and go ahead and add the image path name and the command line.
That'll give you a lot more information.
You can actually see what each process is doing and figure out: OK, this is the application that's running Cognos, this is the JVM query engine, this is an actual report that's running.
You get a lot more information about what's running, how much RAM it's using, what the CPU utilization is. It makes it much easier to look at a live, active Cognos environment and figure out what's going on in case you're having stability issues.
It's helpful to see what's happening, so this is a very easy way to filter out the junk outside of Cognos and focus on what these Java processes inside Cognos are actually using.
Report-specific tuning is a different area outside of dispatcher tuning. Your dispatcher could be totally fine,
and it's the report itself — that's obviously a big issue a lot of the time. It's just a difficult report: either it's a ton of data, it's reporting off an untuned database or possibly a transactional database, or the model itself could be bad. There are lots of different things that can go wrong here.
So report-specific tuning is going to be a one-off type of exercise.
A lesser-known feature was introduced in 11.1, I believe — I forget the exact version —
the Interactive Performance Assistant (IPA). It's pretty helpful, in my opinion, especially when trying to troubleshoot slow-running reports. To get to it, you just go to your run options.
At the very bottom — where you have run HTML, PDF, Excel — choose Show run options. It brings up this window here, and there's a slider that I've called out: Include performance details.
You just check that slider, hit OK, and run it.
It does require the report to be run in HTML, and it'll give you an output.
I'll show you on the next slide what that looks like. If you've got
a list, a crosstab, a visualization, and a map all on the same page, it'll show you how long it took to render each one and how long it took to bring back the data for each one.
It also gives you the overall page statistics — the page took 375 milliseconds, and so on. So it can be helpful in showing you whether there's a specific object on the page or a specific query that's causing an issue, and where you want to look.
They did recently introduce this for dashboards; however, they made it difficult to find. For some reason there's a sort of hidden serviceability pane in dashboards — you open it with a Ctrl key shortcut in the dashboard view — which is helpful for tuning and troubleshooting dashboards.
At the very bottom, it shows a duration for all of the visualizations, and it gives you that same sort of little view of what each widget took to render.
So here's an example. This is one of the generic samples from Cognos.
I took it and ran the IPA on it, and you can see where the execution time went: how long this table took,
how long this packed bubble, this column chart, this line chart took — what are we looking at as far as execution times? There's a little information icon that you mouse over, and it will actually show you.
It gives you the breakout: here's how long it took to retrieve the data,
here's how long it took to render it. You get that breakout as well,
so you can find the bottleneck, and it also shows you total execution time.
So, again, this is a pretty quick report — I think it took about nine seconds to load — but if I wanted to get nitpicky, I could start looking at some of the lesser-performing objects, like this packed bubble chart here. It took almost two seconds, and I'd start looking at what can be done to speed it up — is there anything in here causing it to be a little slower? I can probably ignore these others; they're less than a second each, so that's pretty quick.
Again, if you've got slow-running reports, it's very easy to use this tool to at least figure out where on the report the issue is.
If it's the whole report in general, or there are just things you can't seem to tune, datasets have sort of become the go-to "get out of jail free" card, in my opinion.
They weren't where I wanted them to be until, I think, 11.1.7. Back in the old Cognos 8 and 10 days, there wasn't anything like this.
You had to either use a cube, possibly, or a summary table in the database, materialized views, things like that.
Now in Cognos, especially with 11.1.7 where they added the reporting interface on top of datasets, you can take your entire report query — with joins and all kinds of complexity — drop it right into a dataset, and say: go ahead and just load this into memory.
And then have your report driven off that dataset instead of the actual package it's using today, which is probably a live SQL connection or something like that.
It’s going to be loaded in memory. It’s going to be extremely fast.
And it’s almost like cheating.
You can literally copy and paste the query in there and load it into a dataset.
You can even schedule it with the Cognos scheduler — say you need the data to refresh every night, or even throughout the day.
Then point the report at that dataset, and it's just blazing fast. If you haven't used datasets in the past, they were helpful, but you were basically limited to pulling in a flat file and loading it in there.
You kind of had to merge them together with data modules
if you wanted to have multiple datasets. Now you can have all kinds of complexity in there, which I think is great.
So, yeah, definitely check out datasets for slow reports.
Then, going into other tips outside of Cognos dispatcher and report tuning:
I did see a question earlier in the Q&A panel about the content store database. This is another area that gets really large over time.
If you've been a long-time Cognos user, this thing can get massive.
Typically, a content store database is going to start off at two gigs or less.
As you continue to use the tool, and people build reports and run all kinds of stuff out there, it's going to grow.
I've seen these things get massive — like hundreds of gigabytes. And there are good ways to clean them up.
A lot of them are really easy to use, right out of the admin tool, if you go to the Configuration tab in administration.
If you've ever done deployments, you're at least familiar with this area — it's where the imports and exports are.
It's the fifth icon from the left,
and it's got the different content maintenance tasks in it.
There's a consistency check, which is very helpful. This is something you should probably schedule weekly or monthly.
What that does is go and check
that the people who have profiles and content in your Cognos environment are still with the company.
It scans and says: OK, I see an active profile for John Doe; I look for his Active Directory account, and it doesn't exist anymore — it's gone, that person is no longer here — so I'm going to go and clean up all the stuff he may have created.
He may have a bunch of old PDFs and saved outputs accumulated over time that are just sitting there idle, taking up five gigs of space in the content store — clean that up.
Content removal is another one. You can say: I want to remove any content older than a certain number of days or months — you could say the last six months, the last one month,
or the last five days — and have that run on a daily schedule, so it does a kind of rolling cleanup. Everything older than a certain number of days just gets removed, because there's going to be a lot of stuff out there you may not even know exists.
For example, one of our customers had a large content store database. We looked at it and found that someone had been running about a 400-page PDF every single day into their My Folders, and had no idea it was even running.
There were just hundreds of stacked versions of this report taking up tons of space.
So we ran a content removal, and we saw the database instantly lose over 100 gigabytes of space.
So it's just a great exercise: figure out what a reasonable amount of time is for someone to keep something. Anything they want to keep longer than that should probably be saved to the local file system or put somewhere else — it shouldn't live in the content store forever.
So definitely check out content removal. Very similar to that are the retention rules, where you can say users can only keep X number of versions, and anything beyond that gets cleaned up.
So check out these maintenance tasks — very user friendly, easy to run — and you can set them up and schedule them so they continue to run into the future.
Another nice, lesser-known way to tune your content store database — if you're a heavy scheduler, job, email-blast type of customer that's constantly running stuff — is to separate all that activity out of the content store database into its own notification database.
You do this in Cognos Configuration.
It's very similar to setting up an audit database: in the Notification area, you can just right-click and add another database for notifications, and it will pull all that extra junk out of the content store into its own database. That way your content store database is less bloated and has a lot less to process.
It's got its own dedicated notification database for all your schedules and jobs, so that can save you a lot of time and also put less strain on your content store while it's busy doing other things.
Then, finally, just a quick pitch here on the Migration Assistant. We do have a tool.
If you're having trouble reviewing your entire environment — figuring out what's what across hundreds or thousands of reports you can't even keep track of — we've built a tool called the Migration Assistant.
If you'd like to learn more about it, check our website.
Our knowledge base has a couple of demos on it. It can give you a full inventory, as well as audit usage, and give you full lineage and all the information you could possibly want about your inventory in Cognos. It can also help with migrations and things like that.
Another area you can use to clean up your content store database is in the installation directory. Not a lot of people know about this, but there's a bunch of scripts that Cognos provides.
They're in the directory
I've listed here. If you're using a standard installation directory, it's Program Files, IBM, Cognos Analytics, configuration, schemas, content. Then, for each database type, you'll see a folder — SQL Server, Oracle, DB2 — each gets its own specific folder, and you've got all these different scripts in there. A lot of them I wouldn't mess with, but some are very helpful, especially the size-profiling one.
Take the SQL Server one — obviously the name changes a bit for each database. You point it at your Cognos content store database and run it, and it's going to give you a really nice output that shows you counts by all the object types: how many queries, reports, dashboards, packages, folders, users — all that information.
It's also going to give you a really nice breakdown of the outputs —
those saved output files I talked about. For any output files that are sitting in the content store database, it will actually group them into different size buckets: these are less than 500 KB, these are 1 to 10 MB, 10 to 20 MB, up to massive sizes like over 100 MB — and show you how many of each type are in each bucket and what the total
space is, and all that. So you can really figure out what a mess you have.
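To illustrate the kind of bucketing that size profiling does, here's a hypothetical sketch in Python. The bucket edges are illustrative, not the script's exact ones, and in reality the grouping happens in SQL against the content store:

```python
# Group saved-output sizes into size buckets and total each bucket,
# roughly what the size-profiling script reports. Edges are illustrative.
BUCKETS = [
    (500 * 1024,    "< 500 KB"),
    (10 * 1024**2,  "500 KB - 10 MB"),
    (100 * 1024**2, "10 MB - 100 MB"),
    (float("inf"),  "> 100 MB"),
]

def profile(sizes_bytes):
    """Return per-bucket counts and total bytes for a list of output sizes."""
    summary = {label: {"count": 0, "bytes": 0} for _, label in BUCKETS}
    for size in sizes_bytes:
        for limit, label in BUCKETS:
            if size < limit:          # first bucket whose upper edge fits
                summary[label]["count"] += 1
                summary[label]["bytes"] += size
                break
    return summary

outputs = [120_000, 3_000_000, 250_000_000]   # three saved outputs
print(profile(outputs)["> 100 MB"]["count"])  # 1
```

A profile like this makes the "one 400-page PDF saved daily" problem from the earlier example jump out immediately: a huge count and byte total in the top bucket.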
Then take steps to use those other scripts to clean that up, and then review and see if it's getting cut down to where you want it to be.
Between the content maintenance tasks and these built-in scripts, you can really help tune your content store database.
There are other areas besides content as well — there are notification scripts, specific to schedules and saved outputs, that you can use to clear out your schedules and their outputs.
There's an NC drop schedules script, and another one — not related to tuning, but just a helpful one in here that I always like to point out — is the add system admin member script, for if you've ever been locked out of Cognos.
In a brand-new Cognos environment, there's an Everyone role that gets put in the System Administrators group. When you set it up the first time, you add yourself or your Azure AD group in there and remove the Everyone group, so that not everyone has access to the admin area.
If, for some reason, you removed it, or you need to get into Cognos and you've been locked out, this script is here. You can just run it against the content store.
It runs in one second. It adds that Everyone role back in, and then you can get back into the admin screens of Cognos and make those changes.
So, just a helpful script as well, one I've had to use from time to time when somebody got locked out.
Um, virtual machines.
This is a slide I've had in my deck for a while, from a time when traditional physical architecture was still pretty prevalent.
I can't even remember the last time I actually worked on a physical machine, versus a VM or a cloud-based server, which is essentially also a VM.
The setup here is that we've got smaller virtual machine images running on a larger host machine.
This sounds great, and is great for the most part, but there are some things you need to consider — specifically, dedicating versus over-allocating those virtual machines. Over-allocation happens all too frequently.
I just picked some random numbers here, but assume the virtual machine host — the box that actually has the hardware we want to carve into these smaller chunks — has a total of 20 cores and 96 gigs of RAM.
People will go ahead and chunk that up into a bunch of 8-core, 32-gig RAM VMs, so it now looks like we've got 32 cores and 128 gigs of RAM, and the VMs will share these resources. Some applications are fine with that — they don't need the resources all the time, so they'll give them up and pass them on.
Cognos does not share well at all.
It will not respond well in an environment like this, so you always want to make sure you have dedicated resources, and that any type of resource pooling and sharing is disabled for Cognos VMs.
Cognos is going to latch onto that CPU and RAM and not let go. And if it needs more and doesn't have it, it's going to be unstable and most likely cause some failures.
On top of that, with a VM, you want to plan for about 5-10% virtualization overhead on top of what you're allocating.
And also, a VM is only as good as the host it's on.
I've done installations and tuning on Cognos servers that say they've got eight CPUs and tons of RAM, they're on the latest version of Windows, and it's just painfully slow. Then, after some simple planning, they say, OK, we'll move it to a new host machine — and you wouldn't believe the speed difference for the same image moved to a new host.
I mean, it was incredible: the loading screen was taking 30 seconds to even log somebody in on the old one, and on the new one it was less than a second.
So you can't really trust the specs that you see
through the OS on the VM. It's going to lie to you — it's all virtual CPUs, and you don't really know what you're working with.
So the issue could be the host. You could have that Cognos environment tuned perfectly, the number of CPUs and the RAM all good,
and it's just slow as dirt. Most likely that means it's on old hardware — an old host machine.
You upgrade that, or move to a more modern, enterprise-level host machine, to get it running the way you want.
Then, last but not least, modeling. For all that stuff we talked about today — you could have the best hardware, beautiful tuning, everything perfect — if you've got bad data models, that's going to cause issues: bad join logic, circular joins, Cartesian joins. It doesn't matter how well you tune your Cognos environment; it's going to have issues. So you always want to make sure you have a well-designed model.
If you're reporting on a transactional system, that's also going to be a huge issue.
Those systems are built for transactions, not for reporting. That's the whole purpose of data warehousing and dedicated reporting systems — you don't want to be reporting off these transactional systems.
There are ways to handle that,
but it's going to be a huge issue, and it's not something Cognos can fix for you.
If you're interested in this topic, Patrick Powers — our own Patrick Powers — is going to be doing a webinar on it on June 23rd.
So make sure you sign up for that; check our calendar for upcoming dates.
But, yeah, there's just a ton of information here. I tried to touch on as much as I could in the time we had. So I'm going to turn it back over to Steve, real quick, for a couple of slides, and then we'll turn it back over to you guys for Q&A.
All right, thank you, Todd. Just a couple of quick things here. I just wanted to share the things that we at Senturus can help you with in terms of improving your performance. There are a few different paths we can help out.
We do offer mentoring, where Todd or another one of our experts works one-on-one with your staff to address any performance issues in your environment. We also offer health checks and assessments, which provide a comprehensive overview of your environment and recommendations for how to improve performance. We also have training resources available; we offer training specifically in Cognos administration that would be useful for your team.
Please reach out to us for more information.
Beyond that, as I mentioned at the beginning of the webinar, you can visit our website for lots of additional resources.
We have blog entries, previous webinars, slide decks from all those webinars — just a whole bunch of useful information to help you make better use of your Cognos environment. So check out senturus.com/resources for that.
We have a few upcoming events.
I wanted to share that we have a Power BI Power Automate webinar coming up on May 19th. So if you happen to be operating in a hybrid environment, as I know many of you are nowadays, and you're interested in Power Automate, join us for that. We also have a virtual conference with one of our partners later in May — check that out.
Last but not least, in June we have a webinar presented by our own Patrick Powers on data models and data modeling — how a good data model makes all the difference in a great BI deployment.
Just a quick bit of information about Senturus, if you don't already know about us: we concentrate on BI modernizations and migrations across the complete BI stack.
We do provide a full spectrum of services, including training and implementation.
We can be your resource for Cognos, Power BI and Tableau to accelerate your BI modernization. So please reach out to us if you have any questions or need help, not only with Cognos but with any hybrid BI environment you're working in.
We have been in business for a long time — 21-plus years, 1,350-plus clients, thousands of projects — a lot of experience and a lot of exposure. Many of you on this webinar are already customers.
Happy to have you, as customers, and always happy to onboard new customers and help you out with your environment.
We're also hiring — a quick note here, if you are interested in the possibility of a role here at Senturus. We're currently looking for a management consultant and a senior data warehouse and Cognos BI consultant. If you have the right skill set and the interest, reach out to us through the careers page on our website. You can also send your resume to [email protected]
And, with that, I want to jump into the Q&A. We’ve got a lot of questions in the questions panel, so I’ll just apologize in advance. I know Todd won’t be able to get through all of these, But we’ll try to answer a few of the most common questions we see here, and, again, there will be a set of answers provided later after the webinar to the questions we’re not able to get to. And if you have a particular question or issue, that’s a little broader than can fit into the question panel, don’t hesitate to reach out to us at [email protected] we’d be happy to set up a free consultation to talk through the questions.
And with that, I’m going to turn it over to Todd, and I know we don’t have too many minutes left here. So take it away.
OK, I see a question from Mike Armstrong about monitoring dataset size and storage.
In the dataset's properties you can see some basic information — I think how long it took to load, the number of rows, things like that.
So there's some basic information there.
Outside of that, there's not a whole lot available currently in Cognos Administration for looking at that stuff.
So I'm hoping they add more capabilities around that in the future.
Let’s see what else we’ve got here.
There was a question about the content store database.
It seems that not all of the report bits get deleted when a report is deleted using the Cognos web portal interface.
That may be what I touched on earlier, that we do have those cleanup tasks, you know, content removal, consistency checks.
I’d have to see exactly what’s not getting deleted, but if you do delete something and then run the consistency checks and some of those retention and content removal tasks, they should clean up a lot of the content that could be taking up space in your content store.
So I’d recommend starting with that.
If it’s still not doing what you’d expect, definitely reach out and let us know; we’d be happy to take a look and figure out what’s going on there.
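Conceptually, those retention maintenance tasks decide which saved output versions to drop based on a version count or an age limit. Here is a minimal Python sketch of that decision logic; it is an illustrative model only, not IBM’s implementation, and the function and parameter names are assumptions:

```python
from datetime import datetime, timedelta

def versions_to_delete(versions, keep_count=None, keep_days=None, now=None):
    """Model of a retention rule: return the output versions to remove.

    versions: list of (version_id, created_at) tuples.
    keep_count: keep only the N most recent versions.
    keep_days: keep only versions newer than this many days.
    """
    now = now or datetime.now()
    # Evaluate newest-first so rank 0 is the most recent version.
    ordered = sorted(versions, key=lambda v: v[1], reverse=True)
    doomed = []
    for rank, (version_id, created_at) in enumerate(ordered):
        over_count = keep_count is not None and rank >= keep_count
        too_old = keep_days is not None and created_at < now - timedelta(days=keep_days)
        if over_count or too_old:
            doomed.append(version_id)
    return doomed
```

For example, with three saved versions and `keep_count=2`, only the oldest version would be marked for removal; the actual cleanup in Cognos is done by the content maintenance tasks in the administration console, not by hand-written code like this.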
Someone asked about Analysis Studio. That is still in the product, but in 11.2.x it is Firefox only; that’s the only browser that supports it now.
It’s been marked for deprecation for a long time, though, and 10.2.2 was sort of the last fully supported version of it.
So it’s been 7 or 8 years; those are tools that need to be replaced at some point.
Again, there’s not an exact equivalent in the Cognos 11 suite that does what it does, but you can get pretty comparable results with crosstabs and some of the settings you can turn on in the reporting tool, or just by using the dashboarding tool.
Those do interact with cubes and things like that.
So there’s a little bit of a learning curve, but there are options to replace the studios.
Will it be possible to have Microsoft Edge support Query Studio? I don’t think so. Again, IBM has been trying to retire that product for a long time now.
I think they know there are still people depending on it, but they don’t want to make it any easier or more accessible.
So again, I think you should probably make plans to move off it and just bite the bullet, because it’s not going to get any easier as time goes on; it’s only going to get less support.
Event Studio is a bit different.
I’m hearing talk about what they’re possibly going to do with it. It’s not deprecated, it is still supported; however, the interface hasn’t been updated since Cognos 10.
I keep hearing rumors that they’re going to redesign the whole UI, because it is, in my opinion, a really nice tool that a lot of people depend on.
It just needs a front-end refresh. So, nothing concrete, but I know they do want to eventually bring it into the new Cognos interface, I believe it’s called Glass, as a more built-in piece of that Cognos 11 front end, the Glass UI.
So I would definitely keep an eye out for a blog post about that once we get more information.
Are reports significantly faster than dashboards accessing the same data? If you’re on 11.2, dashboards have improved performance.
11.2 actually had a major dashboard performance enhancement; they threw out some number like 700% faster dashboard loading.
So if you’ve migrated to 11.2, I think you have to run your dashboard one time; it will load it in a certain way, and then you can save it.
Then that dashboard should load very quickly, whereas in the past you’d kind of watch things spin and each widget would take its time to display the data.
Now it comes up very, very quickly. So if you’re looking at dashboard performance, the 11.2 upgrade is going to really pay large dividends just by installing the software.
Someone also asked about routing rules. The documentation is definitely out there if you Google it, and reach out if you have more questions about it, but I am a big fan of the routing rules.
I really like to create custom solutions with them.
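As a mental model, routing rules are an ordered list evaluated top to bottom, and the first rule whose conditions all match the incoming request decides which server group handles it. Here is a toy Python sketch of that first-match evaluation; the group, package, and server-group names are made up, and this models the idea rather than the actual Cognos dispatcher code:

```python
def resolve_server_group(request, rules, default="(any server)"):
    """First-match routing: walk the rules in order and return the
    server group of the first rule whose set conditions all match."""
    for rule in rules:
        # A condition of None means "any value", so only check set conditions.
        conditions = {key: value for key, value in rule.items()
                      if key != "server_group" and value is not None}
        if all(request.get(key) == value for key, value in conditions.items()):
            return rule["server_group"]
    return default  # no rule matched; any dispatcher may serve it

# Hypothetical rules: Finance users go to one server group,
# requests against the Sales package go to another.
rules = [
    {"group": "Finance", "package": None, "server_group": "FinanceServers"},
    {"group": None, "package": "Sales", "server_group": "SalesServers"},
]
```

Because evaluation stops at the first match, rule order matters, which is exactly why the admin console lets you reorder routing rules.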
To access the IPA, going back to that slide, it’s under the run options.
So if you are in the report editor and you click the little play button that lets you run in HTML or in PDF, the last option should say run options; click on it and you’ll get this window right here.
And it’s this little slider here, include performance details: just turn it on, click OK, run that report in HTML, and you’ll get each one of these little execution time boxes underneath all the objects, and then a page subtotal and a total at the bottom here.
So it’s pretty easy to turn on, and it automatically goes away once you exit the report, so you don’t have to worry about it staying in the report and users seeing these numbers.
There’s nothing to undo; it’s just a session-specific setting.
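The numbers IPA shows are just per-object render times that roll up into page subtotals and a report total. A tiny illustrative Python roll-up of that idea (the page and object names and the millisecond figures here are made up, not actual IPA output):

```python
def summarize_timings(object_times):
    """Roll per-object execution times (ms) up into per-page
    subtotals and a report total, like the boxes IPA displays."""
    pages = {}
    for page, _object_name, millis in object_times:
        pages[page] = pages.get(page, 0) + millis
    return {"pages": pages, "total": sum(pages.values())}
```

Reading the output this way makes it easy to spot which single object, say one slow list or chart, dominates a page’s total time.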
We’re just a few minutes over here, Todd, but there’s one question here from Peter about saving report outputs into a separate database rather than the content store; can you speak to that? I know you can save outputs to the file system.
Yeah, the file system, you can save to it, which is a great way to do it. That way, one, it’s accessible to people outside of Cognos.
They don’t even have to log in to Cognos, so it uses hardly any resources at all for those users.
You export it to a shared folder that people can access, and they can just pick the files up there.
But in addition to that, you can set up a notification database, which I kind of touched on towards the end of the deck there.
The notification database is going to be separate from your Cognos content store database. It’s going to have all those outputs, it’s going to have all the schedules.
All that activity gets stored there, which makes things much cleaner and more efficient than having your content store database deal with it. That has helped customers who have schedules just running all day long, tons of activity.
It really makes a huge difference to take all that noise out of the content store database and put it in the notification database.
So it sounds like that might be the better solution for, for what you’re asking there.
Great. Thanks. So, I think we should wrap up here, because we are a little over time. If we didn’t get to your question here today, again, we’ll go through the questions and provide the list of answers to all the attendees after the fact. So keep an eye out for that. Thank you, Todd, for sharing all of your wisdom today from all these years in the Cognos world, and thank you to all of you for attending. We look forward to having you at a future webinar, and we’d love for you to reach out to us with any questions.
And if you need any help with your Cognos performance tuning needs, you can contact us at Senturus.com, by e-mail, or by telephone. We’d love to hear from you.
Always happy to help. Always happy to share our experience. So, with that, let’s wrap it up. And have a great rest of the day, everyone.