Forget Watson, ChatGPT, AI…we’ve got Cognos admin guru Todd Schuman! Todd heads up our Cognos installation, upgrade and performance tuning practice.
Catch this on-demand webinar, in which Todd hosted an informal Q&A to help folks in the Cognos community clear up their performance tuning questions. Report failures, system stability issues, crashing dispatchers and other backend performance issues were all fair game for this Talk with Todd. Tune in to hear what was discussed.
Cognos Installation, Upgrade and Performance Tuning Practice Lead
Todd has over 20 years of business analytics experience across multiple industries. He also regularly connects with the IBM product development team and has been at the forefront of the Cognos Analytics upgrade and installation process since it debuted.
Q: What can we do to help prevent Cognos Analytics reports that get stuck in pending status?
A: The first thing you can do is check the run history for more information. The most typical causes for reports getting stuck are an SMTP handoff issue, a job stuck in the NC (notification) tables, or expired credentials. The fixes are, respectively: correct the SMTP settings (TLS, SSL, user ID/password), run the NC_DROP SQL scripts against the notification database, or renew the credentials.
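To rule out the SMTP handoff quickly, you can test the mail server connection outside of Cognos. Here is a minimal Python sketch using the standard library; the host, port and credentials shown are placeholders, not values from this webinar:

```python
import smtplib

def check_smtp(host, port=587, user=None, password=None):
    """Return True if we can connect, negotiate TLS and (optionally) log in."""
    try:
        with smtplib.SMTP(host, port, timeout=10) as smtp:
            smtp.starttls()  # Office 365 requires TLS
            if user and password:
                smtp.login(user, password)
        return True
    except (smtplib.SMTPException, OSError):
        return False

# Placeholder host/credentials; substitute your real mail server:
# check_smtp("smtp.office365.com", 587, "cognos@example.com", "app-password")
```

If this returns False with the same settings Cognos is using, the problem is on the mail server side rather than in Cognos.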
Q: What’s the best configuration for Cognos Analytics DQM; should we tune one specific server for DQM/query service and route all DQM traffic there? What are the pros and cons?
A: Unfortunately, it depends. That said, there is usually no need to dedicate a server to DQM: 32-bit dispatchers can run both CQM and DQM.
Q: Sometimes our Cognos Analytics is so busy that admin cannot sign into the portal because it takes a long time to do that. Can we do anything (other than restart the server) to resolve this?
A: Since your server is being over-utilized, adjust the high/low affinity requests and connections on the batch/report services. Use audit reports to view usage over time and see when and why the server gets too busy. With that information, you’ll be able to troubleshoot.
Q: Our Cognos Analytics reports are running slower than usual. What is the best order for checking the system or the reports?
A: The first thing we recommend is to check your Cognos Analytics CPU/RAM use, then check your database CPU/RAM use. Then, use Audit Reports to identify the times of day/week/month when the system is busy. And lastly, check your IPA/logs. The options to fix these include adjusting the dispatcher’s high/low affinity and connections on report/batch services, scheduling reports to run outside the busy DB window and using datasets.
Q: What are best practices around tuning the number of BIBus processes in Cognos Analytics?
A: BIBus is controlled via the report/batch services: high/low affinity settings, number of connections and peak/non-peak times, with roughly 2-4 GB RAM per process. Learn more in the chat question on memory limits for 32-bit and 64-bit processes.
Q: How can we identify the limit of the Cognos Analytics report count and schedule jobs on each server?
A: Check the server specifics in audit reports, use routing rules to force traffic to particular servers, and consider dedicated report/batch servers.
Q: What are the advantages of the Cognos Analytics AI assistant?
A: The AI assistant lets you ask natural-language questions and discover insights.
Q: Where can we get Cognos Analytics setup information?
A: Using IPPA in IBM Cognos Analytics 11.2.
Q: Have you run into the situation where Cognos Analytics non-root users suddenly lose access to various packages with an intermittently occurring CM-REQ-4011? We receive an error that says, “You do not have permission to access the object ‘/…/…/model’” when running reports. It recurs every few months, despite no permission changes being made. What suggestions do you have to fix it, other than restarting the dispatchers?
A: This sounds like an environment specific issue. Contact us if you would like help.
Q: Are there any concerns about going from CQM to DQM? We have upgraded from Cognos Analytics 10 to version 11 and there are some performance concerns. What can we do to improve the performance?
A: Check the timestamp of the dump file creation and try to determine if there is a specific report running (and crashing) at that time. Also check the Framework Manager Governor Settings for DQM.
Q: Our Cognos Analytics Administration Console seems to only load in the morning or evening. When I try to load during regular business hours, I get a proxy error message. Why is this happening and how can I fix it?
A: It sounds like your server is overused during the day. You should lower and adjust your high/low affinity requests.
Q: Is there a way to tell which Cognos Analytics users need to renew their credentials?
A: Not that we know of, but there are SDK scripts that can auto-renew.
Q: When a user runs a report with criteria that pulls a lot of records, how and where in Cognos Analytics can I see what the user selected in the prompts?
A: Try this:
Logging Selected Report Parameters in the Audit DB (ibm.com).
Q: What is IPA in Cognos Analytics? How is it different from the audit extension?
A: IPA is Interactive Performance Analyzer. It’s used to show query and render time for all objects on a page to help find where bottlenecks are in slow running reports. The audit extension is more of an inventory tool.
Q: Is it necessary to restart Cognos Analytics on a regular basis? Especially if we stop the Oracle database nightly to perform a backup. Currently we are not stopping Cognos because it recovers from loss of connection to the Oracle content manager database, but occasionally we notice weirdness with logins that restarting the service corrects.
A: It shouldn’t be required but we have seen organizations that schedule a restart weekly to help free up orphaned processes.
Q: I upload Excel spreadsheets to append (not replace) my data on Cognos Analytics and I have a lot of historical data already on Cognos. When I add a new field to my Excel, it does not let me append the data. What is the solution to this?
A: This sounds like an environment specific issue. Contact us if you would like help.
Q: We schedule reports to run at intervals daily. Frequently, batches will not run as scheduled. After Cognos Analytics services restart, then schedules will run normally. How do we overcome this type of behavior and are there any fixes for them?
A: This sounds like an environment specific issue. Contact us if you would like help.
Q: We have Cognos Analytics 11.1.7 with a load balancer (LB) from AWS ==> 3 dispatchers. The load balancing is set to cluster compatible to allow the AWS LB to manage it. How are the batch (scheduled) reports load balanced, and what manages that?
A: Cognos should manage it, but you can use routing rules to direct traffic to specific servers. Contact us if you would like help.
Q: Can we monitor Cognos Analytics with tools like Prometheus and Grafana?
A: We are not familiar with those tools but you should be able to monitor Cognos processes on the server.
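For what it’s worth, Prometheus works by scraping a plain-text metrics endpoint, so a minimal exporter only needs to count the Cognos processes and print them in that format. A hedged sketch follows; BIBusTKServerMain is assumed here to be the BIBus process name, so verify it against your own process list:

```python
def count_processes(cmdlines, needle="BIBusTKServerMain"):
    """Count command lines containing the given process name.

    BIBusTKServerMain is, to our knowledge, the usual name of the Cognos
    BIBus process; treat that as an assumption and adjust for your install.
    Feed this the command lines from `ps` (or tasklist on Windows).
    """
    return sum(1 for c in cmdlines if needle in c)

def to_prometheus(metrics):
    """Render a {name: value} dict in Prometheus text exposition format."""
    lines = []
    for name, value in sorted(metrics.items()):
        lines.append("# TYPE %s gauge" % name)
        lines.append("%s %s" % (name, value))
    return "\n".join(lines) + "\n"

sample = ["java -cp ... dispatcher", "BIBusTKServerMain -async", "cgsServer"]
print(to_prometheus({"cognos_bibus_processes": count_processes(sample)}))
```

Serve that text on an HTTP port, point a Prometheus scrape job at it, and Grafana can chart the gauge over time.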
Q: We are having issues trying to track down which Cognos Analytics reports use columns changed in the master database. Is third-party software the only option to see this?
A: This sounds like a good fit for the Senturus Migration Assistant. It can give you all the reports that use a specific column (along with many other insights).
Q: Is it possible to prevent the Cognos Analytics report author from overriding the report level query hint (e.g. max row retrieve, local cache, etc.)?
A: We are not aware of a way to do this.
Q: How can we prevent a DQM report from getting stuck in query execution and generating XQE temporary files? Our XQE file grew to over 270 GB in three hours before cloud ops killed the query service to remove it. We were using Cognos Analytics 11.2.2 at the time.
A: This sounds like an environment specific issue. Contact us if you would like help.
Q: Which versions of Cognos Analytics are having issues with scheduled reporting? We are planning to upgrade to 11.2.4. Does this issue with schedule reports exist in 11.2.4?
A: No, we are not aware of any issues with scheduling.
Q: Is it standard to stop Cognos Analytics services before adjusting most configurations?
A: Most configuration changes require the services to be stopped and restarted.
Okay, welcome everybody to today’s webinar, Talk with Todd.
Cognos Performance Tuning Q&A.
Today’s agenda: we’ll do some quick introductions, then get into your questions.
We received questions both about a week and a half to two weeks ago, and some just a couple days ago.
And then we will probably obviously receive some today during the Q&A panel.
So I’ll try to get them in the order they came in.
And if, like I said, if I don’t get to all of them today, I will try to post a written response on our website.
So don’t worry if I don’t get to you, we will get a response out there.
So just be patient.
We have a lot of questions coming in.
Once we get to the questions, if there’s time I did want to do the IPA demo, which is a newer tool, somewhat similar to analysis tools you may have used in the past.
It allows you to get some really specific timelines and performance detail around slow-running reports.
So I wanted to show that and then we’ll close out with some additional resources.
Touch on Senturus and then you’ll get to some more Q&A the very end.
Quick introduction on myself.
I’m Todd Schuman.
I run the Cognos installations, upgrades and optimization practice here at Senturus.
I’ve been working with Cognos since about 2000.
I’m going back to the old days of, you know, Cognos Impromptu and, you know, staying as current as I can on the new releases.
You know, currently Cognos 12, which I’ll be running some demos from today.
So if you haven’t seen Cognos 12, you get a chance to see that as well.
Let’s get into some questions.
These are the ones that kind of came in, you know, 7 or 10 days ago and then I’ll kind of get to as many as I can here.
First one I got is, you know, what can we do to help prevent reports that get stuck in a pending status.
And I usually recommend, you know, trying to get some more information, because pending is sort of a generic status.
There’s lots of things that can cause this.
The first thing I would do is you know what is the Cognos run history telling you?
Is it giving you any clue as to where the issue is?
I usually see three main causes for this, the first one being that mail server issue, which a lot of organizations run into.
You know Cognos has generated a PDF or an Excel or some kind of report output on a schedule and it’s trying to pass it out to the mail server and there’s something incorrectly configured on that mail server.
So it’s just stuck in a waiting state and hasn’t been passed off, which is going to be related to just getting your mail server connection and SMTP settings correct.
Another issue is jobs can get stuck in the notification tables and depending on how you have that set up, you know you may have split off a separate notification database.
If not, it’s going to be a set of tables in your content store database, and those contain all the jobs that Cognos has to process.
So if you crashed or things were kind of stuck in limbo in there, you’re going to need to kind of clear them out.
And there’s some scripts that I can show you that will do that.
And then the third major one is just expired credentials. Occasionally, due to various changes in your environment, you need to renew your credentials, and it’s very easy to do.
I’ll show that as well.
So again, the first one is SMTP. Just a couple quick notes on that.
I think I’ve got Cognos here; in your notification settings, you’re pretty limited on what you can configure.
You’ve got, you know, the mail server, you may have a user ID and password, a default sender, and then whether you want to turn SSL encryption on.
So not a whole lot of settings here.
There are some settings to turn on TLS if you’re using Office 365, it’s required.
There are also some settings in the dispatchers under the delivery service where you can set the default sender, since some mail servers require mail to come from a specific account even if you have the account and password correct.
So again if you’re having issues with your mail server reach out in the chat, we can try to take a look at that offline.
But typically, like I said, it’s very limited what we can do on the Cognos server; it’s mostly on the mail server. But there are a couple key settings that, if wrong, can keep schedules from going out and leave them stuck in those waiting queues.
The other one on the list was the Notification store tables.
So there’s scripts in your Cognos installation directory.
This is where I’ve got it.
I’ve got it on E: under Program Files, IBM Cognos Analytics 12, and in the configuration directory there is, I think, a schemas folder.
You’ve got a bunch of scripts that live here that can do various things to kind of help administrators.
This one specifically is under Delivery and they break them out into the different databases that it supports.
So if you’re on Oracle, obviously it’s Oracle.
You know, DB2 if you’re on DB2.
But I’m in SQL Server so I’m usually running these.
The main one here is this NC DROP.
So if you’ve got a bunch of jobs stuck in your queue that aren’t, you know, going away, you can just grab these really basic scripts here and just run it directly against your notification database.
Again, you may have that as a separate database.
I do recommend a separate notification database.
If you’ve been to my webinars, it’s definitely a thing that can help speed up your environment.
Because again, these tables are all going to be part of the standard content store if you don’t specify separating them.
And if you are a very delivery- and schedule-heavy organization, you’re going to be constantly churning and running lots of connections through your content store database.
So by separating that out, it allows the content store database to focus on just the primary content manager tasks and not have to deal with all these notification tasks.
So again, run this on the correct database. It’s going to basically drop those tables, and when you restart Cognos it’s going to recreate them fresh, and you won’t have any of those lingering tasks in your list there.
And then finally the credentials.
I believe the buttons move around across versions, but it should be under Profile and settings, Credentials.
There’s just this renew button here so it’s possible your credentials just need to be renewed.
You can just click this button and it’ll update them for you.
So those are sort of the three main causes, I would say, that can leave these reports stuck in a pending or waiting status.
If it’s none of those, again reach out separately.
Just drop us a note in the chat.
We could try to, you know, set up a one off to look at something more specific, but there’s so many things that can happen.
I just find that when I run into them, it’s usually one of these three.
And there’s a lot of questions.
You’ll see there’s some overlap on these questions about configurations for DQM, tuning the DQM query service and routing traffic.
So the question was, you know, what’s the best configuration for DQM?
Should we tune one specific server for DQM Query service and then route all the traffic there?
What are the pros and cons?
And like almost everything we’re touching today, you know, it depends on many different factors.
You know, 32-bit dispatchers can run CQM and DQM, so there’s nothing preventing you from running DQM on a 32-bit server, or mixing and matching them.
Basically, since Cognos 10, DQM has been available to run on the same server as CQM.
It only changes when you switch the report server execution mode to 100 percent 64-bit. Where did my server go...
In the configuration, there’s this setting here, report server execution mode, and you can make it 32-bit or 64-bit.
You really don’t get a whole lot, in my opinion, by making this change.
You’re just going to basically eliminate the ability to run CQM.
I’d say the only real reason to do this is if you want to block that ability.
You know, say you’re a relatively new Cognos customer.
You don’t have a lot of legacy 32 bit CQM packages and reports running.
You can prevent them from being able to create or build anything in CQM by going 64-bit, but you’re not going to see a whole bunch of performance enhancements or any kind of major benefit by switching to 64-bit.
So I usually recommend to leave it that way.
It’s obviously much more difficult to migrate all your CQM content to DQM.
Especially if you’ve been a long-time Cognos customer, you’ve probably got a lot of stuff out there, and my usual recommendation is, you know, you don’t have to go in and recreate everything, but for all new development I would recommend using DQM.
So as far as tuning that, I mean, there are settings in the dispatcher around the query service.
So if you go to your dispatchers, and I’ve only got a single one here, you can go into, you know, the query service and you’ve got specific tuning settings here.
The main ones being, you know these are the two that you know, I typically focus on.
There’s an initial heap size and then the Max heap size.
So I find it helpful to bump this up: if you’re using DQM, have it start at maybe 4 GB instead of 1,000 MB, and then if you’re running a lot you can exceed the default maximum, which I think is 8,000 MB, and make it 12,000 or 16,000.
If you’ve got a really beefy VM with, you know, 64 or 128 GB of RAM, you can make it even bigger, obviously.
So these are going to be the main two settings for tuning your DQM and your Java heap.
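As a rough illustration of those heap rules of thumb (about 4 GB initial when using DQM, a default maximum around 8 GB, and 12 to 16 GB on bigger VMs), here is a tiny helper. The tiers come from the guidance above, not from official IBM sizing documentation, so treat them as a starting point only:

```python
def suggest_query_service_heap(server_ram_gb):
    """Rough, unofficial rule of thumb for the query service JVM heap (MB),
    based on the guidance in this talk; validate against your own workload
    before applying anything."""
    if server_ram_gb >= 64:
        return 4096, 16384  # big VM: start at 4 GB, allow up to 16 GB
    if server_ram_gb >= 32:
        return 4096, 12288  # mid-size: 4 GB initial, 12 GB max
    return 1024, 8192       # roughly the Cognos defaults

initial_mb, max_mb = suggest_query_service_heap(64)
print(initial_mb, max_mb)
```

Whatever values you land on go into the query service’s initial JVM heap size and JVM heap size limit settings on the dispatcher.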
And then you know as far as you know load balancing, you know specific servers, you can do that as well.
I don’t really see a whole lot of benefit in having one fully dedicated DQM server and one not, unless again you’re trying to move everything off CQM and you want things to only run in 64-bit mode.
But there’s not really a huge pro or con to separating out DQM and the query service.
I’ll touch on some routing rules.
There’s another question in a couple slides about, you know, load balancing between batch and interactive.
But as far as DQM, again, there’s really no reason to separate them.
32-bit will handle DQM just fine; you’re not going to see any major difference by switching to 64-bit execution mode.
So the next question is: our Cognos is sometimes so busy that the admin cannot sign into the portal, and it takes a long time to do that.
What can we do other than restart the server?
This is another question that came in in several different ways and a pretty common question.
So typically this usually means that the server is over utilized, that it’s not able to handle any additional requests because it’s so busy.
That includes, you know, getting into the admin screens and being able to see what’s going on, which is obviously a big problem.
You know you can’t tune or troubleshoot if you can’t see what’s going on.
So typically the best way to do that is to adjust, you know your high and low affinity requests and the connections on the batch and report services.
So if you’ve been to any of my other webinars in the past, I touch on you know how these work and I’ll go into these a little bit today.
I don’t want to go too deep into this, but essentially you know you have high and low affinity requests.
The low affinity requests are going to be the reports that are running.
So how many reports can I run?
The high affinity requests are things where you want, you know, split-second response times: navigating the portal, clicking folders, getting through those admin screens. Those are high affinity requests, to view the different things that are in there.
So most likely there aren’t enough connections, not enough high affinity requests, to be able to see the admin console or get a page to load successfully.
So by lowering the number of connections, or lowering the number of low affinity requests, you’re going to be able to free up and give the server a little bit of breathing room.
Another thing to do, if it’s just certain times of the day or certain times of the month or year, is to create audit reports that show usage over time.
You know, I usually like to create a monthly, a weekly and a daily view that just show spikes and dips in usage.
And you can easily build these with the audit package, assuming you have audit logging turned on. You could see, you know, at a certain time of the month, maybe the middle of the month or the first or the end of the month, close-type deals when things are getting really busy, and then you know ahead of time, OK, this is when it’s going to get busy, and you can proactively make some changes to scale things up or down.
If you have that ability, you can also see, you know, on like a more granular level in a specific day.
You know, it’s quiet until 2:00 PM and then things, you know, get crazy because there’s 35 jobs that all run exactly at 2:00 PM.
So again, it’s going to depend on every single organization, how things are set up, what the activities are, what the business needs are.
But by using audit reports and just monitoring the CPU, checking these high and low affinity requests and the connections, specifically on the batch and report services, it should help you find why it’s getting busy and put some governance in place to make sure it’s not locking you out of important admin screens.
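As a sketch of the kind of audit analysis described above: given a list of report-run timestamps (for example, pulled from the audit database; COGIPF_RUNREPORT is the table name in the standard audit schema, but verify against your own audit package), you can bucket them by hour of day to find the spikes. The sample data here is made up:

```python
from collections import Counter
from datetime import datetime

def busiest_hours(timestamps, top=3):
    """Given ISO-format report-run timestamps, return the busiest hours
    of the day as (hour, count) pairs, busiest first."""
    counts = Counter(datetime.fromisoformat(t).hour for t in timestamps)
    return counts.most_common(top)

# Illustrative timestamps only; in practice, query your audit database.
runs = ["2024-05-01T14:05:00", "2024-05-01T14:20:00",
        "2024-05-02T14:45:00", "2024-05-01T09:10:00"]
print(busiest_hours(runs))  # [(14, 3), (9, 1)]
```

The same grouping by day of week or day of month gives you the monthly and weekly views mentioned above.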
The next one is similarly related.
Our reports are running slower than usual; what is the best order for checking the system or the reports?
Again, it’s most likely there’s just too many things are running at once.
You need to kind of put some restrictions on the dispatcher and lower the number of requests that it can handle.
So instead of having, you know, 12 reports all running at the same time and everything running slow because there isn’t enough capacity on the server, it’d be better to cap it at 8.
Yes, you have four people who have to wait for their report to run, but at least it’s going through the queue; it’s not locking up and bogging everything down.
So one of the biggest things I see with, you know, unstable environments and slow-running reports is that they’ve modified those high and low affinity settings and the connections to something much higher than that server can handle.
You know, leaving the default, the one that’s in there, is usually a pretty safe bet; it won’t get you in too much trouble.
You may want to bump it up if you have, you know, a lot more resources.
I think the default that Cognos sort of specs for is maybe 2 to 4 CPUs and like 16 or 32 gigs of RAM, which is sort of just like a basic installation.
But if you’ve got a lot more than that, you know, eight CPUs and 120 GB of RAM, you can obviously bump that up. But you want to be careful how far you bump it up, because if you’re taking on way too many reports, nothing’s going to get done.
So typically, you know, check when this is happening, look at the server itself, what’s your CPU, RAM utilization look like.
It’s possible it could not be Cognos.
It could be the database.
You know maybe there’s too many requests going to the database.
If you have the ability to see that information, or can work with the DBA, see what it looks like on their end.
You know, how are these queries running? Is the database handling it fine, or is it overworked? And again, audit reports are really helpful here as well.
I’m going to kind of do a little bit of a deeper dive on audit reporting in a little bit.
But being able to see, you know, whether there’s a certain time this happens frequently, you’ll be able to see that with some of these audit reports.
You know, is it getting extra busy every day at a certain time or certain day of the month, things like that.
And then IPA, the Interactive Performance Analyzer, and the logs themselves can also be helpful.
You know, if there’s errors in there, you can check the Cognos server logs, there’s various logs in the installation directory that may point you in the right direction as well.
So you can go in there and see if there’s anything glaring, you know, a major issue or major error, which obviously would be sort of a one-off thing you have to deal with.
But it’s sort of a good last resort to check those logs.
So as far as tuning these, and I mentioned the high/low affinity and the connections on the batch and report services, we do these, we call them health checks, where we review environments that customers are having issues with and provide recommendations.
And we had one just a couple weeks ago, a month ago, where one of the reasons it was so busy was that all the schedules were set to run at a certain time of the day instead of being staggered out, you know, later in the hour or later in the day; the server would just take on 100-plus reports all at 9:00 AM every day and things would come to a complete halt.
So by moving them out and putting some controls on the dispatcher services, it was able to free up a lot of those resources, process through that list and get through it, and then be able to handle all the live interactive requests coming in as well.
And then finally, I love data sets.
I talk about them too.
You know, if performance is an issue, data sets are kind of a get-out-of-jail-free card; these are great.
You can load these on a schedule and take the database, a lot of the query plans and explain plans, and any network connectivity out of the equation; just pre-cache the data into a very fast data set and build reports off of that.
It should be extremely quick, and these data sets are one of the best things, I think, that they’ve added in Cognos 11.
If you know tools like Tableau or Power BI, it’s like a Tableau Hyper extract or a Power BI import, where you’re going to pull all the data into Cognos itself, and it’s just optimized for reporting, extremely fast.
So that’s sort of a last resort because it’s going to be, you know, more specific to probably one report, but you can definitely have them, you know, address multiple reports if you kind of take a step back and look at, you know, here’s all these different reports that are accessing this specific data which seems to be a bottleneck.
Let’s build a data set and you know, maybe tie it together with some data modules and you can kind of build something that’s going to be very, very fast for reporting.
What are best practices around tuning the number of BIBus processes?
Again, similar questions: these are all going to be controlled via the report and batch services, the high and low affinities, number of connections, and things like peak and non-peak.
The BIBus, again, is going to be the CQM engine.
So you’ve got, you know, 2 to 4 GB of RAM as sort of a cap per process, depending on whether it’s a 32-bit or 64-bit operating system, and I believe it’s pretty rare that I see 32-bit anymore.
But I’m sure they’re out there.
And if you’re still running Cognos 10... I think I saw in one of the polls in one of the webinars that someone was running Cognos 8 somehow, which was mind-boggling.
But these are all dispatcher settings that you can get to.
There are my properties; you go to Tuning.
You know, I talk about this in lots of my other webinars, but there are 87 or so different settings here, which is very overwhelming.
But as I kind of say, you know, there’s a lot of repetition.
You’ve just got these three: number of high affinity connections for batch, number of low affinity connections for batch and number of processes for batch.
That’s during non-peak.
The same three exist again somewhere else for peak, and then swap out the word batch for report and you’ve got, you know, three for report during non-peak and three for report during peak.
So it’s very repetitive.
So you know what to look for and you kind of tune these a bit.
You can either lower or increase depending on you know what your utilization looks like.
So again, if you’re looking at your busy time and you go to your server and you see, you know, pretty low CPU utilization right now, less than half of my RAM being used, and no one’s really on it, this is just sort of a test sandbox Senturus server.
But if this were, you know, a peak day and I had all my users in here and I saw things like this, I mean, I’m very underutilized, so I can bump things up.
I have enough RAM and CPU to allow the ability to increase, you know, these settings here.
If I come in and I see, you know, CPU at 100% and my RAM almost completely topped off, I probably want to scale it back down a bit.
It’s probably too high.
There’s too many things running at once.
So by tuning these different groupings, you can find a happy medium in there, so that things are running as well as they can without overloading the server.
And you’re also not wasting resources running an overpowered server that’s not doing enough to justify the hardware you’ve got on there.
How can we identify the limit of the report count and schedule jobs on each server?
I wasn’t exactly sure what this question meant, but I’m assuming you know you want to schedule jobs to run on different servers.
So, you know, I touched on the audit reports, and then there are routing rules.
Do I have it in here?
There it is, but with routing rules.
If you haven’t heard of these or haven’t used these, these are really useful.
Again, this assumes you have a multi-tiered environment, so you’ve got at least two dispatchers or more.
But you can go ahead and say, you know, I want to route traffic, and you have three dispatchers.
You can base it on packages, on groups or on roles, and you could say, you know, for this package I want it to run against this server, or that package goes against that server.
Or you could split it up in various different ways and just direct traffic, route traffic, to different servers.
And one of the best ways I’ve seen to do this is when you’ve got an environment where you can’t just schedule all the jobs in off hours, non-peak hours.
You can’t just say, I’m going to start scheduling all these reports at 6:00 PM and let them run until 6:00 AM.
The organization requires, you know, reports getting sent out all through the day.
There’s no downtime or ability to push that work away.
So because of that, it’s very, very busy, and the people who are actually going in there and trying to run reports are waiting a longer time.
They’re seeing that spinning blue wheel because, you know, a lot of the resources are chewed up by all these things running in the background, sending out emails and jobs.
So to kind of offset that you can dedicate a specific server to be, you know, batch mode only and route the traffic to that.
So if it takes a little bit longer for someone to get an email in their inbox, you know, they’re not going to notice as much as a person who’s sitting there staring at that blue wheel and getting frustrated because it’s taking a long time.
So by splitting it up, they don’t see any degradation in performance, because each server is optimized just for that user type, and you have all the jobs in the background running and going to a specific server.
So routing rules are going to be a great way to do that.
Here’s a screenshot from a customer we worked with a little while ago that has this in place. I blurred the IPs, but the dispatchers highlighted in red here are dedicated batch servers. You can see the different request types: 153,000 and 180,000 requests going to the batch service, and only a very minor amount, around 900, going through the report service. Whereas the ones without the red have only a couple thousand batch requests but 319,000 and 325,000 going to the report-specific services. That’s exactly what I’m talking about: dedicated batch, dedicated report, and it keeps everybody happy without causing performance to go down. With the IP addresses you can see which servers are doing what, how many processes they run and how many requests each one is handling. By looking at it this way, you get a better view of what’s going on with each server, what kind of requests it’s handling, and if there’s an issue, you can set up routing rules to push traffic one way or the other.
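To make the first-match behavior of routing rules concrete, here is a rough sketch in Python. This is not a real Cognos API; the rule list, package names and server group names are all made up for illustration, but the idea is the same as the ordered rule list in the admin console: each rule matches on package, group or role, and the first rule that matches decides the server group.

```python
# Hypothetical model of Cognos routing rules: ordered list, first match wins.
RULES = [
    {"package": "Finance",   "server_group": "batch_servers"},
    {"group":   "Analysts",  "server_group": "report_servers"},
    {"role":    "Scheduler", "server_group": "batch_servers"},
]

DEFAULT_GROUP = "report_servers"  # traffic with no matching rule

def route(request):
    """Return the server group for a request dict with package/group/role keys."""
    for rule in RULES:
        # A rule matches if every criterion it names is present in the request.
        if all(request.get(k) == v for k, v in rule.items() if k != "server_group"):
            return rule["server_group"]
    return DEFAULT_GROUP
```

So a scheduled Finance job lands on the batch servers while an analyst’s interactive run stays on the report servers, which is the batch/interactive split described above.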
There’s a question that, again, I am probably not the best person to answer: what are some of the advantages of the AI Assistant? This is more for a data scientist type role. Each version of Cognos leans into this more and more; it’s sort of the hot new thing. It’s artificial intelligence, ChatGPT-type functionality, to be able to ask questions and get insights that maybe wouldn’t be obvious to you. It’s kind of the face of the new Cognos 12: it’s front and center, baked right in, and you can just ask it questions and things like that. It can build dashboards; you can just ask it to create a dashboard for you, which is kind of cool. Whether or not it’s going to be helpful, I don’t know.
But you can open up a package and it can provide some insights. You can say “build a dashboard,” I believe; hopefully this works. I know they’ve also added a lot of natural language support as well, so once you’ve got a data set or model loaded, you can ask questions like, show me a year-over-year analysis of revenue for product X, and I think it can build things like that for you. Right now, with just a quick request, it’s built me this three-tab dashboard. I don’t know if anything here is really helpful, but it’s kind of cool that it did this for me, and I can go through and see the different things it thinks may be relevant from the quick scan it did. So expect to see a lot more of this as Cognos 12 gets more releases; it’s got value.
Again, I’m more on the administration, report development and modeling side, not so much the data scientist role, so I’m not really the right person. I know we have a Cognos 12 what’s-new webinar coming up in a few weeks, and that would be a good one to ask this question in. Folks from IBM will be presenting, and they can probably give you a better demo or more information about the roadmap for where this is headed. But I know it’s got a lot of hype, and this is the new area; you can see they’re putting it front and forward in Cognos 12. It’s the top item here, and it’s also baked in up here, so it’s everywhere. Okay.
I wanted to show this performance assistant, the product performance assistant that’s built into the reporting tool. There’s an add-on in Cognos 12 now, called the IPPA, which can give you some really cool information on poor-performing reports as to where the issue is.
One thing to know, and this drove me crazy when it first came out: they teased this as something new in Cognos 12, and there’s a link on how to set it up, but it’s not correct. This is actually the correct link; it’s not done in the area they tell you, it’s set up in a different place. The first document, if you search for it, tells you to put it into this configuration area, where there are some advanced settings you can type in to do different things. Typically you don’t have to touch this; they’ve had some things that had to be set here in the past, like hiding banners and stuff like that. That’s where they were saying to turn it on, and it’s not correct. It actually has to go into the dispatcher’s advanced settings, the ippa enable property set to true. So you have to put that in.
As soon as you put that in, it’s going to create about 10 new tables in your notification database if you have one; otherwise they go into the content store. So again, another reason to separate the notification database: you’re going to have even more tables living in there if you turn this on. Once you’ve got it on, they give you a bunch of packages, built-in visualizations and built-in reports that are all drill-through related. What’s cool is what you can then go and see.
Let me run one of my samples here, sales growth year over year; this is just one of the standard Cognos reports. The regular IPA has been here since, I want to say, 11.0. You just check this box under run options that says include performance details, and note it’s HTML output only to get this information. If I run it, it’s going to show me the amount of time it takes to render and query the data for each one of these objects. I’ve got three objects here: two visualizations and, I guess, a list. For the chart, the execution took about 3 1/2 seconds total: less than half a second to render, and three seconds to query the data. Down here, 0.2 seconds to render and 0.8 seconds to query, because it’s using the same data. And finally over here, the list took 33 milliseconds to render and about 3 1/2 seconds to query its data.
So if you’ve got a slow-running report, that’s a good start. You can see whether a specific object on the page, a list, a crosstab, a visualization, has numbers that just jump out at you. If you have 3 1/2 seconds here and something else is way out at 10,000, okay, that’s a good place to look.
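That kind of triage, rank the page objects by total render plus query time and look at the worst one first, can be sketched in a few lines. The numbers below are the rough ones from the demo (in milliseconds) and the object names are made up; it’s just an illustration of the idea, not anything Cognos exposes programmatically.

```python
# Timings roughly as seen in the IPA demo above (milliseconds, illustrative names).
timings = {
    "column chart": {"render": 500, "query": 3000},
    "pie chart":    {"render": 200, "query": 800},
    "list":         {"render": 33,  "query": 3500},
}

def slowest_objects(timings):
    """Return object names sorted by render + query time, slowest first."""
    return sorted(timings, key=lambda name: sum(timings[name].values()), reverse=True)
```

Here the list wins at about 3.5 seconds, almost all of it query time, which tells you to look at the query rather than the layout.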
What makes this even better is the total page entry: here’s what the whole page took. This used to just be a dead link, but now, because I’ve got the IPA enabled, it’s a drill-through. I can click on it, and it gives me a ton of information that’s even more helpful. Now I can see exactly what all the different services are. I’ve even got the process ID, which ties to the process ID on the server itself. So I can see this BI bus service took about, I’m sorry, 348, probably milliseconds, and I can go find that process ID, 13520, on the server. And here it is, so I can actually see the entire process; it tells me exactly what is running on the server itself. These are all tied to specific processes on the server, so you can see where the bottleneck is. In this case it’s actually the dataset Java service taking the longest, about 6 seconds. I can drill into this even further, into the query service, and just keep going down and down, and I can see it’s the JDBC result set that took the longest amount of time.
I’ve got things in here like the expression metadata: it shows me all the different fields the report is pulling and where they’re coming from in the model. I’ve got some macros in here, and it ranks these based on time, so you can see all the different fields. Going back one page, there’s also, I think, the physical model: it shows you the tables it’s going out and getting, the actual physical tables, and the time it takes to access each of them. So you can go down pretty deep into this and see where everything’s at.
You can also look at a Gantt chart that shows the entire report run and everything it did. It might be kind of hard to read, it’s a lot of information, but you can see the amount of time it takes from beginning to end and all the different processes that happened. You can filter on them: you could say, I want to see just the BI bus service, and then mouse over and see that first it was retrieving the specification, then sending the request to the server, then executing the thread. It gives you a very detailed view of everything it’s doing, in order, to help you find whatever is really slowing everything down. You can even interact with all these different prompts to focus on specific phases like execution and assembly, all the different things the query does.
I think this is a really nice feature that’s baked in. All you have to do is add that dispatcher setting and import the package that has the model, the visualizations and the reports; then, when you’re ready, turn on the IPA and you can drill into reports. So if you’ve got reports that have been driving you crazy and you can’t seem to figure out why, it’s very easy to set this up, in my opinion. You get a good look at the explain plan in Cognos and all the different things it’s doing, and you can try to pinpoint where you think the issue is. Pretty valuable, a nice feature they added, especially if you’re constantly tuning and looking to troubleshoot these things.
Let me jump back to the slides and then get to some additional questions. Make sure you use this link; the deck will be posted, but this first one isn’t going to help you, this is the one you want. Again, here are some screenshots of how to do it. You can drill into the detail with the different color coding: blue is Java, near the query engine; green is going to be your BI bus; and yellow is dataset services. So you get nice color coding as to what components are running, what’s taking the longest and how long things run for. Okay.
Let me grab some more questions from today. I had one that came in that asked: have you ever run into a situation where non-root users suddenly lose access to various packages with a random, intermittent “you don’t have permission to the object” error? It occurs every few months despite no permission changes being made. I don’t know that I’ve run into that before. The only thing I can think of is if you’re doing deployments from dev to QA to prod, something like that; it’s possible that when you promote something up, it carries different permissions. Other than that, I’d have to see exactly what was happening; I don’t know what that could be.
Then another CQM/DQM one: we run both CQM and DQM on a 32-bit server, which appears to be generating dump files. Are there any settings or advanced parameters to tweak on dispatchers running in CQM 32-bit execution mode that may reduce DQM-triggered dump file generation? Again, it’s a pretty specific one. I will say the dump files you get usually have a timestamp. One of the best ways to figure out why it’s happening is to look at either the audit log or, if you have it, your past activities. Do I still have Cognos open? You can go to past activities, if it’s not too far back, go to the specific date, and see if you can find something that was running at or during the time of the crash. Sometimes it’s just a bad report, something that was not built correctly: it’s got custom SQL, it’s got Cartesian joins. There are a lot of different things that could happen.
If it’s not a specific report and it seems like it’s more than that, there are some settings in Framework Manager. Since people ask about the DQM conversion: if you haven’t done it or aren’t familiar with it, it’s as simple as going to the properties. If you have a package that’s currently in CQM, its query mode is set to compatible; all you have to do is change it to dynamic and republish the package. You also need a JDBC connection: make sure the data source the package points to is set up for JDBC. That’s really all there is to converting to DQM, republish the package and have that JDBC data source.
The real challenge is the testing. DQM is much more strict; certain things may run fine in CQM that don’t run as well, or at all, in DQM. There are also some governors in here that you may need to tweak. If you’ve ever looked at this screen, down at the bottom, this whole last block has DQM in parentheses; these are all DQM-specific optimizations or governors. You may want to adjust some of these a little and see if any of them are the issue. Again, I don’t know exactly what you’re seeing or what’s causing it, but you can change these governors, possibly to get better performance or to fix things that aren’t running. These are the defaults, but they can be easily changed and republished. So maybe try changing one of them, republish, and see if it fixes anything for you. There are some DQM-specific governors in here that may or may not help with your issue.
Next question, same kind of gotcha, going from CQM to DQM: we upgraded from version 10, there are performance concerns, and we attribute them to the conversion. Is there anything to look at at a global level? Basically the same response: outside of knowing the models and being able to see them, I would check the governors. I would also create some basic reports off the model: do they run fine? Is it the report itself? There are certain possibly badly designed reports that could be causing an issue. It’s hard to know exactly what the problem is, but technically a DQM package should run fine. There must just be something in there, some logic or the way you modeled it, some bad joins or bad calculations, something either in the model or the report that’s causing the issue. Without being able to do a deep dive into it, I can’t think of anything else offhand that we could use to fix that.
Another question was about affinity settings; I think I mostly covered that. There are low and high affinities: low affinity is for requests you expect to be slower, like report runs, whereas high affinity requests are things like me clicking through here and navigating. So, back to that first question about the portal locking up: most likely you don’t have any high affinity requests available to let those interactions run. You should be able to navigate through fairly quickly. Sometimes these screens do take a little while: if you have a lot of stuff in the queue, Cognos has to query the content store database to show you what’s in there. It’s also possible your content store database is just overwhelmed; you don’t have enough resources to handle all the connections and queries to it. If the screens that are not static, like the statuses or the schedules, are taking a long time to load, it’s possible the issue is the content store itself. Do you have a lot of bloat? Do you have enough connections? Is the database itself properly sized to handle the amount of usage you have?
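The high/low affinity split in that answer can be modeled very simply: each service has a fixed number of slots per pool, a request needs a free slot in its own pool, and if the pool is exhausted the request waits. This is a toy model, not Cognos internals, but it shows why batch load filling the low-affinity pool doesn’t by itself lock out interactive clicks, while exhausting high-affinity slots does.

```python
# Toy model of a dispatcher's high/low affinity request slots (not Cognos internals).
class AffinityPools:
    def __init__(self, high_slots, low_slots):
        self.free = {"high": high_slots, "low": low_slots}

    def try_accept(self, affinity):
        """Take a slot if one is free; return False if the caller must wait."""
        if self.free[affinity] > 0:
            self.free[affinity] -= 1
            return True
        return False

    def release(self, affinity):
        """Give a slot back when the request completes."""
        self.free[affinity] += 1
```

Tuning the slot counts per service, as suggested in the head Q&A, is essentially resizing these two pools.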
As far as bloat, I wanted to talk about this as well; I forgot there’s a section here you can run to clean things up. The consistency check, if you haven’t heard of it, will scan the profiles that are created by users. If a user no longer exists in your LDAP or Active Directory, it’s going to find the mismatch, and you can have it either just find them or find and fix them. It’s good to fix them: that removes the old references and cleans up their My Folders if they have a bunch of junk in there.
The content removal and retention rules are also useful. Most likely people don’t set the number of versions or the number of outputs a report can keep. I’ve seen Cognos shops where a user, for 5 to 10 years, without even knowing they were doing it, was running a 100-page PDF every day and saving unlimited versions in the content store database. There was this huge amount of output just sitting there, bogging down the content store, because something kept running and saving snapshots of the same report over and over, never being cleaned up. So you can run these tasks and say, remove anything older than six months, or remove anything that’s got more than five versions. They’re pretty straightforward: you go through these wizards, create the tasks, and run them on a schedule to keep things from getting too outdated or people stacking up too many saved output versions. Those can help a lot.
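The retention policy those tasks apply, keep at most N versions and nothing older than a cutoff, is easy to state precisely. Here is a hedged sketch of the logic with made-up data; the real cleanup is done by the Cognos content maintenance tasks, not by code you write, but this shows exactly which saved outputs a "five versions, six months" policy would remove.

```python
# Sketch of a saved-output retention policy: keep the newest max_versions
# outputs that are also newer than the age cutoff; everything else goes.
from datetime import date, timedelta

def versions_to_delete(output_dates, max_versions=5, max_age_days=180, today=None):
    """Given saved-output dates for one report, return the dates to remove."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    newest_first = sorted(output_dates, reverse=True)
    keep = {d for d in newest_first[:max_versions] if d >= cutoff}
    return sorted(set(output_dates) - keep)
```

For a report with six recent outputs and one ancient one, the policy keeps the five newest and drops both the sixth recent copy and the ancient one.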
There’s also a script that can tell you whether you’re a candidate for this kind of cleanup. In that scripts folder, I think under content, you can get these profiling scripts, and they basically give you a complete dump of all the different object types you have: how many reports, how many query reports, how many data source connections. It also shows you the amount of file output saved in your content store database. So if you see that’s really large, or your DBAs complain, hey, your content store database is 300 gigabytes and it shouldn’t be that big, you can run some scripts to clean it up. This will tell you what’s out there, and if it is really large, you can run some of those cleanup scripts straight from here. That will really help a lot, I think, if you’ve got a lot of junk in there.
Another question: the admin console seems to only load in the morning or evening; when I try during regular business hours, I get a proxy error message. Again, most likely you’re trying to access it during a busy time and it doesn’t have enough connections available. I would try tuning those dispatcher settings: possibly increase the number of high affinity connections and lower the low, and see if that helps.
Let’s see, another question here about physical machines versus virtual machines for big data. I don’t know anything specific about big data. As for physical versus virtual, I feel like physical machines are becoming more and more rare; everything is either in AWS or the Microsoft cloud or a company cloud these days, and they’re all VMs. I do know there’s a little bit of performance loss with those, but nothing substantial; they’ve come a long way in the last 10 to 15 years. We used to allocate another 20 percent or so for a VM, so 1.2 virtual cores would equal one physical core, something like that. I don’t know if that’s still the case; I haven’t seen any white papers or documentation around that recently. But big data specifically, I don’t know; I wouldn’t think there would be any issue physical versus virtual, but again, I don’t know that much about the topic, so I’d have to look into it a little more. If you have more information, what type of big data or what exactly you’re doing, I could take a look at that.
Okay, let’s go to the Q&A panel now. I’ve got a question about dynamic cubes: the cube is slow to reload after new data has been loaded into the tables. Normally the reload takes 4 to 6 minutes, but after the data load it can take over 30 minutes to reload the cube and the aggregates. How can I speed this up and make it consistent? I honestly haven’t used dynamic cubes in several years, and I don’t know exactly why that would happen. That tool hasn’t been deprecated, but they haven’t really invested in it or made any changes or updates in a while, so we kind of stopped using it because it was difficult to administer. I’m happy to take a look offline, but I don’t know offhand exactly what’s going on. Dynamic cubes are challenging to tune, and the reloads and the aggregations can be a bit confusing to deal with. If you’d like, just reach out and we could set up a one-off look at that.
Another question on the audit extensions; if you attended one of my other webinars, I talked about this. These used to be something you had to download from the IBM website, with a whole configuration process. Now they’ve made it a little easier. I believe it’s just in the straight-up installation directory, under samples, audit extensions, so you don’t have to download it, it’s already here. There’s a WAR file; you used to have to build it with some scripts, but I believe it’s already built now, and you can just drop it into your WebSphere Liberty dropins folder here. I think you have to restart. There’s documentation here as well; if I go to docs, yeah, the whole setup’s here. I might do a blog post or video post on how to set this up, but it’s gotten much easier. This slide is probably a little outdated, but this is what they provide; it was sort of an unsupported thing they used to give you.
This is great because it fills the gap in the regular audit reports for the things that haven’t been run. Audit reports are driven by actual activity: if I have 100 reports and my users have only run 20 of them, and I run the audit reports to show me the usage, I’m only going to see those 20 reports. I’m not going to know about the rest; that data doesn’t exist because those reports have never been run. The audit extension gives me a full dump of all the objects, all the reports, all the users, even users who have never logged into the system but whose accounts are still there in Active Directory. So you can get a lot more information out of this than is currently possible with the standard audit reports. That’s the audit extension.
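The gap being described comes down to a set difference: the standard audit database only contains reports that have actually run, while the extension dumps the full inventory, so comparing the two surfaces the never-run reports. A minimal sketch, with made-up report names standing in for the two data sets:

```python
# Illustrative data: the full content store inventory (what the audit extension
# dumps) versus the run history (all the standard audit database ever sees).
all_reports = {"Sales Summary", "Inventory Detail", "HR Headcount", "Old Budget"}
run_history = {"Sales Summary", "Inventory Detail"}

def never_run(all_reports, run_history):
    """Reports that exist in the content store but have no run history at all."""
    return sorted(all_reports - run_history)
```

Those never-run reports (and never-logged-in users, by the same comparison) are exactly what you cannot see from the standard audit reports alone.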
So yeah, I’ll try to do a separate webinar or a blog post on setting this up and reviewing how it all works. It’s a bit more involved than the standard audits, where you just create a database and turn logging on; there’s some tuning, and some things you have to run and schedule. But it’s not terrible, and you’ve got these different deployments you can just import that have some predefined reports. The main work is getting the WAR file published and then making a couple of settings changes in the tool itself.
Let’s see: if a user ran a report with criteria that pull their records, how and where in Cognos can you see what the user selected in the prompts? Typically you have to see it on the database side. I know that in the data source connection, say this Great Outdoors sales one, I think this is where you put it, I haven’t done this in a while, you can put some values into these open and close session commands that will capture the parameters. I’ll have to double check; it’s been a long time since I’ve done this. Outside of watching the query run on the database, where it usually has the built-in where clauses, you know, fiscal year equals 2022 and product line equals camping equipment, something like that, it’s difficult to see that information in Cognos. But I know there are ways to do it; I believe it’s with these session commands. Let me research that a little and see.
Someone’s asking about uploading spreadsheets to append, not replace, data in Cognos: when I add a new field to the Excel file, it does not let me append the data. What is the solution? I have a lot of historical data already in Cognos. I haven’t done a whole lot of Excel spreadsheet reporting, so I might have to take that one offline as well and see what the deal is. You’re adding a new field and it’s not picking it up; I’ll have to check. Can you put the version you’re on in the chat as well? It might be something they’ve addressed in a newer version if you’re not current, maybe in a later release or in Cognos 12. I’ll have to check on that.
I’ll do one more here, then wrap up with some of the closing slides. Someone who just joined the webinar asks: I want to know what version of Analytics... we are having issues with scheduled reporting. I don’t know if I understand the question; you might need to give me a little more information on that one. Cognos has been able to support scheduled reporting since way back, Cognos 8 and earlier, so it’s not anything new, but I just don’t understand the phrasing of the question.
So please reword that if you can. Another question on the IPA: how is it different from the audit extension? I touched on that. The IPA is going to give you specifics like we saw: runtimes, explain plans, what different processes are running on the back end of the server to get that report out to the user. The audit extension is going to give you much more information about the overall environment itself: the objects, the capabilities, the users, things like that that aren’t easy to get out of the tool currently.
I’ll do one more. Here’s one: is it necessary to restart Cognos on a regular basis, especially if we stop the Oracle database nightly to perform a backup? We are currently not stopping Cognos, because it recovers from the loss of connection to Oracle, but we notice weirdness with logins occasionally that restarting the service corrects. You shouldn’t have to restart Cognos. I know a lot of people do, but if your Cognos environment isn’t running slowly, you don’t have errors and everything is working correctly, you shouldn’t really have to. I do know occasionally some BI processes or Java processes can get hung; sometimes someone will kick off a long job and cancel it halfway through, and things can get stuck. If that’s happening a lot, then it might make sense to schedule maybe a weekly restart. This environment I run in the sandbox here obviously isn’t getting nearly as much use as yours probably is, but it’ll stay up for months at a time with no issues. So unless there’s something like you said, where you’re bouncing the database, you shouldn’t really have to do it, though it can’t hurt. I know people can be very religious about scheduling a weekly restart on their servers, but you shouldn’t have to. If you are seeing things like that, just reach out offline and we can look and see if there’s something specific causing some connections to hang or some issues with Oracle. I don’t know anything offhand that should be doing that, unless it’s possibly a driver version issue or an incompatibility between versions; I’d have to see. Okay.
There are a couple more questions, but we only have about three more minutes here, so let me just go through the wrap-up slides and then we’ll end it, and I will follow up with a posted response to the ones I didn’t get to today.
Just a quick plug here: we offer different upgrade support levels, plus additional services, to ensure the success of your upgrade. We do hundreds of upgrades, installations and configurations every year, so if you need any help, definitely reach out. We’ve also got blogs, on-demand webinars and lots of other Cognos tips, so if this was helpful today and you want more, there’s plenty of free information out there. Just go to Senturus.com; the URL on the screen points to the Cognos intelligence section, and there are dozens and dozens of webinars going back 10 to 15 years on all kinds of topics that may be helpful. We’ve got free resources on our website in our knowledge center, including comparisons. We’ve been committed to sharing our BI expertise for over a decade; just go to Senturus.com resources.
We’ve got some upcoming events: External Tools for Power BI Desktop, Thursday, August 10th; Chat with Pat, Working with Time and Dates in Power BI, August 17th; Power BI Report Builder and Paginated Reports, August 24th; and another Chat with Pat, Framework Manager Modeling, Wednesday, September 20th. I know I touched on some of the FM stuff today; if you are doing DQM conversions, that one will do a deeper dive into FM modeling. So that might be a good time to ask questions, or to play with those governors I mentioned and bring follow-up questions for Pat; he can probably help you with those.
A little background on Senturus: we concentrate on BI modernizations and migrations across the entire BI stack, and provide a full spectrum of BI services; training in Power BI, Cognos, Tableau, Python and Azure; proprietary software, Accelerate; bimodal BI; and migrations. We’ve been focused exclusively on business analytics for over 20 years, and our team is large enough to meet all of your business analytics needs, yet small enough to provide personal attention. And we’re hiring: if you’re interested in joining us, we’re looking for the following positions; you can look at the job descriptions on our website and email us your resume at [email protected].
Finally, I’ve got a bunch of questions in the Q&A, but if you have anything else you wanted to get off your chest, put it in there now. I’ll leave it up for about five more minutes; go ahead and put your questions in, and as I said, I will grab that list and publish something out hopefully by the end of the week or early next week. Thanks to everyone for attending, and I will see you on the next one.