Talk with Todd: Cognos Performance Tuning Q&A

Forget Watson, ChatGPT, AI…we’ve got Cognos admin guru Todd Schuman! Todd heads up our Cognos installation, upgrade and performance tuning practice.

Catch this on-demand webinar in which Todd hosted an informal Q&A to help folks in the Cognos community clear up their performance tuning questions. Report failures, system stability issues, crashing dispatchers and any other backend performance issues were all fair game for this Talk with Todd. Tune in to hear what was discussed.

Presenter

Todd Schuman
Cognos Installation, Upgrade and Performance Tuning Practice Lead
Senturus, Inc. 

Todd has over 20 years of business analytics experience across multiple industries. He also regularly connects with the IBM product development team and has been at the forefront of the Cognos Analytics upgrade and installation process since it debuted.

Questions log

Q: What can we do to help prevent Cognos Analytics reports that get stuck in pending status?
A: The first thing you can do is check the run history for more information. The most common causes for reports getting stuck are an SMTP handoff issue, a job stuck in the notification (NC) tables or expired credentials. The corresponding fixes are to correct the SMTP configuration (TLS, SSL, user ID/password), run the NC_DROP.sql script against the notification database or renew the credentials.
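
If you suspect the SMTP handoff, a quick connectivity test run from the Cognos server will usually tell you whether the mail settings (host, port, TLS, user ID/password) are the problem. A minimal sketch in Python; the host, port and account below are placeholders to swap for the values in Cognos Configuration:

import smtplib

MAIL_HOST = "mail.example.com"      # placeholder: your SMTP mail server
MAIL_PORT = 587                     # placeholder: 25, 465 or 587 depending on your setup
USER = "cognos_sender@example.com"  # placeholder: the Cognos default sender account
PASSWORD = "********"

with smtplib.SMTP(MAIL_HOST, MAIL_PORT, timeout=30) as smtp:
    smtp.ehlo()
    smtp.starttls()                 # required for Office 365; remove if the server does not use TLS
    smtp.ehlo()
    smtp.login(USER, PASSWORD)
    print("SMTP handshake and login succeeded")

If this script fails with an authentication or TLS error, the fix belongs on the mail server side rather than in Cognos.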

Q: What’s the best configuration for Cognos Analytics DQM; should we tune one specific server for DQM/query service and route all DQM traffic there? What are the pros and cons?
A: Unfortunately, it depends. Keep in mind that 32-bit dispatchers can run both CQM and DQM, so there is no requirement to dedicate a server to DQM. The main tuning points are the query service heap settings on each dispatcher.

Q: Sometimes our Cognos Analytics is so busy that the admin cannot sign into the portal because it takes so long. Can we do anything (other than restarting the server) to resolve this?
A: This usually means the server is over-utilized. Adjust the high/low affinity requests and the number of connections on the batch/report services. Also build audit reports that show usage over time to see when and why the server gets too busy. With that information you can troubleshoot further.
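
For the “usage over time” view, you can also query the audit database directly. A minimal sketch, assuming audit logging is enabled, a SQL Server audit database reachable through a hypothetical ODBC DSN, and the standard COGIPF_RUNREPORT audit table (verify the exact table and column names in your audit schema):

import pyodbc  # pip install pyodbc

conn = pyodbc.connect("DSN=CognosAudit;UID=audit_reader;PWD=********")  # placeholder DSN

# Report executions and average runtime per hour of day: shows when the server gets busy
sql = """
SELECT DATEPART(hour, COGIPF_LOCALTIMESTAMP) AS run_hour,
       COUNT(*)                              AS executions,
       AVG(COGIPF_RUNTIME)                   AS avg_runtime_ms
FROM   COGIPF_RUNREPORT
GROUP BY DATEPART(hour, COGIPF_LOCALTIMESTAMP)
ORDER BY run_hour
"""
for row in conn.cursor().execute(sql):
    print(f"{row.run_hour:02d}:00  {row.executions:6d} runs  avg {row.avg_runtime_ms:,.0f} ms")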

Q: Our Cognos Analytics reports are running slower than usual. What is the best order for checking the system or reports?
A: The first thing we recommend is to check the Cognos Analytics server’s CPU/RAM use, then check your database server’s CPU/RAM use. Next, use audit reports to identify the times of day/week/month when the system is busy. Lastly, check IPA output and the server logs. The fixes include adjusting the dispatcher’s high/low affinity settings and connections on the report/batch services, scheduling reports to run outside the busy database window and using datasets.
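
For the first check (Cognos server CPU/RAM), a small cross-platform sketch using the psutil package can be run on the server during a busy window; the 90% thresholds below are illustrative, not an IBM recommendation:

import psutil  # pip install psutil

cpu = psutil.cpu_percent(interval=5)   # average CPU over a 5 second sample
mem = psutil.virtual_memory()

print(f"CPU: {cpu:.0f}%   RAM: {mem.percent:.0f}% used "
      f"({mem.used / 2**30:.1f} of {mem.total / 2**30:.1f} GiB)")

if cpu > 90 or mem.percent > 90:
    print("Server looks saturated: consider lowering low affinity connections "
          "or moving schedules out of this window.")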

Q: What are best practices around tuning the number of BIBus processes in Cognos Analytics?
A: BIBus processes are controlled via the report/batch services: high/low affinity settings, number of connections and peak/non-peak periods. Plan on 2-4 GB of RAM per process. Learn more: Chat question: memory limits for 32-bit and 64-bit processes.

Q: How can we identify the limit of the Cognos Analytics report count and schedule jobs on each server?
A: Use audit reports to see what each server is handling, then use routing rules to force traffic to dedicated report or batch servers.

Q: What are the advantages of the Cognos Analytics AI assistant?
A: The AI assistant lets you ask questions in natural language, discover insights and even generate dashboards automatically.

Q: Where can we get Cognos Analytics setup information?
A: See Using IPPA in IBM Cognos Analytics 11.2.

Q: Have you run into the situation where Cognos Analytics non-root users suddenly lose access to various packages with an intermittently occurring CM-REQ-4011? We receive an error that says, “You do not have permission to access the object ‘/…/…/model’” when running reports. It recurs every few months, despite no permission changes being made. What suggestions do you have to fix it other than restarting the dispatchers?
A: This sounds like an environment specific issue. Contact us if you would like help.

Q: Are there any concerns about going from CQM to DQM? We have upgraded from Cognos Analytics 10 to version 11 and there are some performance concerns. What can we do to improve the performance?
A: If you are seeing dump files, check their creation timestamps and try to determine whether a specific report was running (and crashing) at that time. Also check the Framework Manager governor settings; several governors apply specifically to DQM and can be adjusted and republished.

Q: Our Cognos Analytics Administration Console seems to only load in the morning or evening. When I try to load it during regular business hours, I get a proxy error message. Why is this happening and how can I fix it?
A: It sounds like your server is over-utilized during the day. Try adjusting the dispatcher settings: increase the number of high affinity connections and/or lower the low affinity connections.

Q: Is there a way to tell which Cognos Analytics users need to renew their credentials?
A: Not that we know of, but there are SDK scripts that can auto-renew.

Q: When a user runs a report with criteria that pulls a lot of records, how and where in Cognos Analytics can I see what the user selected in the prompts?
A: Try this: Logging Selected Report Parameters in the Audit DB (ibm.com).
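
Once parameter logging is enabled per that article, the selected prompt values land in the audit database and can be queried back out. A rough sketch; the COGIPF_PARAMETER table and its column names below are assumptions to verify against your audit schema, since they can differ by version:

import pyodbc  # pip install pyodbc

conn = pyodbc.connect("DSN=CognosAudit;UID=audit_reader;PWD=********")  # placeholder DSN

# Prompt values captured for recent report runs (table/column names are assumptions -- check your schema)
sql = """
SELECT TOP 50 COGIPF_LOCALTIMESTAMP, COGIPF_PARAMETERNAME, COGIPF_PARAMETERVALUE
FROM   COGIPF_PARAMETER
ORDER BY COGIPF_LOCALTIMESTAMP DESC
"""
for row in conn.cursor().execute(sql):
    print(row.COGIPF_LOCALTIMESTAMP, row.COGIPF_PARAMETERNAME, "=", row.COGIPF_PARAMETERVALUE)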

Q: What is IPA in Cognos Analytics? How is it different from the audit extension?
A: IPA is Interactive Performance Analyzer. It’s used to show query and render time for all objects on a page to help find where bottlenecks are in slow running reports. The audit extension is more of an inventory tool.

Q: Is it necessary to restart Cognos Analytics on a regular basis? Especially if we stop the Oracle database nightly to perform a backup. Currently we are not stopping Cognos because it recovers from loss of connection to the Oracle content manager database, but occasionally we notice weirdness with logins that restarting the service corrects.
A: It shouldn’t be required but we have seen organizations that schedule a restart weekly to help free up orphaned processes.

Q: I upload Excel spreadsheets to append (not replace) my data on Cognos Analytics and I have a lot of historical data already on Cognos. When I add a new field to my Excel, it does not let me append the data. What is the solution to this?
A: This sounds like an environment specific issue. Contact us if you would like help.

Q: We schedule reports to run at intervals daily. Frequently, batches will not run as scheduled. After Cognos Analytics services restart, then schedules will run normally. How do we overcome this type of behavior and are there any fixes for them?
A: This sounds like an environment specific issue. Contact us if you would like help.

Q: We have Cognos Analytics 11.1.7 with a load balancer (LB) from AWS ==> 3 dispatchers. The load balancing is set to cluster compatible to allow the AWS LB to manage it. How are the batch (scheduled) reports load balanced, and who manages that?
A: Cognos should manage it, but you can use routing rules to direct traffic to specific servers. Contact us if you would like help.

Q: Can we monitor Cognos Analytics with tools like Prometheus and Grafana?
A: We are not familiar with those tools, but you should be able to monitor the Cognos processes on the server.
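
One straightforward approach is a small exporter on the Cognos server that Prometheus scrapes and Grafana charts. A sketch using the prometheus_client and psutil packages; the process names being counted (BIBusTKServerMain, java) and the port are assumptions to adjust for your installation:

import time
import psutil                                            # pip install psutil
from prometheus_client import Gauge, start_http_server   # pip install prometheus-client

WATCHED = ("BIBusTKServerMain", "java")   # adjust to the Cognos processes you care about

proc_count = Gauge("cognos_process_count", "Running Cognos processes", ["name"])
cpu_pct = Gauge("cognos_host_cpu_percent", "Host CPU utilization")
mem_pct = Gauge("cognos_host_mem_percent", "Host memory utilization")

start_http_server(9108)   # scrape target: http://<cognos-server>:9108/metrics

while True:
    counts = {name: 0 for name in WATCHED}
    for p in psutil.process_iter(["name"]):
        pname = (p.info["name"] or "").lower()
        for name in WATCHED:
            if name.lower() in pname:
                counts[name] += 1
    for name, n in counts.items():
        proc_count.labels(name=name).set(n)
    cpu_pct.set(psutil.cpu_percent())
    mem_pct.set(psutil.virtual_memory().percent)
    time.sleep(15)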

Q: We are having issues trying to track down which Cognos Analytics reports use columns changed in the master database. Is third-party software the only option to see this?
A: This sounds like a good fit for the Senturus Migration Assistant.  It can give you all the reports that use a specific column (along with many other insights).

Q: Is it possible to prevent the Cognos Analytics report author from overriding the report level query hint (e.g. max row retrieve, local cache, etc.)?
A: We are not aware of a way to do this.

Q: How can we prevent a DQM report from getting stuck in query processing and generating XQE temporary files? Our XQE temp file grew to over 270 GB in three hours before cloud ops killed the query service to remove it. We were using Cognos Analytics 11.2.2 at that time.
A: This sounds like an environment specific issue. Contact us if you would like help.

Q: Which versions of Cognos Analytics are having issues with scheduled reporting? We are planning to upgrade to 11.2.4. Does this issue with scheduled reports exist in 11.2.4?
A: No, we are not aware of any issues with scheduling in 11.2.4.

Q: Is it standard to stop Cognos Analytics services before adjusting most configurations?
A: Most configuration changes require the Cognos services to be stopped and restarted.

Machine transcript

0:11
Okay, welcome everybody today’s webinar Talk with Todd.

0:16
Cognos Performance Tuning Q&A.

0:17
Today’s agenda, we’ll do some quick introductions, get into your questions.

0:24
We did receive questions both about a week and a half, two weeks ago, some just a couple days ago.

0:31
And then we will probably obviously receive some today during the Q&A panel.

0:36
So I’ll try to get them in the order they came in.

0:39
And if, like I said, if I don’t get to all of them today, I will try to post a written response on our website.

0:45
So don’t worry if I don’t get to you, we will get a response out there.

0:49
So just be patient.

0:50
We have a lot of questions coming in.

0:53
Once we get to the questions, I did want to do, if there’s time, the IPPA demo, which is a new tool similar to the zippy analytics that you may have used in the past.

1:04
It allows you to get some really specific timelines and performance around specific slow-running reports.

1:12
So I wanted to show that and then we’ll close out with some additional resources.

1:16
Touch on Senturus and then you’ll get to some more Q&A the very end.

1:21
Quick introduction on myself.

1:22
I’m Todd Schuman.

1:24
I run the Cognos installations, upgrades and optimization practice here at Senturus.

1:32
I’ve been working with Cognos since about 2000.

1:35
I’m going back to the old days of, you know, Cognos Impromptu and, you know, staying as current as I can on the new releases.

1:43
You know, currently Cognos 12, which I’ll be running some demos from today.

1:46
So if you haven’t seen Cognos 12, you get a chance to see that as well.

1:51
I was getting to some questions.

1:53
These are the ones that kind of came in, you know, 7 or 10 days ago and then I’ll kind of get to as many as I can here.

2:00
First one I got is, you know, what can we do to help prevent reports that get stuck in a pending status.

2:07
And I usually recommend, you know, trying to get some more information; pending is sort of a generic status.

2:11
There’s lots of things that can cause this.

2:13
The first thing I would do is you know what is the Cognos run history telling you?

2:17
Is it giving you any clue as to where the issue is?

2:21
I usually see three kind of main causes for this, the first one being that mail server issue which a lot of organizations run into.

2:30
You know Cognos has generated a PDF or an Excel or some kind of report output on a schedule and it’s trying to pass it out to the mail server and there’s something incorrectly configured on that mail server.

2:42
So it’s just stuck in a waiting and it hasn’t passed it off, which is going to be related to just kind of getting your mail server connection the SMTP settings correct.

2:52
Another issue is jobs can get stuck in the notification tables and depending on how you have that set up, you know you may have split off a separate notification database.

3:02
If not, it’s going to be a set of tables in your content store database and those kind of contain all the jobs that Cognos has to process.

3:11
So if Cognos crashed or things were kind of stuck in limbo in there, you’re going to need to kind of clear them out.

3:19
And there’s some scripts that I can show you that will do that.

3:21
And then the third sort of major one is just expired credentials. You’ll occasionally, due to various changes in your environment, need to just renew your credentials and it’s very easy to do.

3:31
I’ll show that as well.

3:32
So again, the first one is this SMTP, Just a couple quick notes on that.

3:39
I think I’ve got Cognos here, you know, so in your notification settings and you’re pretty limited on you know what you can configure here.

3:48
You’ve got, you know, the mail server, you may have a user ID and password this default sender and then do you want to turn SSL encryption on?

3:55
So not a whole lot of settings here.

3:57
There are some settings to turn on TLS if you’re using Office 365, it’s required.

4:02
There’s also some settings in the dispatchers under the delivery service where you can set the default sender, since some mail servers require mail to come from a specific account even if you have the account and password correct.

4:15
So again if you’re having issues with your mail server reach out in the chat, we can try to take a look at that offline.

4:21
But typically, you know, like I said, it’s very limited what we can do on the Cognos server; it’s mostly on the mail server. But there are a couple key settings that, if wrong, can prevent the schedules from going out and leave them stuck in those waiting queues.

4:33
The other one on the list was the Notification store tables.

4:39
So there’s scripts in your Cognos installation directory.

4:41
This is where I’ve got it.

4:42
I’ve got it on E Program Files, IBM Cognos Analytics 12, and in the configuration directory there is, I think it’s in Schemas.

4:52
Yeah, Schemas.

4:53
You’ve got a bunch of scripts that live here that can do various things to kind of help administrators.

4:58
This one specifically is under Delivery and they break them out into the different databases that it supports.

5:04
So if you’re on Oracle, obviously it’s Oracle.

5:06
You know, if you’re on DB2, it’s the DB2 one.

5:08
But I’m in SQL Server so I’m usually running these.

5:11
The main one here is this NC DROP.

5:13
So if you’ve got a bunch of jobs stuck in your queue that aren’t, you know, going away, you can just grab these really basic scripts here and just run it directly against your notification database.

5:27
Again, you may have that as a separate database.

5:29
I do recommend a separate notification database.

5:31
If you’ve been to my webinars, it’s definitely a thing that can help speed up your environment.

5:36
Because again, these tables are all going to be part of the standard content store if you don’t specify the separation of them.

5:42
And if you are a very, you know, delivery and schedule heavy organization, you’re going to be constantly churning and running lots of connections through your content store database.

5:52
So by separating that, it allows the content store database to focus on just the primary content manager tasks and not have to deal with all this notification traffic.

6:00
So again, run this on the correct database; it’s going to basically drop those tables, and when you restart Cognos it’s going to recreate them fresh and you won’t have any of those lingering tasks in your list there.
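
For admins who want to script this step, the NC_DROP script can be run from Python with pyodbc. A minimal sketch, assuming a SQL Server notification database behind a hypothetical DSN; the install path follows the folders shown above (configuration\schemas\delivery\sqlserver), and the simple split on GO is just enough to handle the batch separators in the script:

import pyodbc  # pip install pyodbc

SCRIPT = r"E:\Program Files\IBM\Cognos Analytics\configuration\schemas\delivery\sqlserver\NC_DROP.sql"

conn = pyodbc.connect("DSN=CognosNC;UID=cognos;PWD=********", autocommit=True)  # placeholder DSN
cur = conn.cursor()

with open(SCRIPT) as f:
    for batch in f.read().split("\nGO"):   # ODBC cannot execute GO, so run each batch separately
        if batch.strip():
            cur.execute(batch)

conn.close()
# Restart the Cognos services afterwards so the notification tables are recreated fresh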

6:13
And then finally the credentials.

6:15
I believe the buttons change around all these versions but I believe it’s under Profile and settings credentials.

6:23
There’s just this renew button here so it’s possible your credentials just need to be renewed.

6:28
You can just click this button and it’ll update them for you.

6:31
So those are sort of the three main causes, I would say, that can cause these reports to get stuck in a pending or waiting status.

6:41
If it’s none of those, again reach out separately.

6:44
Just put a note in the chat.

6:45
We could try to, you know, set up a one off to look at something more specific, but there’s so many things that can happen.

6:51
I just find that when I run into them, it’s usually one of these three.

6:57
Next question.

6:58
And there’s a lot of questions.

6:59
You’ll see there’s some overlap on these questions about configurations for DQM, tuning the DQM query service and routing traffic.

7:09
So the question was, you know, what’s the best configuration for DQM?

7:12
Should we tune one specific server for DQM Query service and then route all the traffic there?

7:17
What are the pros and cons?

7:19
And like almost everything we’re touching today, you know, it depends on many different factors.

7:25
You know, 32-bit dispatchers can run CQM and DQM, so there’s nothing preventing you from running DQM on a 32-bit server or, you know, mixing and matching them.

7:37
You know, basically since Cognos 10, DQM has been available to run, you know, on the same server as CQM.

7:45
Only when you change the report execution mode to 100 percent 64 bit, which is where’d my server go.

7:54
You know, in the configuration, there’s this setting here, report server execution mode, and you can make it 32- or 64-bit.

8:03
You really don’t get a whole lot in my opinion.

8:05
Making this change, you’re just going to basically completely eliminate the ability to run CQM.

8:10
I’d say the only real reason to do this is if you want to just block the ability.

8:15
You know, say you’re a relatively new Cognos customer.

8:17
You don’t have a lot of legacy 32 bit CQM packages and reports running.

8:22
You can prevent them from being able to create or build anything in CQM by making it 64-bit, but you’re not going to see a whole bunch of performance enhancements or any kind of major benefit by switching to 64-bit.

8:34
So I usually recommend to leave it that way.

8:36
It’s obviously much more difficult to migrate all your CQM content to DQM.

8:42
Especially if you’ve been a long-time Cognos customer, you’ve probably got a lot of stuff around, and my usual recommendation is, you know, you don’t have to go in and recreate everything, but for all new development I would recommend using DQM.

8:54
So, you know, as far as tuning that, I mean, there are settings in the dispatcher around the query service.

9:02
So if you go to your dispatchers and I’ve only got a single one here, but you can go into, you know, the query service and you’ve got you know, specific tuning settings here.

9:13
The main ones being, you know these are the two that you know, I typically focus on.

9:17
There’s an initial heap size and then the Max heap size.

9:20
So I find it helpful to bump this up. Normally, if you’re using DQM, have it start at maybe like 4 gig instead of 1000 meg, and then if you’re, you know, running a lot, you can exceed the default, which I think is 8000; you can make it, you know, 12 or 16.

9:37
If you’ve got a really beefy VM with, you know, 64 or 128 gigs of RAM, you can make it even bigger obviously.

9:45
So these are going to be sort of the main 2 settings for tuning your DQM and your Java heap.

9:52
And then you know as far as you know load balancing, you know specific servers, you can do that as well.

9:59
I don’t really see a whole lot of benefit of having you know one fully dedicated DQM and one not unless again you’re trying to move everything off CQM and you want you know things to only run in 64 bit mode.

10:13
But there’s not really a huge pro and con on separating the DQM in the query service.

10:19
I’ll touch on some routing rules.

10:20
Another question in a couple slides gets more into, you know, load balancing between batch and interactive.

10:25
But as far as the DQM, again there’s really no reason to separate them.

10:31
32-bit will handle DQM just fine; you’re not going to see any major difference by switching to 64-bit execution mode.

10:42
So the next question is: our Cognos is sometimes so busy that the admin cannot sign into the portal and it takes a long time to do that.

10:49
What can we do other than restart the server?

10:51
This is another question that came in in several different ways and a pretty common question.

10:58
So typically this usually means that the server is over utilized, that it’s not able to handle any additional requests because it’s so busy.

11:05
That includes, you know, getting into the admin screens and being able to see what’s going on, which is obviously a big problem.

11:09
You know you can’t tune or troubleshoot if you can’t see what’s going on.

11:13
So typically the best way to do that is to adjust, you know your high and low affinity requests and the connections on the batch and report services.

11:20
So if you’ve been to any of my other webinars in the past, I touch on you know how these work and I’ll go into these a little bit today.

11:26
I don’t want to go too deep into this, but essentially you know you have high and low affinity requests.

11:33
The low affinity requests are going to be the reports that are running.

11:36
So how many reports can I run?

11:39
The high affinity requests are things that are you want, you know, split second response times, navigating to the portal, you know, clicking folders, you know, getting through those admin screens, those are high affinity requests to kind of view the different things that are in there.

11:53
So most likely there are not enough connections, not enough high affinity requests, to be able to see the admin console or be able to get a page to load successfully.

12:04
So by lowering the number of connections or lowering the number of low affinity requests you’re going to be able to kind of free up and give the server a little bit of breathing room.

12:16
Another thing to do is you know if it’s just certain times of the day or certain times of the month, year I like to create audit reports that show usage over time.

12:24
You know, I usually like to create, you know, a monthly, a weekly and a daily that just show, you know, spikes and dips in usage.

12:32
And you can easily build these, you know, with the audit package, assuming you have that turned on. In your login and usage data you could see, you know, a certain time of the month, maybe the middle of the month or the first or the end of the month, close-type deadlines, when things are getting really busy, and then you kind of know ahead of time, OK, this is when it’s going to get busy, and can proactively make some changes, you know, to scale things up, scale things down.

12:54
If you have that ability, you can also see, you know, on like a more granular level in a specific day.

13:00
You know, it’s quiet until 2:00 PM and then things, you know, get crazy because there’s 35 jobs that all run exactly at 2:00 PM.

13:08
So again, it’s going to depend on every single organization how things are set up, You know what’s the activities, what the business needs are.

13:16
But by using audit reports and just monitoring the CPU, checking these high and low affinity requests and the connections on specifically the batch and report services, it should help you to, you know, find why is it getting busy and put some governance in place to make sure that it’s not locking you out of important admin screens.

13:38
Other ones, you know, similarly related.

13:41
Our Cognos reports are running slower than usual.

13:43
What is the best order for checking the system or reports?

13:47
Again, it’s most likely that there are just too many things running at once.

13:50
You need to kind of put some restrictions on the dispatcher and lower the number of requests that it can handle.

13:56
So instead of having, you know, 12 reports all running at the same time and everything running slow because there aren’t enough resources on the server, it’d be better to, you know, cap it at 8.

14:09
Yes, you have four people who have to wait for their report to run, but at least it’s going through the queue; it’s not locking up and bogging everything down.

14:15
So one of the biggest things I see with, you know, unstable environments and reports running slow is that they’ve sort of modified those high and low affinity in the connections to something much higher than that server can handle.

14:26
You know, leaving the default, the default one that’s in there is usually a pretty safe bet it won’t get you in too much trouble.

14:32
Can you may want to bump it up if you have, you know a lot more resources.

14:35
I think the default that Cognos sort of specs for is maybe 2 to 4 CPUs and like 16 or 32 gigs of RAM, which is sort of just like a basic installation.

14:44
But if you’ve got a lot more than that, you know, you’ve got 8 CPUs and, you know, 120 gigs of RAM, you can obviously bump that up, but you want to be careful how far you bump it up, because if you’re taking on way too many reports, nothing’s going to get run.

14:55
So typically, you know, check when this is happening, look at the server itself, what’s your CPU, RAM utilization look like.

15:03
It’s possible it could not be Cognos.

15:04
It could be the database.

15:05
You know maybe there’s too many requests going to the database.

15:07
If you have the ability to, you know, see that information or work with the DBA, you know, see what it looks like on their end.

15:13
You know, how are these queries running? Is the database handling them fine? Is the database overworked? And again, audit reports are really helpful here as well.

15:21
I’m going to kind of do a little bit of a deeper dive on audit reporting in a little bit.

15:25
But being able to see, you know, is it, is there a certain time that happens, you know, frequently you’ll be able to kind of see that with some of these audit reports.

15:36
You know, is it getting extra busy every day at a certain time or certain day of the month, things like that.

15:40
And then the IPA, you know, the Interactive Performance Analyzer, and the logs themselves can also be helpful.

15:47
You know, if there’s errors in there, you can check the Cognos server logs, there’s various logs in the installation directory that may point you in the right direction as well.

15:56
So you can kind of go in there and see if there’s anything, you know, glaring that tells you there’s a major issue or major error, which obviously would be sort of a one-off thing you have to deal with.

16:06
But it’s sort of a good last resort to check those logs.

16:11
So as far as you know tuning these and I mentioned the high low affinity in the connections on the batch report services, we do these call them health checks where we kind of review environments that customers are having issues with and providing recommendations.

16:26
And we had one just a couple weeks ago, a month ago, where one of the reasons it was so busy was that all the schedules were set to run at, you know, a certain time of the day. Instead of staggering them out, you know, later in the hour or later in the day, the server would just take on like 100-plus reports all at, you know, 9:00 AM every day and things would just come to a complete halt.

16:48
So by kind of moving them out, putting some, you know, some controls on the dispatcher services, it was able to kind of free up a lot of those resources, be able to kind of process through that list and get through it and then be able to handle all the live interactive requests coming in as well.

17:06
And then finally, I love data sets.

17:08
I talk about them too.

17:09
You know, if there’s performance is an issue, data sets are kind of a get out of jail free card, these are great.

17:15
You can kind of load these on a schedule, take the database and a lot of the, you know, query plans and explain plans out of the equation, any kind of network connectivity; just pre-cache the data into a very fast data set and build reports off of that.

17:29
It should be extremely quick and one of the best, you know, things I think in in Cognos 11 that they’ve added these data sets.

17:37
If you know tools like Tableau or Power BI, it’s like a Tableau hyper extract or an import, where you’re going to suck all the data into Cognos itself and it’s just optimized for reporting, extremely, extremely fast.

17:48
So that’s sort of a last resort because it’s going to be, you know, more specific to probably one report, but you can definitely have them, you know, address multiple reports if you kind of take a step back and look at, you know, here’s all these different reports that are accessing this specific data which seems to be a bottleneck.

18:03
Let’s build a data set and you know, maybe tie it together with some data modules and you can kind of build something that’s going to be very, very fast for reporting.

18:15
What are best practices around tuning the number of BIBus processes?

18:18
Again, similar questions; you know, these are all going to be controlled via the report and batch services, the high and low affinities, the number of connections; you’ve got things like peak and non-peak.

18:30
The BIBus, again, is going to be the CQM engine.

18:34
So you got, you know, 2 to 4 gigs of RAM as sort of a cap, depending on, you know, whether it’s 32 bit or 64 bit operating system, which I believe, you know, it’s pretty rare I see 32 bit anymore.

18:46
But I’m sure that they’re out there.

18:47
And if you were still running kind of 10, I think someone I saw in one of the polls, one of the Lebanons was running Cognos 8 somehow, which was mind boggling.

18:55
But and these are all dispatcher settings that you can get to.

19:00
There’s my properties.

19:04
You go to tuning, you know, I talk on this in in lots of my other webinars, but you know, there’s 87 or so, you know, different things here, which is very overwhelming.

19:19
But as I kind of say, you know, there’s a lot of repetition.

19:22
You know you just got these three number of high affinity connections for batch, number of low affinity connections for batch and number of processes for batch.

19:29
This is during non peak.

19:31
These same three exist again somewhere else, you know during peak and swap out the word batch for report and you’ve got you know 3 for report during non peak, 3 for report during peak.

19:40
So it’s very repetitive.

19:41
So you know what to look for and you kind of tune these a bit.

19:46
You can either lower or increase depending on you know what your utilization looks like.

19:49
So again, if you’re looking at your busy time and you go to your server and you see what I have open here: you know, I’ve got pretty low CPU utilization right now, and, you know, less than half of my RAM is being used, and no one’s really on this server, it’s just sort of a test sandbox Senturus server.

20:08
But if this was you know peak day and I had all my users in here and I saw things like this.

20:13
I mean I’m very underutilized so I can bump things up.

20:17
I have enough RAM and a CPU to allow the ability to increase.

20:21
You know these.

20:22
These settings here, if I come in and I see you know this is at 100% and my RAM is almost, you know, completely topped off, I probably want to scale it back down a bit.

20:30
It’s probably too high.

20:31
There’s too many things running at once.

20:33
So by kind of tuning these different groupings, you can kind of get it so that you know, you could find a happy medium in there so that things are running as best they can without overloading the server.

20:44
And you’re also not kind of wasting and running a overpowered server that’s not doing enough to justify the hardware that you’ve got on there.

20:55
How can we identify the limit of the report count and schedule jobs on each server?

21:01
I wasn’t exactly sure what this question meant, but I’m assuming you know you want to schedule jobs to run on different servers.

21:10
So you know, I touched on the audit reports and then routing rules so the audit reports.

21:18
Do I have it in here?

21:20
There it is, but with routing rules.

21:25
If you haven’t heard of these or haven’t used these, these are really useful.

21:29
You can again this assumes you have a multi tiered environment so you’ve got at least two dispatchers or more.

21:35
But you can go ahead and say, you know, I want to run this here, and say you have three dispatchers.

21:39
You can base it on packages, on groups or on roles, and you could say, you know, if anybody runs this package I want it to run against this server, or that package goes against that server.

21:49
Or you could split it up in various different ways and just direct traffic, route traffic, to different servers.

21:54
And one of the best, the best ways I’ve seen to do this is if you know you’ve got a you know your environment and you can’t just schedule all the jobs, you know off hours non peak hours.

22:04
And you can’t just say I’m going to start scheduling all these reports at 6:00 PM and let them run until 6:00 AM.

22:10
Your job requires or the organization requires, you know reports getting sent out all the day, all through the day.

22:15
There’s no downtime or ability to kind of take that away.

22:18
So because of that it’s very, very busy and the people who actually are going in there and trying to run reports are waiting a longer time.

22:24
They’re seeing that spinning blue wheel because, you know, a lot of the resources are chewed up by all these things running in the background and sending out emails and jobs.

22:33
So to kind of offset that you can dedicate a specific server to be, you know, batch mode only and route the traffic to that.

22:40
So that it’s if it takes a little bit longer for someone to get an e-mail in their inbox, you know, they’re not going to notice as much as a person who’s sitting there staring at that blue wheel and they’re getting frustrated because it’s taking a long time.

22:50
So by splitting it up, you know, they don’t see any degradation of performance because it’s completely optimized just for that user type, and you have all the jobs in the background running and going to a specific server.

23:04
So routing rules are going to be a great way to do that.

23:06
And you know here’s a sort of a screenshot of a customer we did a little while ago that has this in place.

23:12
So you can see, I blurred the IPs, but the ones that are kind of highlighted in red here are dedicated batch servers, and you can see the different request types here, you know, 153,000 and 180,000 going to the batch service, and very minor report service traffic going through, you know, 900 or so.

23:31
Whereas the ones that don’t have the red only have, you know, a couple thousand going to batch, and you have 319,000, you know, 325,000 going to report-specific services.

23:41
So this kind of does that what I’m talking about you’ve got dedicated batch dedicated report and this kind of makes everybody happy without you know causing performance to go down.

23:50
So you can see, you know, with the IP addresses, what servers are doing what and how many processes and how many requests each one is handling.

24:01
So by doing so, you can kind of get a better view of what’s going on with each server, you know what kind of requests is it handling and then if it’s an issue, you can set up routing rules to kind of push it one way or the other.

24:18
The next question, again, I am probably not the best person to answer: you know, what are some of the advantages of the AI Assistant?

24:25
This is more for, you know, a data scientist type role.

24:28
It’s kind of each version of Cognos is more and more, you know, this is sort of the hot new thing.

24:33
This artificial intelligence, ChatGPT-type thing, to be able to ask questions, get insights that maybe wouldn’t be obvious to you.

24:43
You know it’s kind of the new Cognos 12.

24:45
This is front and center baked right into the front here you can just ask questions and things like that.

24:51
It can build dashboards, but you can just ask it to, you know, create a dashboard for you, which is kind of cool.

24:59
Whether or not you know it’s going to be helpful, I don’t know.

25:04
But you can, you know, open up a package.

25:07
It can provide some insights.

25:09
You can say, build a dashboard, I believe.

25:11
I think hopefully this works dashboard.

25:16
I know they’ve also added a lot of like natural language to this as well.

25:30
So you can you know once you’ve got a like a data set or you know model or something loaded you can ask questions like show me you know year over year analysis of you know the products X in revenue.

25:44
And I think it can like build things like that for you.

25:46
But right now, you know, just with a quick request it’s built me this three-tab dashboard, and I don’t know if there’s anything really helpful here, but it’s kind of cool that it did this for me and I can just kind of go through and see, you know, different things that it thinks may be relevant from, you know, the quick scan that it did.

26:07
So expect to see a lot more of this you know as Cognos 12 you know gets more releases but it’s got value.

26:15
Again, as far as me, you know, I’m more administration, report development, modelling, not so much this data scientist type role, but if you are, I know it’s got some value; I’m just not really the right person.

26:27
I know we have a Cognos 12, you know what’s new webinar coming up in a few weeks and you know that would be a good one to ask a question.

26:36
If you’re interested, folks from IBM will be presenting that; they can probably give you a better demo or give you more information about the road map for where this is headed.

26:43
But I know it’s got a lot of hype, and this is sort of the new area; you can see here they’re putting this front and center in Cognos 12.

26:52
So it’s the top, top thing here.

26:54
It’s also baked in up here so it’s everywhere Okay.

27:01
I wanted to just show this performance assistant, the interactive performance analyzer, which is built into the reporting tool.

27:14
There’s an add-on in Cognos 12 now called the IPPA which can give you some really cool information, if you have slow-performing reports, as to kind of where the issue is.

27:27
So one thing to know and this drove me crazy when it first came out, they kind of they teased this as something that was new in Cognos 12.

27:34
So there’s a link how to set it up.

27:37
It’s not correct.

27:38
This is actually the correct link.

27:39
It’s not done in the area they tell you; it’s actually set up in a different area.

27:46
So they tell you in the first document if you kind of search for it that you need to put it into this configuration area which there’s some you know advanced settings that you can type in here to do different things.

27:58
But typically you don’t have to do this.

28:00
They’ve had some things that had to be set here in the past like to hide banners and stuff like that.

28:05
So this is where they were saying to turn it on.

28:07
It’s actually not correct.

28:08
It actually has to go into the dispatcher settings, like this: disp zippy IPA enable true.

28:21
So you have to put that in.

28:22
As soon as you put that in, it’s going to create about 10 new tables in your notification database if you have one, otherwise it goes to the content store.

28:32
So again, another reason to separate the notification database.

28:35
You’re going to have even more tables living in there now if you turn this on, but once you’ve got it on they give you a bunch of packages and built in visualizations.

28:47
Let’s see here and some built in reports that are all going to be drilled through related.

28:54
So it’s what’s cool is that you can go and see.

28:58
Let me run one of my samples here.

29:01
It was sales growth year over year.

29:04
So this is just one of the standard Cognos reports.

29:12
So the regular IPA has been here for, I want to say, since 11.0.

29:18
But you can just check this box under Run options and say include performance details and if you go ahead and run this now it’s HTML only to get this information.

29:29
If I run it, it’s going to show me the amount of time it takes to render and query the data for each one of these objects.

29:39
So I’ve got you know, three different objects.

29:41
I’ve got 2 visualizations and a I guess a list report here.

29:45
And you could say I’ve got execution times.

29:47
It took about 3 1/2 seconds total, and out of that, to render the chart

29:54
it took, you know, less than half a second, and to query that data

29:58
it took three seconds.

30:00
Down here, you know, .2 seconds to render and .8 seconds to query because it’s using the same data.

30:09
And then finally over here I’ve got a list report.

30:13
It took, you know, 33 milliseconds to render, and it took about 3 1/2 seconds to query this data.

30:18
So you got a slow running report.

30:20
That’s a good start.

30:21
You can kind of see: is there a specific object, you know, a list, a crosstab, a visualization on the page, where the numbers just jump out at you?

30:29
Yeah, you have like 3 1/2 seconds here or something is like 10,000 seconds.

30:32
OK, that’s a good place to look.

30:34
What makes this even better is you have this page, total page, you know, here’s what the whole page took.

30:40
This used to just be a dead link.

30:42
Now because I’ve got the IPA enabled, this is a drill through and I can just click on it and it’s going to give me a ton of information that’s even more helpful.

30:50
So I can see exactly.

30:51
Now you know what all the different services are.

30:55
I’ve even got information about the process ID that ties to the process ID on the server itself.

31:01
So I can see this BI Bus service took about 13, I’m sorry, 348 seconds, probably milliseconds.

31:09
And I can go and see this process ID 13520.

31:20
Thirteen.

31:23
And here it is so I can actually see the entire process.

31:26
You know it tells me exactly what is running on the server itself.

31:31
So these are all tied to specific processes on the server, and you can see, you know, based on that, where the bottleneck is.

31:39
So it’s actually the dataset Java service that’s taking, you know, the longest, about 6 seconds.

31:43
I can drill into this even further and it’s the query service.

31:48
I can go into the query service and just kind of keep going down and down, and I can see, you know, it’s the JDBC result set that took the longest amount of time.

31:56
I’ve got things in here like the expression metadata.

32:00
It’ll show me all the different, you know, fields that it’s pulling and where they’re you know, where they’re coming from as far as the model.

32:06
So I’ve got, you know, some macros in here, and it kind of ranks these based on the time so you can see all the different fields in there.

32:13
It’s also got back one page.

32:16
I think it’s the physical model.

32:19
It’ll show you the tables that it’s going out and getting.

32:22
So these are all tables at the actual, you know physical tables and the time it takes you know to access these.

32:27
So you can really go down pretty deep into this and see where everything’s at.

32:32
You can also look at like a Gantt chart that shows the entire report itself and everything that it did.

32:40
It might be kind of hard to read, it’s a lot of information, but you can see you know the amount of time it takes from beginning to end.

32:49
All the different processes that happen, you can filter on them.

32:52
You know, you could say I want to see just you know the BI bus service and then here you can kind of mouse over and see you know at first it was running the specifications and getting that, then it was it’s hard to read.

33:09
Sending the request to the server, executing the thread, you know, it gives you a very detailed view of everything that it’s doing in order.

33:17
So kind of help you find.

33:18
You know maybe there’s a something in here that’s really slowing everything down and you can even get you know nice interaction with all these different prompts to focus on specific things like execution, assembly, all the different things that it does the query.

33:33
So and this I think is a really nice feature that’s baked in.

33:37
Again, all you have to do is just sort of add that dispatcher setting and go and, you know, import the package that has, you know, the model and the different visualizations and the reports.

33:48
And then when you’re ready just turn on that IPA and you can just drill into reports.

33:52
So if you’ve got reports that have been, you know, driving you crazy and you can’t seem to figure out why, it’s very easy to set this up, in my opinion.

33:59
And you can really get a good look at you know the explain plan in Cognos and all the different things that it’s doing and try to pinpoint, you know, here’s where I think the issue is so pretty valuable, a nice feature that they added especially if you’re, you know constantly tuning and looking to kind of troubleshoot some of these things.

34:18
So jump back to the slides and then get to some additional questions.

34:24
And so yeah, make sure you use this link, this deck will be posted.

34:28
But this one isn’t going to help you.

34:30
This is the one you want.

34:33
Again, just kind of some screenshots of how to do it.

34:34
You can drill into the detail, the different color coding.

34:39
The blue is Java, or the query engine, and then you’ve got green, which is going to be your BI bus, and then yellow is dataset services.

34:48
So you can kind of get a nice color coding as to what components are running, what’s taking up the longest time and how long things are running for okay.

34:58
Let me go grab some more questions from today.

35:04
Let’s see, I had one that came in that asked.

35:07
Have you ever run into a situation where non-root users suddenly lose access to various packages with a random, intermittently occurring error?

35:16
You don’t have permission to the object.

35:18
It occurs every few months despite no permission changes being made.

35:24
I don’t know if I’ve run into that before.

35:29
The only thing I can think of is if you’re doing deployments from, you know, dev to QA to prod, something like that.

35:35
It’s possible that when you promote something up it has different permissions.

35:41
Other than that I’d have to sort of see exactly what was happening.

35:44
I don’t know what that could be.

35:47
And then another CQM DQM.

35:50
We run both CQM DQM on a 32 bit server which appears to be generating dump files.

35:55
Are there any settings or advanced parameters to tweak on dispatchers running in CQM 32 bit execution mode that may reduce DQM triggered dump file generation?

36:07
Again, it’s kind of a specific one I will say you know the dump files that you get, they do usually have a time stamp.

36:16
I find one of the best ways to sort of figure out why it’s happening is to look at either the audit log or, if you have it, your, you know, prior activities.

36:26
Do I still have Cognos open?

36:29
You know, you can go to past activities, if it’s not too far back; you know, go to a specific date and see if you can find something that was running either at or during the time of the crash.

36:41
Sometimes it’s just a bad report.

36:42
You know, something was just not built correctly.

36:45
It’s got, you know, custom SQL.

36:47
It’s got, you know, cartesian joins.

36:48
It’s a lot of different things that could happen.

36:51
If it’s not a specific report, it just seems like it’s more than that.

36:55
There are some settings in Framework Manager.

36:59
Let’s see.

37:00
So Framework Manager.

37:05
I mean people ask about the DQM conversion.

37:08
If you haven’t done it, you’re not familiar.

37:10
I mean it’s as simple as where are my properties.

37:15
You know, if you have a package that’s currently, you know, in CQM, this is set to compatible mode, and all you have to do is just change it to dynamic mode and republish it.

37:30
You also need to have a JDBC connection.

37:34
That’s really all you have to do to convert to DQM.

37:36
You know, republish your package, and for the data source that it points to,

37:42
Make sure that that is set up for JDBC.

37:45
That’s.

37:46
That’s all it is.

37:48
The real challenge is just the testing.

37:51
It’s much more strict.

37:52
There’s certain things that may run fine in CQM that don’t run as well or at all in in DQM.

38:01
There are as well some governors in here that you may need to tweak.

38:05
If you’ve ever looked at this screen, down at the bottom here, yeah, this whole last block of stuff here, you notice they have DQM kind of in parentheses.

38:19
These are all specific optimizations or governors for DQM.

38:25
So you may want to look at some of these and kind of tweak them a little bit and see if any of these are sort of the issue.

38:31
Again, I don’t know exactly what you’re seeing or what’s causing it, but you can change these sort of governors possibly to get some better performance or maybe to you know fix things that are not running and these are sort of the defaults, but these can be easily changed and republished.

38:47
So maybe try you know to change one of these republish and see if this fixes anything for you.

38:52
But there are some specific DQM governors in here that may or may not you know help fix your issue.

39:01
Next question, again about gotchas going from CQM to DQM: we’ve gone from version 10 to 11.

39:09
There are performance concerns.

39:10
We attribute this to the conversion.

39:13
Is there anything we can look at at a global level?

39:15
So basically the same response, you know, you know outside of you know knowing and being able to see the models.

39:22
I would check the governors.

39:23
I would just kind of, you know, create some basic reports off the model; you know, do they run fine?

39:30
Is it the report itself?

39:32
There are certain, you know, possibly badly designed reports that could be causing an issue.

39:40
It’s hard to know exactly what the issue is.

39:44
But I mean technically, you know, a DQM package should run fine.

39:48
There just must be something in there, some logic or the way you modeled it, some bad joins, there’s some bad calculations, something, something either in the model or report that’s causing an issue.

39:58
And without, you know, being able to kind of do a deep dive into that, I can’t really think of anything else offhand that we could use to fix that.

40:09
Another question was about.

40:13
Affinity settings, I think I kind of covered that.

40:15
You know, there’s the low and high affinities; low is going to be the one for reports, they’re slower, you expect them to kind of have a slower response time, whereas high affinity are things like, you know, me clicking through here, navigating these, you know.

40:28
So again, back to that first question about, you know, this locking up: most likely you don’t have any high affinity requests available to let these run.

40:35
You should be able to kind of quickly navigate through.

40:39
I mean sometimes these take a little while.

40:40
If you have a lot of stuff in the queue, it has to go and query the content store database and show you, you know what’s in there.

40:46
And it’s also possible your content store database is just overwhelmed.

40:49
You know you don’t have enough resources to kind of handle all the connections and queries to that.

40:53
So if the screens that are not, you know, static like this one or that one are taking a long time to load, you know, the statuses or the schedules,

41:04
It’s possible that the issue is the content store itself.

41:06
You know, do you have a lot of bloat?

41:08
Do you have enough connections?

41:09
Is the database itself you know properly sized to handle?

41:13
You know the amount of usage you have.

41:16
And as far as bloat, I wanted to talk about this as well.

41:17
I forgot, there are some tasks here

41:20
you can run to clean things up, like the consistency check.

41:26
If you haven’t heard of this, this will go and scan any profiles that are created by users.

41:32
If they no longer exist in your LDAP or Active Directory, it’s going to find a mismatch, and you can have it either just find them, or find and fix them.

41:40
And it’s good to fix that.

41:41
And that’s going to remove any kind of old references.

41:44
It’ll clean up their My Folders; if they have a bunch of junk, you know, in there, that’ll clean that up.

41:50
These content removal and retention rules are also useful, because most likely, you know, people don’t set the number of versions, the number of outputs you can create.

41:58
So I’ve seen, you know, Cognos shops that have had, you know, a user for 5-10 years who didn’t even know they were doing it.

42:06
They were running, you know, 100 page PDF and saving it unlimited versions every day just in the content store database.

42:13
And so there was this, you know, huge amount of output that was just stored and just bogging down the content store database, because there was something running and saving, over and over and over again, these snapshots of the same report, and it was never being cleaned up.

42:27
So you can go and run these scripts and say, you know, remove anything from six months ago or remove anything that’s got more than five versions.

42:34
They’re pretty straightforward.

42:34
You can kind of go to these different Wizards and just kind of create these scripts and run them on a schedule to kind of keep things from getting too outdated or people stacking up, you know, too many saved output versions.

42:45
Those can help a lot.

42:46
There’s also a script that can kind of give you that information, if you’re not sure whether you’re a candidate for this or something like that; there’s a script in that scripts folder.

42:56
I think it’s under content.

43:03
Yeah, you can get these profiling scripts, and this will basically give you a complete dump of all the different object types you have.

43:10
You know how many reports, how many query reports, how many data source connections.

43:15
It’ll also show you the number of file outputs that are saved in your content store database.

43:20
So if you do see this being really large, or your DBAs complain, you know, hey, your content store database is, you know, 300 gigabytes, it shouldn’t be that big.

43:27
It shouldn’t be that big.

43:30
You know, you can run some scripts to clean it up.

43:31
So this will kind of tell you what’s out there.

43:33
And if it is really large, you can run some of those scripts straight out of here to kind of clean things up.

43:38
That will actually really help a lot, I think if you’ve, you know, got a lot of junk in there.

43:45
Another question about the admin console seems to only load in the morning or evening.

43:52
When I try to regular business hours, I get a proxy error message.

43:57
Again, most likely you’re just trying to access it during a busy time and it doesn’t have enough connections to make that happen.

44:05
So I would just say try to tune those dispatcher settings, possibly lower the low affinity connections or increase the number of high affinity connections, and see if that helps.

44:17
Let’s see another question here about physical machines versus virtual machines for big data.

44:25
I don’t know anything specific about big data.

44:30
I, you know, physical machines and virtual machines, I feel like the physical machines are becoming more and more rare to see everything’s you know, either in you know like an AWS or Microsoft cloud or you know like a company cloud these days and they’re all VM’s.

44:49
I do know, you know there is a little bit of performance loss, you know with those, but nothing substantial.

44:54
They’ve come a long way in, you know last 10-15 years I think we used to allocate you know at another like 20% for VM, you know, so you know 1.2 virtual cores would equal you know, one physical core, something like that.

45:08
I don’t know if that’s still the case.

45:09
I haven’t seen any sort of white papers or documentation around that.

45:12
But so there’s big data.

45:14
I don’t, I don’t know anything specific about that.

45:16
I wouldn’t think there would be any sort of issue in a physical versus virtual, but again, I don’t know that much about that topic.

45:24
So I’d have to look into that a little bit more.

45:26
If you have more information about that, you know, what type of big data or what exactly you’re doing, I could try to take a look at that.

45:36
Okay, let’s go to the Q&A panel now.

45:40
Let’s see, I’ve got question about dynamic cubes.

45:46
The cube is slow to reload after new data has been loaded into the tables; before the data is loaded it takes 4 to 6 minutes.

45:53
After the data is loaded it can take over 30 minutes to reload the cube and the aggregates.

45:57
How can I speed this up and make it consistent?

46:01
I honestly haven’t used dynamic cubes in several years.

46:05
I don’t know exactly why that would happen.

46:11
Yeah, I know that tool hasn’t been deprecated, but they haven’t really invested in it or made any changes or updates to it in a while.

46:23
So we’ve kind of stopped using it because it was difficult to administer.

46:30
I’m happy to take a look offline, but I don’t know offhand what exactly is going on. Those dynamic cubes are sometimes challenging to tune, and dealing with the reloads and the aggregations on there can be a bit confusing.

46:43
So if you’d like, just reach out and we can try to set up a one-off look at that.

46:49
Another question on the audit extensions. If you attended one of my other webinars, I talked about this.

46:57
These used to be something you had to download from the IBM website, and there was a whole configuration process.

47:05
So now they’ve made it a little bit easier.

47:08
I believe it’s now, where is it, just straight in the installation directory under samples, audit extensions.

47:17
So you don’t have to download it, it’s already here.

47:20
There used to be this WAR file that you had to create with some scripts.

47:24
I believe it’s already built now, and you can just drop it into your wlp dropins folder here.
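
Just to make that step concrete, here’s a small sketch that copies the prebuilt WAR into the Liberty dropins folder. The folder and file names below are my assumptions from memory, placeholders rather than gospel, so verify them against the docs that ship with the samples before using anything like this, and restart the Cognos service afterward.

```python
# Sketch: drop the prebuilt audit extension WAR into the Liberty dropins folder.
# Every path and file name here is a placeholder/assumption; confirm the real
# names against the documentation shipped with your installation.
import shutil
from pathlib import Path

install_dir = Path(r"C:\Program Files\ibm\cognos\analytics")              # adjust to your install
war_file = install_dir / "samples" / "audit_extension" / "AuditExt.war"   # hypothetical name
dropins = install_dir / "wlp" / "usr" / "servers" / "cognosserver" / "dropins"

shutil.copy2(war_file, dropins / war_file.name)
print(f"Copied {war_file.name} to {dropins}; restart Cognos to pick it up.")
```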

47:31
I think you have to restart.

47:32
I think there’s documentation here as well.

47:34
If I go to docs, yeah, the whole setup’s here.

47:52
You know, I’ll do a blog post or video post on how to set this up, but it’s gotten much easier.

48:01
This is probably a little bit outdated, but this is what they provide.

48:03
It was sort of an unsupported thing that they used to give you.

48:06
But this is great because it sort of fills the gap in the regular audit reports for the things that haven’t been run.

48:16
So audit reports are only going to cover what’s actually been run.

48:17
So if I have 100 reports and my users have only run 20 of them, and I run the audit reports to show me what the usage is, I’m only going to see 20 reports.

48:26
I’m not going to know about the rest; that data doesn’t live anywhere.

48:28
It doesn’t exist because those reports have never been run.

48:31
This is going to give me a full dump of all the objects, all the reports, all the users.

48:36
So even if users haven’t logged into the system, the accounts are still there in Active Directory.

48:41
So you can kind of get a lot more information out of this.

48:43
That’s not currently possible with the standard audit report.

48:47
So this is sort of the audit extension.

48:50
So yeah, I’ll try to do a separate webinar or you know, a blog post on setting this up and just kind of reviewing how it all works.

48:56
It’s a bit more involved than the standard audits, where you just create a database and turn the logging on.

49:02
There’s some tuning and there’s some things you have to kind of run and schedule.

49:05
But it’s not terrible, and you’ve got these different deployments that you can just import that have some predefined reports.

49:13
But the main thing is to get this WAR file published and then go and make a couple of settings changes in the tool itself.

49:22
Let’s see: if the user ended up running the report with criteria pulling out their records, how and where in Cognos can you see what the user selected in the prompts?

49:35
So for that one, typically you have to see it on the database side.

49:42
I know that you can put something into the data source connections, let’s say, like this Great Outdoors Sales one.

49:49
I think this is where you put it.

49:52
Haven’t done this in a while.

49:59
I think you can put some kind of values into these open and close session commands that will highlight the parameters.

50:11
I’ll have to double check.

50:12
It’s been a long time since I’ve done this.

50:14
Outside of just being able to see the query running on the database, where it’s usually got built-in where clauses.

50:19
Where fiscal year was 2022 and product line equals camping equipment, something like that.

50:25
It’s difficult to see that information in Cognos, but I know there are ways to do it.

50:30
I believe it’s using these session commands, but it’s been a while.

50:33
So let me research that a little bit and see.

50:39
Let’s see, someone is asking about uploading spreadsheets to append, not replace, data in Cognos.

50:44
When I add a new field to Excel, it does not let me append the data.

50:48
What is the solution?

50:50
I have a lot of historical data already in Cognos. I haven’t done a whole lot of Excel spreadsheet reporting myself.

51:00
I might have to take that one offline as well and see what the deal is with that.

51:07
So you’re adding a new field and it’s not picking it up.

51:10
Yeah, I’ll have to check and see. What version are you on?

51:14
Can you put that in the chat as well?

51:16
Just so I know; it might be something they’ve addressed in a newer version.

51:18
If you’re not current, maybe it’s something that was addressed in 11.2 or in Cognos 12.

51:23
I’ll have to check and see on that.

51:30
I’ll do like one more here.

51:31
Then I’ll wrap up with some of the closing slides.

51:34
Just joined the webinar.

51:35
I want to know what version of Analytics…we are having issues with scheduled reporting.

51:40
I don’t know if I understand the question.

51:44
Let’s see. Yeah, you might need to just give me a little more information on that one.

51:54
I mean, Cognos has been able to support scheduled reporting since way back, Cognos 8, things like that.

52:00
So it’s not anything new, but I just don’t understand the phrasing of the question.

52:05
So if you could please reword that. Another question on IPA: how is it different from the audit extension? You kind of touched on that.

52:13
You know, the audit extension’s going to give you much more information on just the overall environment itself.

52:19
IPA is going to give you specifics, like we saw: runtimes, explain plans, what different processes are running on the back end on the server to get that report out to the user.

52:35
The audit extension’s going to give you much more information about the objects, the capabilities, the users, things like that that aren’t easy to get out of the tool currently.

52:46
I’ll do one more here.

52:50
Here’s the one.

52:51
Is it necessary to restart Cognos on a regular basis, especially if we stop the Oracle database nightly to perform a backup?

52:56
We are currently not stopping Cognos because it recovers from the loss of connection to Oracle, but the DBA notices weirdness with logins occasionally that restarting the service corrects.

53:06
So you shouldn’t have to restart Cognos.

53:08
I know a lot of people do, but if your Cognos environment isn’t running slowly, you don’t have errors and everything is working correctly, you shouldn’t really have to.

53:21
I do know occasionally some report processes, BIBus processes or Java processes, can get hung.

53:28
Sometimes someone will kick off a long job and cancel it halfway through, and things can get stuck in there, those types of things. If that’s happening a lot, then it might make sense to schedule maybe a weekly restart.
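
If you do land on a scheduled restart, something as simple as the sketch below, run weekly from cron or Task Scheduler during a quiet window, is usually enough. It assumes a Linux install under /opt/ibm/cognos/analytics and the standard cogconfig.sh -stop and -s (silent start) options; adjust the path, flags and timing for your own environment.

```python
# Sketch of a scheduled Cognos restart (e.g. run weekly from cron).
# Assumes a Linux install under /opt/ibm/cognos/analytics and the
# standard cogconfig.sh flags: -stop to shut down, -s for a silent start.
import subprocess
import time

COGCONFIG = "/opt/ibm/cognos/analytics/bin64/cogconfig.sh"

# Stop the service and give any hung BIBus/Java processes time to exit.
subprocess.run([COGCONFIG, "-stop"], check=True)
time.sleep(120)

# Start back up unattended.
subprocess.run([COGCONFIG, "-s"], check=True)
```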

53:42
But you know, this is the one that I run in the sandbox here.

53:46
Obviously it’s not getting nearly as much use as your environment probably is, but it’ll stay up for months at a time with no issues.

53:57
So unless there’s something going on, like you said with stopping the database, you shouldn’t really have to do it.

54:03
But it can’t hurt.

54:04
I know that people are very religious about scheduling a weekly restart on their servers, but you shouldn’t have to do that.

54:10
And if you are seeing things like that, just reach out offline and we can take a look and see if there’s something specific that’s causing some connections to hang or some issues with Oracle.

54:21
But I don’t know anything offhand that should be causing that.

54:24
Unless it’s possibly a driver version issue or maybe just an incompatibility between the versions; I’d have to see. Okay.

54:32
So there’s a couple more questions.

54:33
We only have about 3 more minutes here.

54:35
So let me just go through the wrap-up slides and then we’ll end it, and I will follow up with a posted response to the ones I didn’t get to today.

54:50
Okay.

54:50
So just a quick plug here: we offer different upgrade support levels plus additional services to ensure the success of your upgrade.

54:59
We do hundreds of upgrades, installations and configurations every year.

55:02
So if you need any help definitely reach out.

55:07
We’ve also got blogs and on demand webinars and lots of other Cognos tips, so if this was helpful today and you want more, there’s plenty of free information out there.

55:17
Just go to Senturus.com; the URL on the screen takes you straight to the Cognos content.

55:22
And there are dozens and dozens of webinars going back 10-15 years on all kinds of different topics that may be helpful.

55:32
Additional resources.

55:33
We’ve got free resources on our website in our knowledge center, including our product comparisons.

55:37
We’ve been committed to sharing our BI expertise for over a decade.

55:40
Just go to Senturus.com resources.

55:43
We’ve got some upcoming events.

55:46
External tools for Power BI Desktop, that’s Thursday, August 10th.

55:50
Chat with Pat: working with time and dates in Power BI, that’s August 17th.

55:54
Power BI Report Builder and Paginated reports August 24th.

55:58
And then another Chat with Pat: Framework Manager modeling, Wednesday, September 20th.

56:02
So I know I kind of touched on some of the FM stuff today.

56:06
If you are doing DQM conversions, that’s going to be sort of a deeper dive into FM modeling.

56:11
So that might be a good time to bring some questions, or play with those governors I mentioned and have some follow-up questions for Pat; he can probably help you with those.

56:21
And then just a little background on Senturus: we concentrate on BI modernizations and migrations across the entire BI stack and provide a full spectrum of BI services, training in Power BI, Cognos, Tableau, Python and Azure, and our proprietary software, Accelerate, for bimodal BI and migrations.

56:38
We’ve been focused exclusively on business analytics for over 20 years. Our team is large enough to meet all of your business analytics needs, yet small enough to provide personal attention. And we’re hiring.

56:49
If you’re interested in joining us, we’re looking for the following positions; you can look at the job descriptions on our website and email us your resume at [email protected].

57:01
And then finally, I’ve got a bunch of questions in the Q&A.

57:04
But if you have anything you wanted to get off your chest, just put it in there now; like I said, I will grab them.

57:09
I’ll leave it up for about 5 more minutes, so go ahead and put them in there, and I will grab that list and, as I said, publish something out hopefully by the end of the week, if not early next week.

57:22
Thanks to everyone for attending, and I will see you on the next one.

Connect with Senturus

Sign up to be notified about our upcoming events
