Cognos content stores are famously stuffed with clutter. Valuable content is mixed in with decades of duplicative or unused items. When looking to migrate Cognos reports or streamline system performance, de-cluttering that pileup is key.
Determining what should be kept, tossed or simplified requires an in-depth understanding of the content. But native Cognos tools simply do not provide a sufficiently thorough evaluation of reports and models, nor do they reveal the underlying complexities that help map content to the new platform or data source.
In this on-demand webinar, learn how you can get a comprehensive, automatically generated inventory of Cognos content. We demo the Migration Assistant for Cognos and show how it quickly cuts through and catalogues content so you can:
- Sort out valuable content from unnecessary bloat
- Dramatically reduce the content that needs to be migrated (by 97% for one client)
- Identify content impacted by data source changes
- Streamline the re-creation of content in a new platform
- Provide the insights needed to accurately budget
Practice Lead, Installations, Upgrades and Optimization
Todd is a dyed-in-the-wool data nerd with 20+ years of analytics experience across multiple industries. BI tool multi-lingual, Todd is fluent in Cognos, Tableau and Power BI.
Peter is a seasoned data architect who is passionate about utilizing technology to design and implement solutions for clients.
Thank you for joining us, and welcome to the Senturus webinar series.
Today’s topic is, Automate a Cognos Content Inventory to Speed up Cleanups and Migrations.
Please feel free to use the GoToWebinar control panel to make this session interactive.
We’re usually able to respond to your questions while the webinar is in progress.
If we don’t reply immediately, we’ll try and cover it either in the Q&A session at the end of the presentation or via a written response document that we’ll post on Senturus.com.
The first question we usually get is: can I get a copy of the presentation? The answer: absolutely. It's available on Senturus.com.
If you go to the Resources tab and then the Knowledge Center, you should find today's presentation along with dozens of other presentations.
Or you can also just click on the link that was posted to the GoToWebinar Control panel and pull it down from there.
Today’s agenda, we’ll cover some introductions.
Why are we here today? Discuss some challenges.
Review how to create an inventory of your Cognos content. We’ll do a live demo.
We’ll wrap up with a quick Senturus overview and discuss some additional resources, and then we’ll open it up to a live Q and A session.
Joining me today is my colleague, Senior Consultant Peter Jogopulos. Peter is a seasoned data architect who is very passionate about utilizing technology to design and implement solutions for clients.
I’m your host today, Todd Schuman, I run the install, upgrade and optimization practice at Senturus.
You may recognize me from box office smash hits such as Doctor Cognos and the Metadata of Madness, and The Tableau Chainsaw Massacre.
Jokes aside, let’s get into it. We have a few polls to get a pulse on the audience today.
Poll number one. What are your goals for Cognos?
You need to change data sources, move to the cloud, migrate to a new platform.
You're trying to clean up your Cognos environment, or something else not listed here.
Take a second, and go ahead and just do a quick vote.
OK, we got about 80% in.
I’m not seeing them.
OK, we've got about 85% wanting to clean up the Cognos environment, followed by a tie between changing data sources and migrating to a new BI platform at 35% each. Moving to the cloud, 25%, and other, which would be interesting; if you want, put in the chat what "other" is.
Great. Thank you.
OK, we've got one more poll: how many Cognos reports does your organization have? 0 to 500, 500 to 1,000, 1,000 to 10,000, over 10,000, or maybe you don't know, which may be part of the reason why you're here today.
Give it a couple more seconds. And, about 75% in.
OK, looks like the majority of you, about 38%, have somewhere between 500 and 1,000.
Right behind that is 1,000 to 10,000, and a little bit below that, we've got 0 to 500.
So, good amount of content out there in your Cognos environments.
Thank you for the feedback.
Let's get into today's topic. The main question is: why are we here?
Migrations. Based on some of the feedback, most of you are involved in preparing for at least one of these types.
One of the most common is data source changes: are you switching from one vendor to another, SQL Server to Oracle, Oracle to Snowflake?
The number of databases available today is really something else compared to a couple of years ago. Maybe you're moving from on-prem SQL Server to a cloud-based instance.
So, lots of different challenges in migrations from the database aspect. We've also got platform migrations.
Maybe you’re moving from Cognos to Tableau or Power BI or, something else, or maybe you’re moving from one of those platforms to Cognos? Lots of tools available today, as well, each, with their own strengths and weaknesses.
Whenever you’re moving, you’re going to need to recreate models, reports, and make sure you capture that business logic that’s embedded in those old reports.
And then, finally, it looks like, a good amount of you today aren’t so much doing a migration, but more of an internal assessment cleanup.
If you've been using Cognos for a long time (we've got customers with 10, 15, 20 years of Cognos usage), after that much time it's difficult to get a grasp on what's out there. How many reports do I have? Are people using them? Are there multiple versions of duplicates floating around? How can I get this information out of Cognos and reduce my cost of ownership?
So, what are the challenges? No surprise here, migrations are inherently challenging.
The grass is always greener.
A new tool can offer the promise of better performance and new functionality that will hopefully enable analytics and drive better business outcomes. So what is preventing us from just making those migrations?
When properly done, a good BI environment is going to shield the end user from complexity and inaccuracies of addressing the data directly from the database.
Things like well-defined business logic and terminology presented through the metadata, that's easy to use, well organized, and breaks out different subject areas.
You have a high level of trust in that data.
So, more often than not, end users are not really seeing the whole picture.
More of just the tip of the iceberg as shown here, and what’s actually going on behind the scenes is much more difficult and complex.
We’ve got cryptic data source systems, data warehouses and tables, multiple transformations in business logic that’s being built out, data models, complex reports, and dashboards, as well on top of all that.
So, there's a lot of hidden complexity that you may or may not be aware of driving the presentation layer of your BI: reporting and dashboarding.
So, while it's complex to hold in your own mind, you can see here the various elements of a Cognos environment and how they add up.
So, this is a pretty common view you might encounter.
We’re mapping your data sources to your FM models to your reports.
Here, we've got multiple data sources in different databases. Those tie into different layers in your FM models: the physical layer, the business layer. We've got packages coming out of that, and then reports, which have their own objects as well.
We’ve got queries, objects that are driven from the queries, and lots of different pages, so it’s a lot of moving pieces, as you can see.
And when you start putting all that together, tying the visual reports to the business logic, it gets really messy very quickly. As you can see through some of these animations, trying to tie which database table maps to which report field gets pretty ugly pretty quickly.
So, the real question is, how can we unravel this Cognos content?
To scope a complete migration, you basically need to figure out: what models do I have? What reports do I have? Then tie that together with usage.
As you can imagine, it's extremely time consuming to go through this report by report manually. You have to go into each one and break it down:
How many pages do I have, What’s on each page? What queries are driving those objects, What are the calculations and fields? Where did those map to? What filters do I have?
There's just a lot of information in a single report, and again, based on the polls, most of you have at least around 500 reports. Some of you are over a thousand, and some are close to 10,000.
To manually go in and try to map all that information out, is going to be extremely difficult and time consuming.
To make matters even worse, there's the My content area, if you've got users who are storing stuff in their My Folders / My content.
It’s very difficult to access those.
You have to navigate through the namespace (Active Directory, LDAP), go into individual users' folders, and copy reports out somewhere to edit them.
It’s just a real pain and difficult to work with.
Then, on top of that, there's the Cognos audit data; hopefully you have that turned on, because it does help a lot. But even that is an incomplete picture. It's an on-demand database: if I've got 100 reports and only 20 of them have ever been run,
those other 80 reports aren't going to show up in that audit data, because they've never been touched and no entry has ever been written to the audit database.
So I can't tell which reports have never been run just from the audit database, because they're simply missing from it.
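To make that gap concrete, here's a minimal sketch (not the actual tool) of how never-run reports can be inferred: take the full report list from the content store inventory and subtract everything the audit log has seen. The paths and names here are made up for illustration.

```python
# Sketch: the audit database only records executions, so "never run"
# reports must be inferred by set difference against the full inventory.
# Report paths below are illustrative, not a real Cognos schema.

def never_run(inventory, audit_rows):
    """inventory: all report paths from the content store.
    audit_rows: report paths that appear in the audit log (with repeats)."""
    executed = set(audit_rows)
    return [r for r in inventory if r not in executed]

inventory = ["Team Content/Sales/Q1", "Team Content/Sales/Q2",
             "Team Content/HR/Headcount"]
audit = ["Team Content/Sales/Q1", "Team Content/Sales/Q1"]
print(never_run(inventory, audit))
# -> ['Team Content/Sales/Q2', 'Team Content/HR/Headcount']
```

The same idea scales to a SQL left join between an inventory table and the audit table, keeping rows where the audit side is null.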
So, again, there’s a lot of extra work and extra process and time invested in trying to unravel that Cognos content.
Which, again, leads to these questions, you know, What do I need to clean up? How many reports do I have in total, you know, both within the team content and the My content? Do I need all of them?
Are any of these duplicates or very similar? Is "Sales Report" the same as "Sales Report" with someone's name and a date appended, or is it different? How can I clean these up? And what does each report contain: different queries, calculations, joins, prompts? There are lots of different things each report can contain.
Also, what are the data sources? Do you know exactly what each report is referencing, down to the tables? If you've got multiple databases, how can you easily figure out which reports map to which databases, vendors, tables, and views?
So, it’s just an overwhelming process, and very difficult to kind of wrap your head around if you’re just getting started.
So, let's talk about the process. Using a proven methodology, we can greatly reduce the risk, cost, time, and effort.
So we have four major steps here. We've got the assessment stage, which we always recommend starts with an inventory.
You need to index all your reports, packages, jobs, schedules, etc.
You need to capture the lineage of all the objects and tie it back to the database tables and fields.
We also need to capture the usage, including the reports with none.
Once you’ve got that, you can kind of begin to build out your roadmap.
Do we want to target specific business units? If we're doing a database migration, say moving from Oracle to Teradata, let's identify all the reports that are hitting those Oracle databases; can you figure out which ones those are?
Do we want to focus on usage? These are the top 10 most run reports, we’re going to move these 10 first.
Or is it by users? Let's address the sales team or the executive team, target whatever reports they're running, and build and move those over.
So there are lots of different ways to target and filter down that inventory once you've built it.
Once you’ve got that list, again, you want to go ahead and optimize.
Can we get rid of some of these reports? What can we delete? What can be left behind?
Do I have a lot of duplicates? How similar are these other reports that have different naming conventions but look like the exact same report?
And just leverage a lot of common models, subject areas.
If I want to move over a bunch of reports, do I have to create 20 different models, or are they all leveraging the same sort of tables, so I can just build one master model to address all those reports?
And then, finally, put it all together, execute the plan.
Build out your targets, apply the optimizations, leverage that inventory to get the specs (the models, tables, relationships, all the logic), and then finally recreate those old reports in the new tool.
So, it’s a lot of work, it’s a long process.
And, as I mentioned, we want to start with an inventory.
The better upfront effort to assess and do this, the more success we’re going to have.
So, obviously, we want to create an inventory.
So how do we do that?
We've got two options. The manual option, which I've done in the past, is again very time consuming.
For those of you who have fewer than 100 or 500 reports, it might be manageable, though still time consuming.
But if you're looking at thousands or tens of thousands, it's going to be extremely difficult to touch every single Cognos report and map that information back to the packages, the models, and the database tables. You're going into each report and package, then also trying to leverage the audit data and sync it up: report names, how often they're run, how many times each was run.
Then you bring it all together in some sort of Excel workbook, or maybe build it into your own database, something like that.
The other option is to automate it, and that's what we're going to talk about next. We've got a tool we want to show you called the Migration Assistant.
You can run this tool and it's going to capture all that information from your Cognos content store database.
It then lets you run some predefined reports that give you visibility into all aspects of your content.
What it does is programmatically decode an inventory of your Cognos reports and packages and build that into a new database: a new set of tables where all this information gets stored.
It's going to track all the details, including the data sources, the lineage, the relationships, usage, all of that.
All that stuff that can be difficult to kind of track down outside of a report and put it all together.
This is going to eliminate the hundreds of manual clicks otherwise required to discover all these data fields, filters, calculations, etc.
How it helps: first, it's going to help you eliminate unused and duplicate content.
You can instantly see all the reports that have never been run.
You'll also have the ability to see a confidence level of how similar certain reports are.
Like I've mentioned, I've often seen people have 25 versions of the sales report. They're almost always exactly the same, except someone has gone in and hard-coded a filter or tweaked something very minor; underneath it all, it's the same report.
You only need to migrate it one time, not 25 times, so the tool is going to help you identify those. It's also going to help you plan and prioritize the high-priority items.
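As an illustration of that similarity idea, here's a minimal sketch that scores two report specs by Jaccard similarity over their XML tokens. This is an invented stand-in for demonstration purposes, not the Migration Assistant's actual algorithm, and the tiny specs below are made up.

```python
# Sketch: a crude "similarity confidence" between two report specs,
# computed as Jaccard similarity over word tokens in the XML.
import re

def similarity(spec_a: str, spec_b: str) -> float:
    tok = lambda s: set(re.findall(r"\w+", s.lower()))
    a, b = tok(spec_a), tok(spec_b)
    return len(a & b) / len(a | b) if a | b else 1.0

base = "<report><query>sales by region</query></report>"
tweaked = "<report><query>sales by region</query><filter>2021</filter></report>"
print(round(similarity(base, tweaked), 2))  # high score: likely a duplicate
```

A real implementation would compare structural elements (queries, data items, filters) rather than raw tokens, but the principle of flagging near-identical pairs above a threshold is the same.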
And you can target multiple ways, as I mentioned, usage by different user groups, different subject areas, tables. We can kind of group and organize that inventory in different ways.
It helps you simplify and consolidate similar items and, most importantly, automate the manual work, eliminating a lot of the errors that occur when you're doing this by hand.
You know, if you're copying and pasting long strings from calculations in Framework Manager or in reports,
It’s very easy, you know, especially when you’re heads down, multiple hours a day, you know, to, miss something or copy and paste something incorrectly.
The tool is going to systematically grab that information for you, with a very low chance of the errors that just happen with manual work.
And then finally we have a bunch of dashboards and reports that are already prebuilt off this new target database that gets created that allow you to kind of review and get that information out that you need.
We call them recipes. They can show you all the relationships and lineage, and they can give you a report recipe: if you don't know anything about a Cognos report, it's going to tell you everything you need to know. For example, you have a pie chart on page one.
And that pie chart is coming from this query, and it's got product, time, and sales. Here's what those three fields are in the database, and the tables they're in.
Here are the relationships between the tables, and here are the filters and calculations you need.
It’s going to have all the information very easy to consume in a single click.
It also helps reduce what we call the need for unicorns: skill sets that are difficult to find and rare, because you need someone who knows Cognos well enough to figure out all the old information,
and who also knows a new tool like Power BI or Tableau. People who are experts in both of those tools are very rare, in our opinion.
So this is going to reduce the need to find people with both skill sets: your team can focus on just the target tool, and they don't have to know much about the Cognos reports.
It’s all going to be presented to them in the migration assistant.
Another nice thing we've seen working with customers on this is compression factors of 90 to 95% of content. Again, those of you who have thousands of reports will probably find that a lot of them are things you don't need, or are very similar, or can be condensed into a single report. It's very eye opening.
Once you see the content in your environment as a whole, across team content and My content, it's very revealing; that view is just difficult to get with the way Cognos is currently built and the cryptic nature of the content store database.
So, that said, I’m going to turn it over to Peter who’s going to give you a live demo of the tool. You can get a little bit more insight as to how it all looks.
And then I’ll wrap things up a little bit and get to your questions.
Thank you, Todd.
Right. So, as Todd mentioned, we run an extract of the data from the content store and the audit database.
Once that information gets processed, one of the summary pages we produce is this complexity analysis, and we immediately get to see metrics about our Cognos infrastructure.
Now, we have the number of reports, how many pages the reports generate, and the number of queries, query items, and filters that exist in all of these reports in our environment. The nice thing is, if you have an environment with multiple content stores, you can easily see the content within each of those content stores. You can look at them holistically or individually.
You can also view the stats of the content that's stored in users' My Folders.
With a simple yes/no click on the checkbox, it will go through and give me a view of just the content in my team content.
Another thing you can look at is other folder structures.
You can potentially create some simple Power BI functions to add other drop-downs, if, say, you have a QA folder or another folder where you've put things that are no longer being executed.
So, you can, you know, modify the output, you know, based upon the information that you have within your content store.
When we take a look at our report complexity groups, we run the reports themselves through complexity scoring, looking at the level of effort it's going to take to recreate each report, especially if you're migrating to another platform.
We can also see how many times each of these reports in the previous slide were executed.
So, you also get some visibility into that, those metrics as well.
In this report complexity group, we organize the reports by their complexity, into low, medium, high, or complex buckets.
And the idea is to be able to give you an idea as to the level of effort it’s going to take to recreate all of this content.
And again, right now, this is just purely looking at the content, as it exists in the team content structure without any other subsequent passes through the data, to try and, you know, figure out what other content might be there. Again, this is just purely at what’s out there.
But we could start to layer in the reports that are there, the reports that have been executed, and the reports that haven't been executed in the last six or nine months, to get a cleaner focus on what content out there is actively being utilized.
The next tab that we've got is our Report Execution.
And again, this is more of a graphical view of the same content that was in the first Report Complexity tab.
The difference here is, you can see, we've also included this report view execution count as part of the metrics.
What this allows us to do: when a report view is executed behind the scenes, the Cognos audit just tracks that report view's execution. Through our process, we make the association back to the original report, so we know that all the different report views that might exist are all surfacing from the same base report.
So it gives you the insight that you don't have to recreate two reports or five reports, just the one base report that those views are built on.
Next, the same thing we did for our reports, we also do for our models.
And this one now allows us to focus on all of our FM packages that we have deployed and the components that were necessary to create each individual FM package.
We also, again, compute a complexity score on the models.
And this one, again, is very similar to the reports: we're looking at how many tables are contained within each of these FM packages, how many joins, how many relationships, how many filters, how many query subjects, trying to analyze the footprint of that model to come up with a complexity score.
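As a rough illustration of that kind of scoring, here's a sketch that turns object counts into a weighted score and a bucket. The weights and thresholds are invented for the example; they are not the Migration Assistant's actual formula.

```python
# Sketch: a weighted complexity score built from FM model object counts.
# Weights and bucket thresholds are hypothetical, for illustration only.

def complexity_score(tables: int, joins: int, filters: int,
                     query_subjects: int) -> int:
    # Joins weighted higher: relationships are usually the hard part
    # to recreate on a new platform.
    return tables * 1 + joins * 2 + filters * 1 + query_subjects * 1

def bucket(score: int) -> str:
    if score < 10:
        return "low"
    if score < 25:
        return "medium"
    if score < 50:
        return "high"
    return "complex"

score = complexity_score(tables=4, joins=3, filters=2, query_subjects=5)
print(score, bucket(score))  # 4 + 6 + 2 + 5 = 17 -> medium
```

The same pattern applies to report-level scoring, swapping in counts of pages, queries, and query items.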
And again, you can also see the number of data sources being utilized by the content, how many reports reference the package, whether it's a query or a report, how many times those reports have been executed,
and how many users of those reports exist for that package. So in this simple example, looking at a sample package, two users have been executing reports 56 times.
So it'll give you some idea of the usage of the packages that exist within your infrastructure.
So, the next piece is this quadrant, which again uses the complexity analysis, but this time looks at how complex the models are and how popular each of them is, based on the number of report executions.
You know, ideally, when converting to a new platform, you want to start off with, you know, any package that has a low complexity, but has a high usage.
And this particular graph allows you to see very easily which packages fit that notion.
And so, if we're doing a project, now we can start with this low-complexity, high-usage content.
That gives us an easier development objective while at the same time delivering high impact to our end-user community as we convert content over.
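The quadrant logic above can be sketched as a simple sort: lowest complexity first, and within equal complexity, highest usage first. The package names and numbers are hypothetical.

```python
# Sketch: ordering packages for migration by the quadrant logic:
# start with low complexity / high usage for quick, high-impact wins.
packages = [
    {"name": "Sales",   "complexity": 12, "executions": 900},
    {"name": "Finance", "complexity": 80, "executions": 1500},
    {"name": "HR",      "complexity": 12, "executions": 40},
]

# Ascending complexity; descending executions breaks ties.
order = sorted(packages, key=lambda p: (p["complexity"], -p["executions"]))
print([p["name"] for p in order])  # ['Sales', 'HR', 'Finance']
```

A real plan would also weigh business priority, but the sort captures the "easy and popular first" idea.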
And then the last piece that's available through the initial summary workbook is really our user summary.
This is going to give you some insight into the user base that you have and how frequently they are utilizing the content that has been deployed within your content store.
Again, you can have multiple content stores and select them, and you have a date range with a simple slider to drag across,
for any given timeline of how far back you want to look to see who's been executing. This is really useful for shrinking your license count, especially if you're moving to another platform and looking to see how many licenses to buy.
Rather than doing a simple swap of "I have 100 Cognos licenses, so I need 100 new licenses,"
this lets you see how many people are actively using it, and potentially save some funds as you move to the new platform.
The next workbook we'll talk about gets down into more details for each of the migrations. The first one looked at reports and packages independently, with some execution counts, but now we start to get into the details. In this particular overview, we can simply select any given package, and it's going to tell us how many reports are out there, the different data sources that exist, and the tables the information is coming from.
And what type of data source is being used,
whether it's SQL Server, Oracle, ODBC, whatever type of connection, as well as, within the FM package itself, what namespace the information is being pulled from, and the query subjects and tables being used as part of this particular package.
When we shift over to the Reports tab, you know, we start to get a deeper dive into each of the reports that exist.
And so in this particular one, I can see, for just one particular report, we've got 11 queries out there and 11 pages of data.
We have 33 query items, some filters.
And the nice part about this is this is all interactive.
I can simply click on a particular container that’s out there.
And it's going to tell me what pages it's located on, what queries are needed to produce the output, and what query items are being utilized as part of that output. This gives you that recipe, as Todd was saying, for recreating the report: you have all of the metadata that's coming from the package, and where to find it within the package.
Any calculations being done on the report itself come up in a clear, concise view, as well as any filters being applied to any of the given queries.
And, again, the tables that are being utilized as part of this particular reporting output.
And the next tab, again, is still focused on executions.
But now, it’s the intersection that we can see for each particular package, we can see the number of reports that are being executed.
When were they last executed, how many executions and how many users are using them?
And this allows us to start that refinement of what our true content is that needs to be converted.
For reports that haven't been executed, you can see the content that's there and how many fall into that particular category.
And again, using the date range, you can navigate to six months ago, nine months ago, a year ago, to see what is actively being executed.
Then, the final piece that we have, as part of the base, is really dealing with the relationships that exist within the actual package itself.
And being able to take a look at the individual database tables that are part of FM model, being able to see and click in on any given table, and being able to see all of the relationships that exist.
What tables does it join to, and what tables join from this one particular table?
So it gives you that insight to be able to recreate an FM package without physically having to have that package up and go through it line by line, and trace it back and forth.
This gives you a much cleaner approach to getting information about your package, the tables that are being used, and the relationships that exist.
And, with that, I’ll turn it back over to you, Todd.
OK, thanks, Peter. Let me grab the screen here.
OK, so, what’s next?
If you are interested in what you saw today, you can find more information about the Migration Assistant on our website, including the sample dashboards you just saw, some video demos, and some case studies.
We’d also be happy to speak with you about your unique situation. If you have any questions, please reach out.
Also, additional resources from Senturus. We’ve got hundreds of free resources on our website, in our knowledge center.
We’ve been committed to sharing our BI expertise for over a decade now.
Just go to senturus.com/resources.
We’ve also got upcoming webinars.
Thursday, October 27th, we've got Data Prep with Power BI versus Cognos and Tableau, and on November 3rd, we have Agile Analytics for Cloud Cost Management.
So make sure you register for those if you’re interested.
A little background on Senturus: we concentrate on BI modernizations and migrations across the entire BI stack.
We provide a full spectrum of BI services, training in Power BI, Cognos, Tableau, Python, and Azure, and proprietary software to accelerate bimodal BI and migrations.
We particularly shine in hybrid BI environments.
We’ve been focused exclusively on business analytics for over 20 years.
Our team is large enough to meet all of your business analytic needs, yet small enough to provide personal attention.
We are also hiring.
If you're interested in joining us, we're looking to fill the following positions: Senior Microsoft BI Consultant and Managing Consultant.
Please e-mail us at [email protected] if you’re interested and check out the job descriptions on our website.
On the URL posted there.
And then finally, some Q and A, if you have any questions, go ahead and post them into the Q&A section of the GoToWebinar panel and we can go ahead and take a look at this.
As I said before, if we don’t get to your question today, we can always post those, in addition to the Deck, which will be on our website.
I don't see any questions in the panel, so a quiet crowd today.
Here's a question: we're using PowerPlay; will the inventory work for PowerPlay?
That is a good question. I don’t know offhand. I don’t know if anyone on the line knows for sure.
If not, I will find out and post a response to that question just for future reference.
I know that tool is still around, but being deprecated at some point. But let me check and see, I know we do get other legacy content, like Query Studio in there.
So, let me see if that’s also captured, but we haven’t, I don’t personally use it in our sandbox environment, but I will find out.
Anyone else have any questions?
Just one quick point of clarification on that PowerPlay question, for Tina. I'm not sure offhand if the tool pulls PowerPlay Studio reports, but it definitely doesn't handle the older standalone content. If you're using the old PowerPlay fat client, I would say the answer is no. PowerPlay Studio, maybe, and as Todd said, we'll have to check that for you.
Yeah, and that kind of feeds into a question that just came in: are you using the content store database to build these metrics? The answer is yes. If the data is in the content store database, there's a good chance we can get it out.
And if you've ever looked at those tables, essentially every single report, and the FM model itself, is stored as XML.
So basically we just kind of decode that.
So if you have it and it’s saved in your Cognos content store database, it’s going to be in there somewhere.
It might just take a little bit different logic, or the XML may be structured differently, if it's outside of the standard reports and models that we typically work with.
But, I will find out one way or the other.
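To make that decoding idea concrete, here's a minimal Python sketch of pulling inventory-relevant fields out of a report's XML specification. The sample spec below is a heavily simplified, hypothetical stand-in; real Cognos report specs are much larger, namespaced XML documents, and the element names here are illustrative assumptions, not the actual schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified report specification. Real Cognos
# report specs are namespaced and far larger; these element names are
# illustrative assumptions only.
SAMPLE_SPEC = """
<report>
  <modelPath>/content/package[@name='Sales']/model</modelPath>
  <queries>
    <query name="Q1"/>
    <query name="Q2"/>
  </queries>
</report>
"""

def extract_metadata(spec_xml: str) -> dict:
    """Decode a report spec into a few inventory-relevant fields."""
    root = ET.fromstring(spec_xml)
    return {
        # which package/model the report points at -- useful for
        # spotting content impacted by a data source change
        "model_path": root.findtext("modelPath", default=""),
        # query names give a rough sense of report complexity
        "queries": [q.get("name") for q in root.iter("query")],
    }

print(extract_metadata(SAMPLE_SPEC))
```

Run against every stored report, a pass like this is what turns thousands of opaque XML blobs into a queryable inventory table.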
Does this work for IBM Cloud? Unfortunately, no, because IBM Cognos Cloud is completely locked down. You don't have access to the server or the database; they don't give you access to that.
Scott, any additional insight on that?
With IBM Cloud, there are a couple of ways to work around it. There is some export functionality where we can export and do an import. It does require a little bit of customization on our end, but we can get some or most of the same metadata.
Another question: does this tool need to be on the Cognos server, or can it run on any desktop? It doesn't have to be on the Cognos server; it can be anywhere.
It basically just needs to run somewhere that can set up a local SQL Server instance to write to, read from your content store database, and run some scripts. So it's nothing too substantial: as long as it can read from your Cognos content store through some kind of database connection string, and write to a new SQL Server database, those are really the only main requirements. If you need any more specs, just shoot us a note offline, but it's a pretty low-key setup as far as what you need to do to get this loaded.
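That read-from-source, write-to-inventory flow can be sketched in a few lines. This is a stand-in illustration only: sqlite3 substitutes for SQL Server so the example is self-contained, and the table and column names are assumptions, not the real content store schema.

```python
import sqlite3

def build_inventory(content_store, inventory):
    """Copy basic object metadata from a read-only source connection
    into a separate inventory database (the source is never modified)."""
    rows = content_store.execute(
        "SELECT name, objclass FROM objects").fetchall()
    inventory.execute(
        "CREATE TABLE IF NOT EXISTS inventory (name TEXT, objclass TEXT)")
    inventory.executemany("INSERT INTO inventory VALUES (?, ?)", rows)
    inventory.commit()
    return len(rows)

# Demo with in-memory databases; in practice the source connection
# string would point read-only at your Cognos content store, and the
# destination would be the local SQL Server inventory database.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE objects (name TEXT, objclass TEXT)")
src.executemany("INSERT INTO objects VALUES (?, ?)",
                [("Sales Summary", "report"), ("GO Sales", "package")])
dst = sqlite3.connect(":memory:")
print(build_inventory(src, dst))  # prints 2 (objects catalogued)
```

The key design point the speakers emphasize holds here too: the source connection is only ever read; all writes go to a separate inventory database.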
One addition to that: especially for the assessment and the summary reports, we can run those in our cloud as well, using a backup of the content store database.
So we don't actually need to put the Migration Assistant in the client environment in order to generate the reports. And when the report comes back to the client, it comes back in the form of a Power BI Desktop package, so you don't need an extra license for that, and all the data is contained in the report package itself. So the assessment output is fairly portable on the way back.
And maybe this is a good time to call out that when we engage with clients, we have a number of different ways to engage.
But one of the ways that we engage is by doing a 1 to 3 week assessment.
And during that assessment, the first part of it is to ingest the metadata.
And then we analyze it. Peter is one of those folks who does a lot of analyzing the data, once it’s in the reports themselves, and then we present back to our clients what we found, and then deliver the Power BI package back to them.
So, you do have the option for portability for basically us to generate the inventory for you and then deliver it back to you.
Another question here about what happens when a Cognos upgrade is done: do you have to upgrade the tool? The answer is no. An upgrade typically updates the content store and some of the objects in there.
You can go ahead and just rerun the inventory process.
It'll repopulate the data and you can get fresher results, but there's nothing in the tool itself that needs to be upgraded. It's more a matter of just refreshing your content.
Question: in the case of duplicate packages and objects, does this tool have the ability to delete a duplicate? This tool isn't going to make any changes to your content; it is strictly informational. We're not touching anything in that Cognos content store database. It just gives you the information to decide what you want to keep, delete, or leave behind on a migration, etc. We definitely do not touch anything; in fact, the tool gets a read-only connection to your content store database, just to get information out.
We then create and populate our own tables, which drive the workbook that Peter was demoing a little while ago.
One of the other things I'll add in here is that in very large organizations that have thousands or tens of thousands of reports, we may employ other automation tools in order to manage content.
So, if you want to break up the work into the discovery and design of your project, whether it’s a cleanup project or a migration project, you first need to run your inventory, collect all the information about all the reports and packages, then you design what it is that you intend to do.
Then you get to choose: are you going to do that by hand, do it in small phases, or build some custom automation to save yourself the time of clicking thousands and thousands of buttons? There are also tools on the market that will help you manage your Cognos content in bulk. We find the design phase to be the most critical. In the second phase you really have options, and it's largely a function of how many changes you need to make, how long it would take a person to do them by hand, and whether it's cost effective to buy a small content automation tool that will make those changes automatically. So you get options at that second stage for automation as well.
We’ve done projects both ways.
One more question.
Would this product work for Tableau analysis, or is it Cognos specific? Right now, it is Cognos specific.
We are looking into other ways to possibly build upon this.
But today, it's just the Cognos content itself that gets extracted and presented to you in the workbooks that we demoed a little while ago.
I noticed Arjun asked some questions about licensing costs. There are a lot of options there; I put a link to my calendar in the chat.
Feel free to reach out to me if you'd like to talk about the different options; we're pretty flexible in how we engage with this. You could certainly license it from us directly, but we think the real value comes from collaborating with us on the use of the product. So we have an assessment that we can run, and those assessments start at about $10,995. That's our base price for the assessment, and with that, it gives you all the things that I described earlier: basically all the reports plus more, a database to query, and a platform to build custom reports into.
That’s the, kind of the starting place, and that content will be yours at the end of that assessment, as well as our analysis and recommendations.
All right, I don't see any other questions coming in. I will leave it open for just a couple more minutes.
If you do have a question, go ahead and put it in there, and we can follow up offline. I wanted to thank everyone for joining us today.