Preparing for a BI Migration

You wouldn’t renovate your home without a well-thought-out plan. Good planning up front saves time and money; changes are easier to make in the drafting stage than once construction has started. The same principles hold true for a BI migration. However, organizations often get stuck in the scoping stage, overwhelmed by the vastness and complexity of the project.

In this on-demand webinar, we take you through our proven methodology for gaining the clarity needed to accurately scope and execute an analytics migration. This time-tested process ensures migrations come in on time, on budget and at significantly lowered risk.

You will come away understanding what it takes to answer these questions:

  • How much will it cost?
  • How long will it take?
  • What resources will I need?
  • What gets moved over? What doesn’t? What changes?

Whether you are changing back-end data sources, upgrading, changing platforms, or migrating ERP/source systems, you can proceed with confidence, understanding what it takes to fully prepare for your project.


Michael Weinhauer
Senturus, Inc.

Michael has been designing, delivering and selling analytics solutions for over 28 years. His team created the Senturus Analytics Connector, which lets Tableau and Power BI use Cognos as a data source. Prior to Senturus, he gained a wealth of hands-on, practical BI and big data experience at Oracle, IBM and SAP.

Machine transcript

Welcome to the latest installment of the Senturus Knowledge series.

Today we’re going to be discussing the topic of how to prepare for a BI migration.

As usual, before we get into the presentation itself, we invite you to enter questions via the GoToWebinar control panel, per the graphic that you see on the screen. You can minimize or restore it using the orange arrow.

We do have all the microphones muted out of courtesy to our speakers, but we do encourage you to ask those questions.

We will answer the questions at the end of the webinar. Some of our staff may answer your question inline, so if you ask a question and we don’t answer it right away, we’re generally able to respond to questions during the webinar.

But if we’re unable to cover it in the Q and A section, we will provide a written response document that we will post on the Senturus website.

Which always brings us to the question of: can I get a copy of the presentation?

And the answer is an unequivocal yes. Absolutely.

It is available on our website under the Resources tab, in the newly renamed Knowledge Center (formerly the Resource Center).

Alternatively, if you go to the chat in the GoToWebinar control panel, you can click on the link there, and it will take you directly to it.

And that’s where we’ll also post a recording of the webinar and the aforementioned question log, should that be necessary.

While you are there, you’ll see our newly designed website. We’ve made it much more friendly, accessible, and modern, so go ahead and check that out and bookmark it.

So, today’s agenda, after a quick introduction, we’ll set the stage with the migration challenge, and then we’ll get into the core of the presentation around how to scope a BI migration.

We’ll discuss a few common types of migrations that we see.

And then we’ll talk about the first three steps in the Senturus methodology: assess, roadmap, and optimize.

And then we’ll do a brief Senturus overview, point you to some great free additional resources, and make sure you stick around at the end for the always informative Q and A. Now, I’ll warn you upfront that this is a dry presentation, and I apologize for that; process is just not exciting. And frankly, you don’t want an exciting process. It’s kind of like when you go to the doctor: you don’t want an exciting procedure.

You want it to go quickly, and painlessly and have a good outcome.

Since it is rather boring, I reserve the right to pepper in some jokes since I’m not boring. In fact, my friends call me the mushroom because I’m such a fun guy.

Nobody’s live on the microphone. So, anyway. We offer you a full refund on the price of this webinar if you’re not fully satisfied.

So, your comedian, MC, and presenter today is myself, Mike Weinhauer. I’m a director here at Senturus, among my numerous roles.

I’ve been in BI for over 25 years selling, demonstrating, marketing, and implementing various BI solutions: Cognos, Tableau, Power BI, the whole spectrum. And I also emcee and deliver some of these webinars, as you can see.

So, before we get into the core of the presentation, we usually have a poll, and today it’s no different.

So, the question for you today is, What are you planning to migrate?

Are you looking to migrate a data source in your organization, meaning a source of truth like an ERP system, a transactional system, CRM, et cetera?

Are you looking to migrate a data store, meaning a place where you land a source, or sources, via ETL, like a data lake or an ODS, something that you might use for analytical purposes? Are you migrating an analytics platform?

And that means, are you moving from Cognos to Power BI or whatever? Even a major upgrade can often be a migration. I’m stealing a little bit of my own thunder here. Or is it something else, or do you not have a migration plan?

So go ahead and get your vote in; we’ve got about 50% of you that have voted here.

Let’s get those percentages up a little higher.

OK, I’m going to go ahead and close this out and share it back with y’all.

Hopefully you can see this. About 15% are looking at a data source, so that would be an ERP or CRM type of thing.

More than half doing the analytics platform.

And then a quarter don’t have a migration plan, which is interesting, because you’re on a migration webinar, OK, cool. Thank you very much for sharing your thoughts there. Always good to know.

OK, so, the migration challenge. Setting the stage here: migrations are inherently challenging.

They’re big and complex, and they’re risky.

Gartner in 2011 showed that a whopping 80% of BI projects fail.

And as of 2018, it had dropped to around 60%, but that’s still terrible, right? I mean, if I were going into a project knowing that I had better than 50% odds of it failing, I’d be terrified. And BI adoption rates still hover around a fairly abysmal 25%.

So migrations are one of the costliest, riskiest projects an organization can undertake.

Firms lack the skill sets to successfully accomplish migrations. The skills are unique to migrations and not core competencies for most organizations.

And the skills are needed across all the tools, applications, and environments.

And then they require a lot of time and bandwidth. They’re very resource intensive.

They can be very expensive, and they’re risky.

There’s a high business impact of downtime: the old system that the business relies on can’t be turned off until the new system is up and running.

So you see a lot of parallel execution, and, again, there’s the risk of failure, or just going over time and budget, or low adoption.

So, big and scary, right?

Well, we’ll try to make that a little less big, scary, and risky for you.

So, when properly done, a good BI system shields the end users from the underlying complexity and inaccuracy of accessing raw data directly, right? Cryptic source systems, big data warehouses, complex ETL transformations, etc.

And a lot of effort goes into that, and the best systems make that look easy. But those of you who are here, in the trenches, certainly know that the bad news is the complexity is definitely there.

And it’s even worse, right?

When you look at these environments, there are tens to hundreds of models.

Tons of packages, whether those are Cognos packages or ETL packages. Dozens or hundreds of data sources. And in some cases thousands, tens of thousands, in a few cases I dare say millions, of reports.

There are multi-layered, duplicated, and overlapping reports and metadata. So, over time, just like your garage, and especially mine, you kind of collect a lot of things, it gets hard to find things, and it’s not real fun to go in.

A lot of times there are models that have grown over the years that nobody really understands today; that’s particularly germane to the Cognos scenario.

A lot of complex data movement and enhancement creates blind spots in the data, right? Because you’re only pulling select data over (the rest is otherwise known as dark data), it creates latency and potential inaccuracies or distortions, based on how data is presented and how business logic gets implemented in those ETL processes.

Even when standards are in play, like SQL, tools infrequently implement them exactly the same way, which makes migrations tricky.

And years of use have resulted in bloated, complex, and messy environments that are slow and costly and that frustrate end users. These problems are often precipitated as well by merger and acquisition activity, which further complicates all of that, right? You might have multiple BI solutions and more data sources.

And all those previously mentioned issues get compounded by that.

Then, there are platform and data source migrations where, for example, you’re moving to the cloud. So then you have a whole new environment that comes into play, in addition, again, to the aforementioned complexities.

And then there’s always that temptation, and we see people doing it all too often: let’s just take the old system, whatever that is, the database, the platform; we’re just going to pick it up and move it over to the new system, like for like, right?

And, of course, that’s almost always a bad idea, right?

When you move homes, do you pack everything up and take it with you? Well, of course not. Hopefully you don’t; otherwise your moving truck looks like this poor truck over here that’s about to tip over.

The failure to go through the upfront scoping and assessment process leads to bloated migrations, suboptimal accessibility and performance, unhappy users, and low adoption.

Not to mention, oftentimes you’re not taking advantage of the new features; the whole reason you’re moving to a new platform is maybe to enable self-service or some of the visualization capabilities.

Yes, you can simply lift a virtual machine to the cloud successfully, and that might work, but it doesn’t really solve many underlying problems of bloat, lack of self-service, etc.

In fact, one of Senturus’s biggest projects, with a major utility that was migrating from Cognos to Business Objects using our methodology, was a follow-on to a failed lift and shift.

And even that was suboptimal, as the client had very junior resources on the project.

And they opted for this lift and shift from the old system to the new, which created complexity, some of which I’ll elaborate on.

So, some of the other challenges: to scope and complete your migration project, you first have to unravel all the content, the data sources, the models, the reports, and identify the usage, the frequency, and all that sort of stuff.

And that’s very time consuming.

There are a lot of inaccuracies that can be introduced. Content may be difficult or impossible to access; think about personal folders or My Folders in Cognos. That stuff’s really hard to get at, and can be very bloated.

With a ton of redundancy. And there is no consolidated view of this information about the system, much less at a granular enough level for you to properly assess what it’s going to take and, ultimately, to resource, schedule, and budget correctly.

And what usually ends up happening is you’ve got a bunch of people diving into the existing system, tearing it apart, putting the stuff in intermediary Excel files, and then someone else takes that and has to map it into the new tool, which may or may not work the same way. Or, likely, doesn’t act the same way.

Then, finally, audit data may be limited or very limited in its scope or its grain.

So, it leaves you with this, sort of, where do I even start, what is that process?

And how much will this cost?

You have no idea: is it going to be $20,000 or $2 million? And how do I quantify the benefits, right? What are the real benefits? What are the hard-dollar benefits?

What are the softer benefits around customer satisfaction, user adoption, that sort of thing?

And then how do you ascertain whether the benefits outweigh the costs?

What are the resources I need: hardware, software, people, training, administration?

And, you know, how big is this project: is it two months or two years, right? There’s this very wide spectrum with a ton of uncertainty.

What components get redesigned, moved, or just archived or deleted?

What are the risks? Ensuring business continuity, since BI is mission critical; failure and downtime really aren’t an option. What are your contingencies there?

How do you handle things like coexistence and to what extent?

Do you have your business goals defined?

How do I enable self-service and ensure high levels of adoption of the new system?

Again, BI adoption rates hover around 25%. And, of course, there are things like governance and data security.

So, it leaves you like this poor frazzled woman here, who sees all her weekends for the foreseeable future disappearing and won’t see her kids for the next few years.

It’s stressful. Right?

So, hopefully, I’m not driving your anxiety up with all of this. But trust me, we have some good answers for you that will hopefully leave you walking away with lower blood pressure. Ultimately, you have to ascertain,

First of all, what is in scope, right?

Here’s a visual of kind of the entire BI process, from requirements gathering to solution architecture to data model design. You could break any of these down further; this could be your source and your store with the ETL in between, and then there’s modeling that goes on top of that.

The design of the content, the development of that, user acceptance testing and whatnot; governance and security; and then, ultimately, rollout, which involves training and all that sort of stuff. So, how much of this are you tackling, based upon your business goals and your use case?

Right? You’re always going to have requirements, hopefully, if you’re doing this right. You may have more or less of the other pieces, depending on what type of BI migration you’re doing.

And this is really important, because another example I have is interesting: working with this large utility that was going from Cognos to Business Objects after a failed lift and shift.

This is from one of our consultants: there were numerous instances they identified where enhancements to the ETL would have greatly aided the report development.

The client wouldn’t give us the option to do that; the ETL was really off limits. And that database was designed for Business Objects, and things behave differently in Cognos.

It’s a different architecture, and it made the reports have to do more processing than was necessary, which impacted performance.

So when you’re trying to scope things correctly, you need to understand that if you’re changing platforms, that might necessitate changes upstream in terms of how you structure that data.

So, a methodology is really critical.

A proven methodology ideally can really help you reduce risk, cost, time, and effort.

And all the steps are important.

Again, you see a lot of people who look at all the upfront work here and just throw their hands up and go, all right, we don’t have the time or the money to do this stuff; let’s just go do it and figure it out along the way.

And we’re telling you that that’s a recipe for potential disaster: big problems, cost overruns, over time and over budget, black swans, all the doom and gloom. The better the upfront effort to follow this roadmap, to assess, create a roadmap, and optimize, the better your execution will be and the more success you’ll have.

As I like to say, or rather as Frank Lloyd Wright famously said, it’s easier to make a change in pencil on the drawing board than with a wrecking bar at the work site. That’s sort of a paraphrasing.

And, as promised, another joke: never trust somebody using graph paper, because they’re probably plotting something.

Certain aspects do benefit from automation, right?

It can really help and that’s something we’ve learned over the years.

And we’ve developed some tools, and I’ll talk to those later. But some of those things around the assessments are, like I said, really time consuming and manual.

And automation can really help to reduce that time, reduce the risk, and give you a much better handle on that environment before you jump in and do it.

You can make the scoping decisions based on data: real, comprehensive, accurate, granular data that’s presented to you in a unified format. And I’ll talk a little bit more about that later.

And we’re going to talk about the three common scenarios that we see at Senturus that we’ve helped with.

The first one is a data source change. Now, again, the source is usually a transactional system: ERP, CRM, other transactional data sources, generally systems of record.

You might be moving from one underlying database to another, migrating from on-prem to cloud, or just doing a major upgrade.

And, of course, that can have ETL considerations as you push that data downstream, right? There’s your operational reporting versus your analytical reporting. A lot of operational reporting might be built into that transactional system, or run directly against that transactional database.

That will drive everything downstream from it. There’s still the need to inventory the reports and the models that rely on the legacy data source.

There’s a need to combine that inventory, then, with usage statistics.

You need to capture the tables and columns in use, kind of a lineage that flows from those reports down to that data source, and then understand the dependencies and/or capability gaps between those data stores, right? So you want to be able to do function compatibility checking.
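The inventory-plus-usage approach described here could be sketched roughly like this. The report names, fields, and thresholds are illustrative assumptions, not output from any particular audit tool:

```python
# Hypothetical sketch: join a content inventory with usage statistics,
# then split reports into migrate vs. archive buckets by recency/frequency.
from datetime import date

inventory = [
    {"report": "Weekly Sales", "tables": ["sales.orders"],
     "last_run": date(2021, 5, 3), "runs_12mo": 240},
    {"report": "Legacy Margin", "tables": ["finance.gl"],
     "last_run": date(2019, 1, 15), "runs_12mo": 0},
]

def prioritize(reports, today=date(2021, 6, 1), recency_days=180):
    """Bucket reports: recently and actually used -> migrate, else archive."""
    migrate, archive = [], []
    for r in reports:
        recent = (today - r["last_run"]).days <= recency_days
        (migrate if recent and r["runs_12mo"] > 0 else archive).append(r["report"])
    return migrate, archive

migrate, archive = prioritize(inventory)
print(migrate)   # ['Weekly Sales']
print(archive)   # ['Legacy Margin']
```

In a real assessment the inventory would come from the platform’s content store and audit logs rather than a hand-built list, but the prioritization logic is the same.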

And that helps you to then minimize the project scope by focusing on high-priority objects and eliminating the unused objects. I have two examples that Senturus was involved in. The first: a major oil and gas company.

They are in the Bay Area and did an upgrade of their ERP system and were implementing SAP HANA.

And this necessitated a downstream redesign of the DW they had built a couple of years earlier. We were tasked with merging the new and the old data to allow the client to preserve and analyze history.

Alright, so, you can see where changes in the underlying schema require changes to the ETL and to what lands in the data warehouse, which they had already built reporting on, to minimize the disruption downstream and the downtime for those users.

And then we’ve worked with an apparel manufacturer, also here in the Bay Area, where they migrated their planning system from Cognos Enterprise Planning to Vanguard.

And that used SAP HANA.

So they had to integrate again into an existing warehouse, so as to preserve existing models and reporting downstream. So, two examples of where they changed that back-end system,

and that had all these ripple effects.

But the whole idea was to let them report on history and to preclude having to redo all of the semantic modeling and the reporting downstream that the company relied on.

So they managed their cost and risk, and all that sort of stuff.

Data store changes are the second one that we see here. That’s where you’re talking, again, kind of moving left to right in that process diagram.

These are your analytical stores.

And that’s where you’re really looking at the analytic reports.

And again, here you have to inventory the reports and models that rely on that store, combine the inventory with usage statistics, and capture the lineage, the tables and columns used.

Do some of that function compatibility checking, and potentially make ETL changes.

You’re going to prioritize high-priority objects and eliminate unused ones. There are a lot of parallels with that first scenario.

And we actually did this at a major financial institution, a major bank, where they migrated from Teradata to Google Big Query.

And so they were moving from on-prem to cloud, which has its own considerations.

And there were SQL function differences between those two data sources, fairly significant ones, and they had over 20,000 reports and 400 packages.

We used one of our tools that allowed us to search for all those SQL functions that we knew were incompatible.

And we could go into the reports and metadata and fix those without having to hunt them down individually.

And in terms of rationalizing those reports, we looked at recency: we took the ones that had been run in the last six months and just focused on those, and archived or got rid of the rest of them. So it was faster, it was more streamlined, and the end result was a system that was a lot more optimized.
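That kind of function scan could look something like this sketch. The function list and the sample SQL are illustrative; a real Teradata-to-BigQuery effort would use a much longer mapping than these two entries:

```python
# Hypothetical sketch: scan report SQL for functions known to be
# incompatible with the target platform's SQL dialect.
import re

# Teradata functions with different BigQuery equivalents (assumed examples)
INCOMPATIBLE = {
    "ZEROIFNULL": "IFNULL(x, 0)",
    "ADD_MONTHS": "DATE_ADD(date, INTERVAL n MONTH)",
}

def scan_report_sql(sql):
    """Return the set of incompatible functions referenced by one report's SQL."""
    found = set()
    for fn in INCOMPATIBLE:
        if re.search(rf"\b{fn}\s*\(", sql, re.IGNORECASE):
            found.add(fn)
    return found

sql = "SELECT ZEROIFNULL(amount), ADD_MONTHS(order_date, 3) FROM orders"
print(sorted(scan_report_sql(sql)))  # ['ADD_MONTHS', 'ZEROIFNULL']
```

Running a scan like this across all 20,000 report definitions is what turns "hunt them down individually" into a batch job.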

Now, the third one is the most common one: as you saw in the poll, more than 50% of the folks said they were doing platform migrations.

We define a platform migration fairly broadly in that it could be from one tool to another.

Maybe you’re going from Cognos to Power BI, or from one tool to another, Business Objects to Cognos, something like that. Or you might be migrating just a portion of that tool.

We see a lot of that. I pick on Cognos a lot here because we see people say, oh, I’m going to get off the system entirely.

And then they realize, wow, I have invoices and billing statements and all these pixel-perfect reports that I send out, and I can’t do that in Tableau.

Or, I have to use Power BI paginated reports or SSRS to really accomplish that. So maybe I leave the stuff that’s running over there in Cognos; it bursts and it works, so I’m going to leave that alone.

But these new areas I’m going to do in Power BI or Tableau or Qlik or whatever it is that you’re going to use. So you might just do a portion of it, right, or a department or a subject area.

You might even be migrating between major versions, even on the same platform. If it’s a big upgrade, it’s effectively a migration, especially if you want to take advantage of what’s new. You’ve either got things that are deprecated in the old tool, products that cease to exist, or you want to take advantage of improved capabilities, performance, functionality, and features in the new platform.

Then there are a million things in between, right? You might do multiples of those; you might do all three of those use cases. It’s kind of a Venn diagram, and you’ll have overlapping stuff.

Here you need to identify the source data and inventory the reports and the design elements in those reports. Here it starts to get a little more complicated.

Because if you’re going from Cognos to Cognos, Report Studio to Report Studio, you can be fairly assured that your reports will work between versions, or there will be nominal changes.

But if you’re going into a different tool, that’s a whole other ball of wax; you need to understand all those pieces, right? What are my chart types?

What are the items in the reports, and how are you going to rebuild them?

And then, of course, you need to identify the most-used reports, as well as which ones aren’t used and which ones are rarely used, that sort of recency and frequency, right?

Do they use the report a lot, but haven’t used it in the last six months? What decisions do they make off of it? Do they just dump it out to Excel? Or is it something that, even though they run it once a month, is really important?

So there are a lot of factors to consider, but it all comes back to getting a good inventory: understanding what’s there, who’s using it, and who’s not using it. And then we do a feature mapping, a capability mapping, right?

And so, a good example of that is Framework Manager versus Tableau and Power BI models.

Those of you who know Framework Manager, know it’s old and pretty ugly, but it’s very robust and sophisticated. It was designed to handle enterprise models.

Tableau is a beautiful tool, but it’s not built to handle enterprise models; it’s more used to pulling in spreadsheets.

Only in the last year or so did they introduce one layer of abstraction; before that, everything was just flattened out into one big table.

Power BI models allow for a little more sophistication, and there’s a lot of power in there, but they’re structured differently. So you have to figure all that out: what do I have in this tool versus the new tool?

Sometimes there’s more capability in the new tool.

Sometimes there’s stuff you can’t do, like parameter maps or macros in Framework Manager; you might not be able to do that, or you have to do it differently.

So it’s important to understand that, on the metadata, the database, the reporting side, the whole stack.

And another great example is pixel-perfect reports versus dashboards. Can you even do that in the new tool?

How do you handle ad hoc queries?

And on the Cognos side, maybe your ad hoc reports are in Query Studio, and Query Studio is deprecated. Then, if you’re going to move to Cognos 11.1 or 11.2,

You probably want to take advantage of explorations and dashboards there. And make those things more visual. So, you have to understand those capabilities. It’s a really important step.

The last step is optimizing: identify rarely used and unused reports, rationalize licenses, shrink models, and enable self-service.

The result of that is happier users, lower cost, and increased adoption.

A couple of examples of that. It’s interesting: we worked with a regional bank, kind of a mid-size California bank, where they were doing a major Cognos version upgrade.

There was allegedly a migration utility in there, but that tool failed; it could only handle about 400 of their 4,000 reports, about 10%.

So they brought Senturus on, and we rationalized those reports by 75%.

So we took that 4,000 down to 1,000, and we often see compression of 5 to 1 or more.

In most cases I would argue you get up to 10 or more.

There’s so much redundancy. And then we leveraged the modern capabilities offered by the upgraded solution to allow multiple different static reports to be replaced by one dynamic report, and you’ll see a lot of that here.

It really pays to go through that exercise, for so many reasons. Another large automotive parts distributor also leveraged Senturus to streamline and optimize its Cognos platform.

OK, so we’ve talked about those three cases now; let’s get into how we make this real and how it translates into practical steps.

So, I’m going to go through these step by step.

And then I’ll go into details on each one, and we’ll talk about the specific questions you want to ask.

So before we go into that, I have another joke for you: after an unsuccessful harvest, why did the farmer decide to try a career in music?

Because he had a ton of sick beets.

I’m killing myself over here. All right, moving on: the assessment phase. This is a big one; this is arguably the biggest in terms of execution.

Because there are a lot of steps, it’s really important, and it can be very resource intensive.

But it’s such a critical step in terms of understanding what you have: content, users, environment.

What are the capabilities? And you get a rough-order cost and a rough-order project duration.

Alright, so drilling into content: it’s, again, what do I have, what do I use, what isn’t used, and what is redundant?

For anybody looking at this, it’s not rocket science.

In terms of what gets used and what gets used the least, there’s that recency and frequency. And there’s another important component to understand: the complexity of those reports.

Say I use the 80/20 rule and I’ve identified my most-used reports that have been used recently.

But, then, how complex are those reports?

Are they simple list reports, or are they these 10-page briefing books with custom SQL? You can’t get at that very easily.

That’s where that spreadsheet comes into play.

Even bucketing those into, sort of, simple, average, and complex helps.

And then the databases: what are the dependencies, and how does that flow upstream and downstream through your systems?

And this will drive prioritization and inform the optimize stage where you move, redesign, and or archive or delete existing content.
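The simple/average/complex bucketing could be sketched like this. The scoring weights and thresholds are invented for illustration; a real assessment would tune them against the actual content:

```python
# Hypothetical sketch: score report complexity from a few attributes and
# bucket into simple / average / complex for estimation purposes.
def complexity_bucket(report):
    """Score a report by page count, query count, and custom SQL usage."""
    score = report["pages"] + 2 * report["queries"] + (5 if report["custom_sql"] else 0)
    if score <= 4:
        return "simple"
    if score <= 10:
        return "average"
    return "complex"

simple_list = {"pages": 1, "queries": 1, "custom_sql": False}
briefing_book = {"pages": 10, "queries": 6, "custom_sql": True}
print(complexity_bucket(simple_list))    # simple
print(complexity_bucket(briefing_book))  # complex
```

Multiplying a per-bucket effort estimate by the bucket counts is what turns the inventory into a rough-order cost.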

So we’re working with another very large software company out here on the West Coast where they literally have millions of reports, many of which are in My Folders for different clients and are just permutations of those reports. When they originally estimated this migration, they came up with something like 150 person-years.

That was literally not doable. I’ll talk a little more about it later, but we got that down to something they could actually do in someone’s lifetime.

The major utility I talked about had thousands of reports, and literally 50% of them, they said, were duplicates. So talk about compression, right, and getting rid of bloat and stuff like that.

It’s really important to understand how big that breadbox is: for the users, who uses the system at all?

How much do they use it, and who doesn’t use it? That comes down to licensing, right?

You want to optimize for that, and that recency frequency duration thing comes into play.

If they don’t use it very often, maybe you can push something out to them and take them off the system. You’re not paying a lot for licensing; you streamline the licensing and reduce your cost. Then the environment, right? You have to look at that. You’re probably on-prem: are you going to move to the cloud, or are you going to have a hybrid, depending on GDPR or whatever reason, some combination of those things?

Look at your current capacity and sizing. What do you have in terms of hardware or cloud capacity? What do you have in terms of high availability and disaster recovery, if anything?

So, understanding that is really important to understanding the environment.

The capability map, as I talked about, is really, really important: that current versus target system.

The data source, in terms of the data types and functions. The type of storage, maybe, right? Row store versus columnar store.

Things like that. The modeling capabilities; again, I talked about the big differences that exist between the tools.

The reporting and visualization capabilities. So, again, moving left to right down that BI pipeline, what are the upstream and downstream differences in capabilities in the new tool, including distribution and consumption? Mobile is kind of table stakes nowadays. Bursting: Cognos does bursting; a lot of tools don’t. How are you going to handle that? That’s a huge automation thing that Cognos is known for and does really well.

What are you going to do in the new tool around that? And then, of course, the application of security and governance around who can access the data.

You know, the people who can access the data can, people who shouldn’t can’t.

And another client example here is a state government agency.

The client did the scoping for this migration from Cognos to Domo, and the problem was that they did the scoping themselves and brought us in later. And, you know, we’ll do that, right? We’ll take a look at it and do it on a time-and-materials basis, generally.

But the problem was that many reports required significant rework and design changes in order to fit into Domo’s architecture. It’s a very different architecture.

You have to put the data into the cloud in Domo’s system and then build the content on top of that, and everything is really very different. So understanding both those tools, understanding what’s different — that really helps you understand who you’re going to use to do this, right?

I need a Cognos person and a Domo person, or someone who understands both, as we had in our case. And then, what’s it going to cost, and how long is it going to take? If you just say, "Oh yeah, we’ll just drop it into Domo," you’re probably going to get the wrong person.

You’re going to under-resource it, under-price it, and shortchange the time that you need to actually do the job, right?

Which comes to the cost portion.

Here, again — this is the first stage — you’re trying to get a rough estimate, a very rough estimate. That complexity scoring I talked about can help make this a lot more accurate, and that capability map will really help make it more accurate too.
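As a rough illustration of that kind of complexity scoring, here is a sketch that weights a few report metrics and buckets each report into an hour estimate, which then rolls up into a ballpark cost. The metrics, weights, buckets, and hourly rate are invented for the example; they are not Senturus's actual scoring model.

```python
# Illustrative complexity scoring for migration estimates. The weights,
# metrics, and hour buckets are made-up examples of the approach.

WEIGHTS = {"pages": 1, "queries": 2, "custom_sql": 5, "conditional_formats": 3}
BUCKETS = [(10, "simple", 2), (25, "medium", 8)]  # (max score, label, est. hours)

def score_report(metrics):
    """metrics: dict of counts pulled from a report-inventory audit."""
    score = sum(WEIGHTS.get(k, 0) * v for k, v in metrics.items())
    for max_score, label, hours in BUCKETS:
        if score <= max_score:
            return label, hours
    return "complex", 24

def rough_estimate(reports, rate_per_hour=150):
    """Sum per-report hour buckets into a ballpark migration cost."""
    hours = sum(score_report(m)[1] for m in reports.values())
    return hours, hours * rate_per_hour
```

The point of the buckets is exactly the "go/no-go" granularity discussed below: you are not pricing each report precisely, you are establishing whether the whole effort is closer to $20,000 or $2 million.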

And another example is that automotive parts distributor we’re working with. They use Cognos as their system, and they had a group that had built a Power BI dashboard.

And the IT group brought us in to say, Hey, we have Cognos data modules and dashboards.

How close can you get to this Power BI dashboard, so we can try to pull that group back into the fold?

And so we helped them assess that.

It was really that go/no-go decision, right? Do you let them keep doing this? Do you pull them back in? That was an example of that.

But again, that cost is meant to be rough.

It’s kind of a go/no-go again, saying: we know it’s not $2,000, we know it’s not $20,000, and we know it’s not $2 million.

We know it’s going to be about $100,000.

And do the benefits outweigh the costs, right?

So that’s where that comes from.

Alright, so, Stage two. Another joke to wake you up.

So my wife asked me the other day where I got so much candy.

I said, well, I always keep a few Twix up my sleeve.

OK, so the jokes are all dad jokes, yes. So then there’s the project duration, right? We want to get to the approximate duration of the project, again, for budgetary and rough scheduling purposes. When are we going to do this?

How long is it going to take? How big is that breadbox — two months or two years? I have to laugh at my own jokes; nobody else does.

And this timing might be imposed by other factors, right?

You might not have the luxury of doing it when you want to. It might be forced by license renewal or obsolescence.

For example, with that government agency we worked with, they didn’t want to renew the licenses, so we had to kind of rush it.

All right, So moving on now to the roadmap.

And I’ll give you another joke here, since I promised that.

Here’s where I was going to do it: I really hate my job. All I do is crush cans.

It’s soda pressing.

Alright. So the roadmap phase consists of four components: prioritization, resourcing, phasing and timing, and then specific costs. So now you’ve gotten your arms around the content and the environment and all those things we just talked about, right?

That’s a big lift.

It’s a big deal, but if you do this right, the next few steps here — around the roadmap and the optimization — are a lot easier.

They’re clearer, easier to zero in on, more specific. So here, in prioritization, you’re going to prioritize that highly used, business-critical content, right?

And that’s, again, having that recency and frequency information from your audits, from your assessment, that says who’s using what.

You can prioritize the things that are most important. Business critical means the things people rely on to make good decisions.
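A tiny sketch of that prioritization idea: rank reports so that business-critical content always migrates first, with usage frequency from the audit breaking ties within each group. The field names and the 90-day usage window are assumptions for the example.

```python
# Illustrative prioritization: combine a business-criticality flag (from
# stakeholder review) with usage frequency (from the audit). The field
# names are assumptions, not a real inventory schema.

def prioritize(reports):
    """reports: list of dicts with 'name', 'runs_90d', 'critical' keys."""
    def priority(r):
        # Critical content always outranks non-critical; within each
        # group, more frequently run reports migrate first.
        return (r["critical"], r["runs_90d"])
    return sorted(reports, key=priority, reverse=True)
```

Even a scheme this crude makes the sequencing conversation concrete: the quarterly board pack outranks a heavily run but non-critical ad hoc report.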

And then you can get into how you’re going to resource this.

And this is a big deal, right? Because there are technical resources, business resources, and then what I’ll call environment resources — hardware and software.

From the technical perspective, you’ve got architects, system admins, modelers, DBAs, report authors.

And that’s going to vary depending on the type of migration and the tools you’re using, and all that sort of stuff. The better you can zero in on that, the better you can get the right people for the job.

It’s critical that you have the right business people involved. This whole project, ideally, should be driven by the business, with business requirements.

And executive sponsorship, to drive a culture of self-service, too, and to prioritize the project.

But you’re also going to have end users, other stakeholders, and maybe key partners and strategic vendors, and stuff like that. Training is another resource.

The trainers need to understand the capabilities of both tools and be able to map from the old terminology to the new. Cognos has specific terminology; Power BI and Tableau use different terminology.

An example of that is a major utility that we worked with.

We worked with the users to help them rebuild their top 2 or 3 reports, and then we would cut them loose. We further encouraged them to look for existing reports and tweak those, instead of what they had been doing, which was just creating them from scratch — you can imagine, a huge waste of time.

Or, better yet, just use the report in the public folders area that’s available to everybody, instead of building your own — maybe use a prompt, or a filter, or export it, or something.

So, training is really important.

And then the environment: you’ve done the assessment of what you have, but where are you going with it? Are you going to stay on prem, go to the cloud, or go hybrid?

Then, when you look at your capacity and sizing, is it optimal? Are you taking advantage of things like elastic sizing in the cloud — for example, during peak versus non-peak periods?

Are you taking advantage of hot versus cold data storage? Hot data storage: you want it to be fast, but it’s expensive.

Cold data storage is not as fast, but it’s a lot cheaper, and you can handle a lot more data.
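One way to picture the hot/cold tradeoff is a simple tiering rule based on how recently a dataset was touched. The 30-day cutoff and the per-GB monthly prices below are placeholder assumptions for the sketch, not real cloud pricing.

```python
# Illustrative hot-vs-cold tiering rule: datasets touched recently stay on
# fast, expensive storage; the rest drop to cheap cold storage. The cutoff
# and per-GB monthly prices are placeholder assumptions.

HOT_COST_PER_GB = 0.15
COLD_COST_PER_GB = 0.01

def assign_tiers(datasets, hot_days=30):
    """datasets: list of (name, size_gb, days_since_access) tuples."""
    plan, monthly_cost = {}, 0.0
    for name, size_gb, idle_days in datasets:
        tier = "hot" if idle_days <= hot_days else "cold"
        plan[name] = tier
        monthly_cost += size_gb * (HOT_COST_PER_GB if tier == "hot" else COLD_COST_PER_GB)
    return plan, round(monthly_cost, 2)
```

With prices this lopsided, pushing a large, rarely touched archive to cold storage is usually where most of the savings come from.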

And there are more modern ways to store data and access it.

Columnar stores, in-memory databases, and things like that really help you with that sort of stuff.

It used to be that you would size things by buying a box, or a bunch of boxes. You’d say, well, planning and FP&A have to close the books at the end of each quarter, so they need a box that can handle that maximum load. So they’d have this big box.

And then, 99% of the time, the thing sits idle. If you’re on the cloud, you don’t have to do that.

You can scale up the resources when you need them and scale down when you don’t, save yourself money, and really harness the advantages of the cloud — factoring in growth and performance needs and, of course, the aforementioned high availability and disaster recovery.

Scott, did you have something to say about that?

I have a lot of these discussions with folks who are getting ready to do a large migration.

And sometimes, it seems that all these details start to get a bit overwhelming.

And it starts to seem impossible.

There are so many metaphors you can use. One of the metaphors I like to use is that it’s really important what order you do things in.

So imagine you have a thousand-piece jigsaw puzzle.

First you have to open the box, and then you take a piece and lay it down on the table.

Then you can take another piece and look at that. Well, if those two pieces fit together, great. But if they don’t, what do you do with that second piece?

Do you set it in the corner? Do you organize them by color? By shape?

Do you dump the box out? The whole point of dealing with all this complexity is that there’s an inherent process that most people who’ve ever done a jigsaw puzzle know: you sort the pieces before you start. What’s the most common way to approach a jigsaw puzzle?

You find the edge pieces first, and you build the edges.

And a similar thing emerges when we talk about migrations, which is to try to understand the things that are alike and the things that are different, the things that are complex and the things that are simple, the things that can be combined and the things that can be archived.

And so there’s a big process around being able to see all of the information inside the migration task itself, and then being able to sort it.

If you take anything away from this webinar and these types of conversations, one of the things you really want to keep in mind, as you’re looking at resourcing and the prioritization roadmap steps and all the other detailed steps that we have, is this:

How are you going to organize them? What’s the process? And what kind of tools and folks are you going to use to guide yourself through it?

So planning up front, and having a process, is a really key piece. I just wanted to add that in here.

That’s great. Thanks a lot, Scott, I appreciate it.

But this morning, Siri said, "Don’t call me Shirley," because I had accidentally left my phone in airplane mode.

I think you’ve got to be a certain age to appreciate that one.

So, another thing about resourcing that’s germane to migrations is that the resources are highly specialized — things we call unicorns — and oftentimes they’re critical during the migration.

You literally don’t need them post-migration, but during it, they’re very important.

We’re very fond of construction metaphors around here — almost as much as we’re fond of bad jokes — and this one is the moving of a house. Like this image: it just bends my mind that they took a big stone house like this and literally moved it to another site.

You need to have a very clear set of steps and you have to have a plan.

If you just call a bunch of buddies over with a U-Haul truck your house is going to break.

This requires jacking up the house off the foundation, detaching it, removing the electrical, removing the plumbing, getting all the permits, clearing the streets, and then specialized equipment like this truck that exists solely for moving a house of this size.

And so somebody had to go and invest in that and they’re obviously going to move a bunch of houses and buildings using that truck.

But as a homeowner, you’re not going to go build that truck, and you can’t move the house without it.

But once you’re done with the move, what do you do with the truck? Keep it in your backyard? It’s useless at that point. You don’t need it. It’s expensive and it’s just overhead.

So the point is your resources are highly specialized for migration. You need those people that understand the source and the target tool.

So you have to ask yourself, do you use those resources internally or do you partner with somebody?

And what we see is that organizations tend to grossly underestimate the amount of time needed, and assume employees can help on top of their regular jobs — right on top of their day job.

What we learned was that getting users’ time was the biggest hurdle. It would be weeks in between meetings, and they didn’t want, or didn’t have time, to learn the new tools. So think about that.

That’s really important if you’re moving to a new tool and you think your staff is going to help you do it on top of their regular jobs, and that they’re going to learn this new tool and adopt it.

They just don’t have the time, or they don’t want to. Think about the damage that can do, and how that lends itself to turnover, frustration, stress, lack of adoption.

All sorts of things. So really understand the resourcing — it’s not just something you can have your internal people do on the side.

This is really important.

It also underscores the importance of training, executive involvement, and a top-down mandate of support for BI migrations, right?

The culture in an organization comes from the top, so if it’s just a technical move, you might be in a little bit of trouble. Then there’s the phasing and the timing. Are you going to do the whole thing at once?

Or are you going to break it down into chunks? You’ll have a better idea of this from a correct assessment. And if it’s phased, how are you going to phase it? Probably by priority, drawn from that assessment stage.

But it might be something else. And here you’re going to get a lot more precise, maybe even with a project plan and timeline — you’ll have that Gantt chart that everybody ultimately ends up with.

Then you can really get down into your specific costs, broken down by resource and phase. And if you’ve derived those from a comprehensive analysis of the system data and a good assessment, you can have a lot more confidence in those numbers.

So then we get to the last stage, the optimize phase.

There are three steps to optimizing, and this should be the fun part. This is the spring cleaning part, where you really get to think about how you’re going to leverage that cool, shiny new technology. There’s eliminate, combine, and streamline, and these are fairly self-explanatory.

When you’re eliminating, you’re getting rid of — or archiving — those unused and underused reports, the ones that are infrequently run or redundant. We see compression ratios of 5 to 1, 10 to 1, maybe even more; 50% of those reports can be redundant.
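A minimal sketch of how duplicate detection like that can work: fingerprint each report by its normalized structure and group reports with identical fingerprints, so only one copy of each group migrates. A real inventory tool would fingerprint queries, filters, and layout too; this version only looks at an assumed source-plus-columns spec.

```python
# Illustrative duplicate detection: fingerprint each report by its
# normalized structure (data source + sorted column list here) so exact
# structural duplicates group together. The spec format is an assumption.

def find_duplicates(reports):
    """reports: dict of name -> {'source': str, 'columns': list[str]}."""
    groups = {}
    for name, spec in reports.items():
        key = (spec["source"].lower(),
               tuple(sorted(c.lower() for c in spec["columns"])))
        groups.setdefault(key, []).append(name)
    # Keep one report per group; the rest are candidates for elimination.
    return [sorted(names) for names in groups.values() if len(names) > 1]
```

Normalizing case and column order is what catches the "someone saved their own copy" duplicates that inflate report counts.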

You can really shrink that footprint down and make the system a lot nicer. It’s like cleaning up the garage: have a garage sale, drop a bunch of stuff off at Goodwill, and suddenly you can actually walk from the front to the back and find things. It’s amazing.

Alright, next is combining.

This is a fun one too: combining those static, old reports and turning them into dynamic reports that can be prompted or visual, leveraging the capabilities of the new tool.

This is a little trickier because it requires refactoring and redesign, which necessitates reviewing business requirements and understanding the capabilities.

Then finally, designing that metadata and content for self-service. This is where you’re going to streamline that.

For self-service, you might trim down that bloated model to cut out all those pieces that are unused or underused. You may compartmentalize them better,

so they’re more consumable, and people can explore and ask multiple business questions of the data.

That’s a really important thing. Whenever you’re designing a dashboard, you shouldn’t just be able to say there’s a red light blinking over here — you should be able to see what’s driving it, down at the granular level of the data. Who in finance doesn’t want to say, "I’ve got a negative profit over here; I want to drill down and look at the GL lines to find out why that’s happening"?

They want to go from that all the way down to the detail, to find out where, when and why.

So, those are all the steps.

The questions you might have are going to vary, depending on how much of that BI process you’re tackling, and which use case or use cases you’re tackling.

It’s worth coming back to automation, which can really help, and it shouldn’t surprise any of you that we’ve designed tools for exactly that.

First and foremost is the Migration Assistant. We actually have a separate webinar on that, so I’m not going to go into it too much, but it marries an inventory and an audit of the Cognos system, and collects very granular report and model data.

It does those complexity scores that help you bucket those reports and really do that assessment and prioritization.

And it gets rid of that intermediate Excel step to save time, effort, and money — it really helps you do that step instead of skipping it.

And then the Senturus Analytics Connector is relevant when you talk about coexistence and minimizing risk.

You’re in that in-between state, where business intelligence is mission critical and you’re going to keep it going. Our Analytics Connector lets you migrate on a timeline that’s best for your business — or, if you want to embrace bimodal BI, leverage Power BI or Tableau while accessing your Cognos metadata.

And we have that Migration Assistant webinar, in which we dive into those details.

But with that automotive parts distributor, they were able to automate the collection of detailed information, including things like dimensionally modeled relational (DMR) models.

And that’s really going to help with their migration.

And that major bank was able to automate the collection of metadata, target those specific functional differences, streamline their reporting, really get their heads around it, come in on time and on budget, and be happy with the project.

And then, finally, it’s worth sort of mentioning, of course, we’ve focused on this top circle here.

But as usual, process is really, really important. We’ve hopefully driven that point home here today, and given you some tools and questions you can ask. But the people are really important.

As I mentioned, that means the business sponsors and getting the right resources. And then there’s the technology.

People tend to buy the technology, then throw people at it and hope for the best.

Alright, so, the nexus of those three is where you’ll find success, and the business requirements should drive the technology and the implementation.

So, in summary, dress for success, right? Migrations are big, complex, risky, mission critical, scary.

So leverage a proven methodology.

The better the upfront effort to assess, create that roadmap, and optimize the migration, the greater your chances for success.

And tools, like what we just talked about, can really help better assess what you have, and therefore what to prioritize, and how to streamline and optimize that downstream.

So a proven methodology, the right tools, and the right partner can reduce risk, cost, and time, and ensure your migration comes in on time and on budget, with higher user adoption rates, self-service, and secure, governed data.

If you have more questions, or want some more dad jokes, Scott has a few up his sleeve.

You can follow up with Scott Felten in a 15-minute meeting; you can click that link there.

This deck is up on our website. Speaking of which, you can find hundreds of free resources on our snazzy, newly redesigned website.

We’ve been committed to sharing our BI expertise for well over a decade now.

And that’s a great place to go and look for fantastic BI information.

Speaking of fantastic migration information, we have those migration case studies that I talked about, and you can drill into them in more detail using the links here, where we talk about the three major clients that have done migrations with us.

We have two interesting upcoming webinars. The first is Tips for Installing Cognos Analytics, covering version 11 to the latest release.

That’s coming up in a few weeks at the usual day and time, and you can register for it on our website under the Events page. Then we have a really interesting new format, where Patrick Power, one of our trainers and consultants, will present.

We’re doing a YouTube live stream, so you’ll get to see Patrick live. It’s called Why Bother With Data Architecture?

So, a couple of quick slides here about Senturus before we get to the Q&A. So stick around — and I’ll give you another dad joke before I get there.

The difference between a numerator and denominator is a short line.

Only a fraction of you will understand this.

We concentrate our expertise on business intelligence with a depth of knowledge across the entire BI stack. We focus solely on BI.

And our clients know us for providing clarity from the chaos of complex business requirements, disparate data sources, and constantly moving targets.

We made a name for ourselves because of our strength at bridging the gap between IT and business users.

And we deliver solutions that give you access to reliable, analysis-ready data across the organization, so you can quickly and easily get answers at the point of impact — in the form of the decisions you make and the actions you take.

Our consultants are leading experts.

So a lot of what you heard — the case studies I was talking about — involved our consultants, who really understand multiple of these tools and have done this.

They understand the upstream and downstream, the whole process, and they’ve done it in the real world. They’re leading experts in the field of analytics, with years of pragmatic, real-world experience,

and experience advancing the state of the art.

And we’re so confident in the Senturus team and our methodology that we back our projects with a 100% money-back guarantee that is unique in the industry.

We offer training in the three major platforms that we cover: Power BI, IBM Cognos, and Tableau.

We offer all the different modalities you might like: tailored group sessions, in-person, small-group mentoring, instructor-led online, and self-paced e-learning. We can mix and match those to meet your needs.

And most of our instructors know multiple of these tools — if not all three — plus others that aren’t even listed here.

So they are those unicorns you can bring in for migration projects, who can really help reduce the friction and the risk.

We’ve been doing this for a while now — over two decades — focused exclusively on business intelligence, and we work across the spectrum from the Fortune 500 down to the mid-market. You probably recognize most, if not all, of these logos.

And we’ve solved business problems across just about every industry and a lot of functional areas, including the Office of Finance, Sales and Marketing, Manufacturing, Operations, HR, and IT.

Our team is large enough to meet all of your business analytics needs, yet small enough to provide personalized attention.

If you’re interested in joining the Senturus team, we’re hiring for several different positions; you can see those listed here.

You can find the job descriptions on the Senturus website or e-mail your resume to [email protected] if you are interested.

So now we’ll get to the Q&A. But before that, one more joke: I used to run a dating service for chickens, but I was struggling to make hens meet.

And somebody was asking — yes, all the jokes are clean, so you can tell them; I did that purposely. There were some kind of warped ones I had to stay away from.

The question was: will lift and shift work for migrating Cognos from on-premise to Cognos on IBM Cloud? Yes — and IBM Cognos on Cloud is the only SaaS version that gives you access to tools like Framework Manager.

Is there any software recommended to distribute shared dashboards from multiple tools in one single place?

There’s one tool that I’m aware of that’s designed to do exactly that, and it’s called Theia (THEIA). Oh, there’s also Alation.

That’s a Bay Area company with a nice tool for surfacing data assets across different BI tools.

Son tells his father. I have an imaginary girlfriend.

The father sighs and says: "You know, you could do better."

Son: “Thanks Dad!”

Father: “I was talking to your girlfriend.”.

Alright. With that, I want to thank all of you for taking time out of your busy schedules to listen to this. Hopefully, you learned a lot from it.

And, again, if there’s anything we can help you with, business analytics wise, migration wise, training wise, whatever it is, Scott would love to hear from you. And, thanks for your time today.

Connect with Senturus

Sign up to be notified about our upcoming events

Back to top