Measurement: Tracking intelligent automation progress and results against true business outcomes

Pragmatism: A monthly column by Jerry Wagner

Jerry Wagner
03/21/2019

Measurement, especially as it applies to intelligent automation, is about identifying the true business outcomes we're trying to achieve with automation and tracking our progress and results toward those outcomes.

Harmonizing metrics

When we started looking at measurement, the challenge was getting to the point of comparing apples to apples on those business outcomes. Use cases in risk have different business outcomes than use cases in our credit card business. I initially tried to break it all down to one measure, with our 'save' as the harmonizing metric, but we've realized that we need to compare apples to apples as much as possible. For each individual business use case, we identify a metric that tracks toward the desired business outcome, and we can then track that separately for that particular use case.

Automation prioritization

When we were looking for a metric to align use cases for prioritization, we landed on the RICE framework. I borrowed it from my background in product management and from researching different ways to prioritize features and software delivery. It is a basic prioritization methodology for product management that I read about from Sean McBride of Intercom, a company in San Francisco.

I applied it to what we're looking at in automation to come up with a formulaic way to prioritize. Once we figured out that metric, we could also use it to compare before and after, to see whether we realized the benefits we expected when we first proposed the use case.

Value calculation

RICE is an acronym whose first letter stands for Reach. We use Reach as the measure of the more tangible business value. In our space, that usually lands in either hours given back to the business or cost avoided in dollars, primarily because we can convert between those numbers: if we have hours and a standard cost per unit of work, we can get to dollars, and vice versa. If we have, for example, cost avoidance from insourcing, or things of that nature, we can also add that money into our tangible value calculation.
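
As a minimal sketch of that conversion, something like the following works; the cost-per-hour figure is a hypothetical stand-in for your organization's standard cost per unit of work:

```python
# Sketch of the Reach conversion described above. COST_PER_HOUR is a
# hypothetical standard cost per unit of work; use your own figure.
COST_PER_HOUR = 50.0

def reach_in_dollars(hours_saved: float, cost_avoided: float = 0.0) -> float:
    """Convert hours given back to the business into dollars,
    then add any directly quantified cost avoidance (e.g., insourcing)."""
    return hours_saved * COST_PER_HOUR + cost_avoided

def reach_in_hours(dollars_saved: float) -> float:
    """Go the other way: express a dollar figure as equivalent hours."""
    return dollars_saved / COST_PER_HOUR

print(reach_in_dollars(400))  # 20000.0
```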

I stands for Impact. Impact is where we put all of our intangible benefits, and we think of it as a multiplier. Intangible business outcomes are those things that you can't really put a dollar value or a time value on, but you know they are important. If a use case is focused on risk reduction, but we don't have a good way to quantify that risk reduction, we quantify it as one point. If the use case aligns to our strategic imperatives or has visibility to our senior leadership, that is another point. If it simplifies or reduces the complexity of the process, that is a point.

Those things are hard to quantify, but they're still very important from a prioritization perspective. A more advanced version of this computation would weight those points based on the criteria within your organization. If you value risk reduction more than reducing complexity, you can give two points to risk reduction and half a point to reducing complexity, depending on the values of the organization, as in the sketch below.
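
A minimal sketch of that weighted point system; the criteria names and weights below are illustrative assumptions, not prescribed values:

```python
# Sketch of the Impact multiplier: one point per intangible criterion,
# optionally weighted by how much the organization values each one.
# The criteria names and weights here are illustrative assumptions.
IMPACT_WEIGHTS = {
    "risk_reduction": 2.0,        # valued more highly in this example
    "strategic_alignment": 1.0,
    "complexity_reduction": 0.5,  # valued less highly in this example
}

def impact_score(criteria_met: set[str]) -> float:
    """Sum the weights of the intangible criteria a use case satisfies."""
    return sum(IMPACT_WEIGHTS[c] for c in criteria_met if c in IMPACT_WEIGHTS)

# Example: a use case that reduces risk and aligns to strategy scores 3.0.
print(impact_score({"risk_reduction", "strategic_alignment"}))  # 3.0
```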

That covers R for Reach and I for Impact; those items help define the overall business value, or the benefit to the business. Then C is what we call Confidence: a percentage, up to a hundred percent, that describes how confident we are that we can deliver on the promise of this particular use case. That's where we build in the factors for the complexity of the use case.

If a use case reuses existing automation code, then confidence is relatively high, close to a hundred percent. But if we're introducing new code connecting to one new system, or we have anywhere from one to five structured decisions that have to be made, or we have an error exception rate greater than zero but less than ten percent, then maybe our confidence is eighty percent.

Decision with confidence

As you work your way down, you get more systems, more decisions, and/or higher error rates than the manual process, and your confidence gets lower. At the point where you're tackling more than five systems and you expect exception rates that are significantly high because of the structured decisions that have to be made within the process, your confidence would probably be somewhere between zero and twenty percent. Those use cases would be a lower priority. The rubric sketched below reflects those bands.
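
The endpoint bands here follow the examples in this column (reuse of existing code near 100 percent; one new system, a handful of structured decisions, and a sub-ten-percent exception rate around 80 percent; more than five systems with high exception rates in the 0 to 20 percent range). The middle band and the "significantly high" exception threshold are illustrative guesses:

```python
def confidence(new_systems: int, structured_decisions: int,
               exception_rate: float, reuses_existing_code: bool) -> float:
    """Rough Confidence rubric (0.0-1.0). Endpoint bands follow the column's
    examples; the middle band and 0.30 threshold are assumptions."""
    if reuses_existing_code:
        return 1.0    # reusing proven automation code
    if new_systems <= 1 and structured_decisions <= 5 and exception_rate < 0.10:
        return 0.80   # one new system, a few structured decisions
    if new_systems > 5 or exception_rate >= 0.30:
        return 0.10   # many systems and/or significantly high exception rate
    return 0.50       # everything in between (assumed band)
```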

E is our measure for Effort. We classify it as the number of sprints or months (depending on how you want to normalize this value) required to actually build out the solution. Confidence divided by Effort is the information coming from the automation team; it helps characterize the complexity of the use case.

Transformation strategy

When you put it all together, you have Reach times Impact times Confidence divided by Effort. That gives you a normalized number: your estimated benefit per unit of effort. That's what we use to prioritize our different use cases, working in the tangible impacts like hours and dollars saved as well as the intangible impacts, such as whether it helps reduce risk or contributes to our overall strategic objectives.
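
Putting the four pieces together, a minimal sketch of the score (the function name and example inputs are mine, for illustration):

```python
def rice_score(reach: float, impact: float,
               confidence: float, effort: float) -> float:
    """RICE priority: estimated benefit per unit of effort.
    reach: tangible value (e.g., hours or dollars saved)
    impact: weighted intangible points (multiplier)
    confidence: delivery confidence, 0.0-1.0
    effort: number of sprints (or months) to build the solution
    """
    return reach * impact * confidence / effort

# Example: 400 hours saved, impact 3.0, 80% confidence, 2 sprints of effort.
print(rice_score(400, 3.0, 0.80, 2))  # 480.0
```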

The evolution of this process is figuring out how to reduce friction and make it much easier for people to self-assess. In a traditional program you might have an intake form that asks several questions, and the person filling it out would probably know the answers to about 25 percent of them. They really wouldn't know how to answer many of the others, and it would require a lot of back and forth.

Rule-based

We want the rules to be clear to the people submitting potential business cases for automation assessment: this is the formula we're referencing, and these are the things that contribute to it. We try to explain that in simple, layman's terms. Instead of an intake form, we should walk them through a web-based wizard where they specify, for example, how much time is involved in their current process. Asking these questions in bite-sized chunks lets people perform the self-assessment as they go, and based on their answers, we would route the submission to the appropriate team. The goal is to reduce the friction of the assessment process so that people can assess on their own instead of involving the central team to answer those questions for them.
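
A minimal sketch of that self-assessment flow; the question keys, threshold, and team names are all hypothetical and do not reflect an actual intake system:

```python
# Illustrative self-assessment wizard: bite-sized questions collected one at
# a time, then routed by rule. Questions and routing rules are assumptions.
QUESTIONS = {
    "hours_per_month": "How much time does the current process take each month (hours)?",
    "new_systems": "How many new systems would the automation need to connect to?",
}

def route(answers: dict) -> str:
    """Route a submission to a team based on self-assessed answers."""
    if answers["new_systems"] > 5:
        return "complex-integration team"  # assumed routing rule
    return "standard automation intake"    # assumed routing rule

# In a real wizard, each entry in QUESTIONS would be asked one at a time;
# this dict stands in for the collected responses.
answers = {"hours_per_month": 120, "new_systems": 2}
print(route(answers))  # standard automation intake
```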

Total cost of ownership

We want to make sure that we're tying that to metrics tracked within the business as well as within our automation platforms, and that those metrics are aligned. If we say we expect a certain number of hours saved once the automation is up and running, are we tracking to that savings on a monthly run rate? And if we see any bumps in the road, what is causing them? Is it a spike in exceptions because of a process change that wasn't accounted for, or a technology issue that wasn't accounted for?

The goal is to get to the equivalent of a total cost of ownership calculation, where you can say, “For a certain period of time you had a certain benefit from the automation, and you invested a certain amount of effort to build and maintain it.” Is that total cost of ownership getting better or worse? And if it's getting worse, do we want to invest more time to make it right, or do we feel that the automation is obsolete and we have to come up with a different solution?
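
A minimal sketch of that total-cost-of-ownership check, with illustrative inputs:

```python
def tco_ratio(benefit_per_month: float, months: int,
              build_cost: float, maintenance_per_month: float) -> float:
    """Benefit delivered per dollar of total cost of ownership.
    Tracking this ratio over time shows whether TCO is getting
    better or worse; a ratio trending down (or below 1.0) suggests
    reinvesting in the automation or replacing it."""
    total_benefit = benefit_per_month * months
    total_cost = build_cost + maintenance_per_month * months
    return total_benefit / total_cost

# Example: $5k/month benefit over 12 months vs. $30k build + $1k/month upkeep.
print(round(tco_ratio(5000, 12, 30000, 1000), 2))  # 1.43
```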

