Capturing The Unicorn
Let’s look at a scenario based on a real use case. An organization must manually process 30,000 pages per day, with an average of 10 data fields per page. Their analysis shows that human accuracy for manually processing these pages is about 95%.
This means there is a 5% error rate, and the automation target (as always) should be at that accuracy level or, ideally, better. Looking at the scenario from three perspectives, the first is a 100% manual operation.
If you work out how much time is needed to process each individual page and to perform manual data entry on a single field, it takes roughly the equivalent of 21 staff per day to do these tasks manually.
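The 21-staff figure can be reproduced with simple arithmetic. The per-field entry time and shift length below are assumptions for illustration, not figures from the case study:

```python
# Back-of-the-envelope staffing estimate for fully manual processing.
# Assumed figures (not from the case study): ~2 seconds of data entry
# per field and an 8-hour working day.
PAGES_PER_DAY = 30_000
FIELDS_PER_PAGE = 10
SECONDS_PER_FIELD = 2        # assumption
WORKDAY_SECONDS = 8 * 3600   # assumption: 8-hour shift

total_fields = PAGES_PER_DAY * FIELDS_PER_PAGE    # 300,000 fields/day
total_seconds = total_fields * SECONDS_PER_FIELD  # 600,000 seconds of work
staff_needed = total_seconds / WORKDAY_SECONDS

print(round(staff_needed, 1))  # 20.8, i.e. roughly 21 staff
```

Any similar combination of per-field time and shift length lands in the same range; the point is that the daily field volume dictates the headcount.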
With traditional capture, we're talking about a lot of investment in analyzing documents, configuring the system, and then testing and tuning. What we get on average is 60% to 80% task automation. We effectively halve the equivalent staff required, but all the data must still be verified. So we're not yet fully optimized in terms of how many tasks are automated, or how many are automated accurately. We also still don't know which data is good and which needs correction, so staff must review the data and make minor corrections.
When we apply machine learning to the tasks of analyzing and curating data and automating the system configuration, we see a significant uplift in automation and efficiency, with increased accuracy. We can move the needle to roughly 90% of tasks automated at 98% to 99% accuracy, with the equivalent of two staff handling just the information that needs review. You can go from a 100% manual process, applying machine learning all along the way, to automation that truly achieves high straight-through processing.
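The two-staff figure follows from the same kind of arithmetic: at 90% automation, only the remaining 10% of fields need human review. The per-field time and shift length are the same illustrative assumptions as for the manual baseline:

```python
# Review workload once ~90% of fields are automated.
# Assumed figures (not from the case study): ~2 s per field, 8-hour day.
PAGES_PER_DAY = 30_000
FIELDS_PER_PAGE = 10
SECONDS_PER_FIELD = 2        # assumption
WORKDAY_SECONDS = 8 * 3600   # assumption
AUTOMATION_RATE = 0.90       # ~90% of fields flow straight through

fields_to_review = PAGES_PER_DAY * FIELDS_PER_PAGE * (1 - AUTOMATION_RATE)
staff_needed = fields_to_review * SECONDS_PER_FIELD / WORKDAY_SECONDS

print(round(staff_needed, 1))  # 2.1, i.e. roughly two staff
```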
We've talked about the power of machine learning, where it fits, and where its promise lies within Intelligent Capture. It can take a process performing at about 50% efficiency on average and move it to 90% efficiency or better. However, there's still the matter of measuring and knowing how your system is performing. There are three important aspects of this.
First, there is measuring overall system accuracy. When you're measuring system accuracy, you're measuring all the output. So let's say you've got 10 fields on each page: you're measuring the number of fields that are output correctly. This allows you to understand how much data entry you can remove, but not how much of that data can flow straight through the system. This leads us to the second measurement, which is measuring the unattended automation rate.
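Measured this way, system accuracy is just the fraction of extracted fields that match a ground-truth set. A minimal sketch, where the field names and values are hypothetical examples:

```python
# Field-level accuracy: fraction of extracted fields matching ground truth.
def field_accuracy(extracted: dict, truth: dict) -> float:
    correct = sum(1 for name, value in truth.items()
                  if extracted.get(name) == value)
    return correct / len(truth)

# Hypothetical example: 3 fields, one extracted incorrectly.
truth     = {"invoice_no": "A-1001", "date": "2024-03-01", "total": "512.40"}
extracted = {"invoice_no": "A-1001", "date": "2024-03-01", "total": "512.48"}

print(field_accuracy(extracted, truth))  # 2 of 3 fields correct, ~0.667
```

In practice you would aggregate this over a representative sample of pages rather than a single document.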
This allows you to look at not only the total output of the system and how much of that output is accurate, but also something called the confidence score. With intelligent capture, you often get a confidence score associated with each automated task. So you've got a confidence score for each field that has automated data entry, or a score associated with each document class assignment. Every automated task has a corresponding confidence score.
To assess the unattended automation rate, you're looking at accuracy and setting a reliable confidence threshold. This threshold is a reliable breakpoint in the confidence scores that allows you to accept all answers above it as accurate. You can measure this to a fine degree. The ultimate goal is to move from an average intelligent capture implementation to something modern: almost completely unattended automation.
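One way to find such a breakpoint is to take a scored, ground-truthed sample and look for the lowest threshold at which the accepted answers meet the target accuracy. The scoring data and target below are hypothetical:

```python
# Find the lowest confidence threshold at which the accepted fields
# (confidence >= threshold) meet a target accuracy.
def pick_threshold(results, target_accuracy):
    """results: list of (confidence, is_correct) pairs, one per field."""
    for t in sorted({conf for conf, _ in results}):
        accepted = [ok for conf, ok in results if conf >= t]
        if accepted and sum(accepted) / len(accepted) >= target_accuracy:
            return t
    return None  # no reliable breakpoint in this sample

# Hypothetical scored sample from a ground-truthed test set.
results = [(0.99, True), (0.97, True), (0.95, True),
           (0.90, False), (0.85, True), (0.80, False)]

print(pick_threshold(results, target_accuracy=0.95))  # 0.95
```

Everything at or above the returned threshold can flow through unattended; everything below it is routed to human review.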
The mechanics of integrating Intelligent Capture tools with RPA or other automation technologies depend on many factors, including the workflow, the scope of the documents, and so on. As discussed, in an automation workflow RPA is typically involved in initiating the process and collecting data from different systems. It might even screen-scrape some data off websites. It then initiates a request for the import of documents; examples include a lending scenario or claims adjudication at the point where documents are requested. The request might be a notification via a mobile app to take a photo of the document, or it might come through a web browser. This is where Intelligent Capture tools are important, so those documents can be uploaded and presented through an API.
All RPA solutions provide some level of API for handing off documents to other systems, and Intelligent Capture software can be configured to meet those systems' requirements. RPA is defined by the process: if you can define the process, you can define the scope of documents; and if you can define the scope of documents, you can define what data needs to be located, extracted, and verified from those documents. Intelligent Capture, by virtue of sitting in the middle of that RPA process, knows what it is likely to deal with and accomplishes the classic automation tasks, such as classifying documents or separating them into individual discrete pages.
IC can also do other types of analysis, like locating specific information and reporting back summaries. All this information is presented back to the RPA system through the API (or file-based mechanisms), so that once the RPA system has the data it needs, it can continue its workflow. Typically it is a simple handoff: the RPA system handles all the other data synthesis and incorporates it into the workflow. It's relatively seamless and allows for much higher straight-through processing.
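The handoff can be sketched as a structured result the capture side returns and the RPA side consumes. The payload shape, field names, and 0.95 review threshold below are illustrative assumptions, not any vendor's actual API:

```python
# Sketch of a capture-to-RPA handoff payload (shape is hypothetical).
import json

def capture_result(doc_id, doc_class, fields):
    """Package classification and extraction results for the RPA system."""
    return json.dumps({
        "document_id": doc_id,
        "class": doc_class,    # result of document classification
        "fields": fields,      # extracted values with confidence scores
    })

payload = capture_result(
    "doc-42", "invoice",
    {"total": {"value": "512.40", "confidence": 0.97}},
)

# On the RPA side: parse the payload and route low-confidence fields
# to human review; everything else flows straight through.
parsed = json.loads(payload)
needs_review = [name for name, f in parsed["fields"].items()
                if f["confidence"] < 0.95]
print(needs_review)  # [] -- all fields above the review threshold
```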