TSG is predicting upcoming disruptions to content capture within the ECM industry. We have been working hard this quarter to improve metadata extraction capabilities within the OpenContent Management Suite with machine learning. For this post, we want to discuss and demonstrate the interface that controls capture templates as well as how users interact with the capture process and “teach” the system.
Moving from Capture 1.0 to 2.0
Looking at legacy Capture tools that have been around for a long time, there are two primary approaches a Capture 1.0 tool will leverage to automatically capture data from a document as it's processed:
- Target a Specific Location or Zone – using this approach, the administrator defines a zone on the document to denote where a piece of data resides. For example, the tool could be told to look in a given box in the top right corner of the header to pull the “Report Number” value. This approach only works well when the positional data is known and very consistent across all documents. This was common with many early image scanning and capture vendors.
- Look for a Key/Value Pair – using this approach, instead of defining the zonal position of the data, the tool is told to look for a given key, for example "Invoice Number", and then the tool looks at surrounding text to pull the value – for example, preferring text to the right of or underneath the key. This approach works well when the target data may be anywhere within the document, but runs into problems when the key text is inconsistent. Using our invoice example, some vendors may display Invoice Number as Invoice Num, Invoice Nbr, Invoice #, etc. Existing Capture tools have approaches for minimizing this problem, but it is still an issue for many clients.
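To make the key/value approach concrete, here is a minimal sketch of how a Capture 1.0 tool might scan OCR'd text for a known key and grab the adjacent value. The alias list and function names are illustrative assumptions, not the actual OCMS template format:

```python
import re

# Hypothetical key aliases a Capture 1.0 template might maintain by hand.
KEY_ALIASES = {
    "invoice_number": ["Invoice Number", "Invoice Num", "Invoice Nbr", "Invoice #"],
}

def extract_value(ocr_lines, field):
    """Scan OCR text lines for a known key alias and return the token
    immediately to its right (a common Capture 1.0 heuristic)."""
    for line in ocr_lines:
        for alias in KEY_ALIASES[field]:
            match = re.search(re.escape(alias) + r"[:\s]*(\S+)", line)
            if match:
                return match.group(1)
    return None  # no alias matched -- the user must index manually

ocr_lines = ["ACME Supplies", "Invoice Nbr: 44871", "Amount Due: $1,250.00"]
print(extract_value(ocr_lines, "invoice_number"))  # -> 44871
```

Note that if a new vendor labels the field with anything outside the alias list, `extract_value` returns nothing until an administrator edits the list by hand, which is exactly the maintenance problem described below.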
To date, while the above approaches can be successful, clients have struggled when documents change over time. To use invoices as an example, if a new vendor sends in an invoice that labels the invoice number with an unexpected key, the system will not correctly pick up the value. When the user corrects the system in the indexing screen, the exact same issue will arise for the next invoice from that vendor until an administrator updates the template. While this may not sound like a big deal, some of our clients receive invoices from over 30,000 vendors. This becomes a maintenance nightmare because the templates do not automatically improve over time.
And that’s exactly what Capture 2.0 tools will do – learn over time. When the user corrects the Invoice Number value, the tool should use that correction to get it right the next time.
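A minimal sketch of that feedback loop might look like the following. This is an assumed design for illustration only, not the actual OCMS implementation: when a user corrects a field during indexing, the system remembers the key text that preceded the corrected value for that vendor's fingerprint, so the next document from the same vendor can be suggested automatically.

```python
from collections import defaultdict

# Learned key text per vendor fingerprint and field (illustrative store).
learned_keys = defaultdict(dict)

def record_correction(fingerprint, field, ocr_lines, corrected_value):
    """Locate the user-corrected value in the OCR text and learn the
    key text that appears beside it for this vendor fingerprint."""
    for line in ocr_lines:
        if corrected_value in line:
            key_text = line.split(corrected_value)[0].strip(" :#")
            if key_text:
                learned_keys[fingerprint][field] = key_text
            return key_text
    return None

def suggest(fingerprint, field, ocr_lines):
    """Use a previously learned key to suggest a value for this field."""
    key = learned_keys[fingerprint].get(field)
    if key is None:
        return None  # brand-new vendor -- fall back to manual indexing
    for line in ocr_lines:
        if key in line:
            return line.split(key, 1)[1].strip(" :#")
    return None

lines = ["ACME Supplies", "Inv. Ref. 77-1032", "Total: $980.00"]
record_correction("acme", "invoice_number", lines, "77-1032")
print(suggest("acme", "invoice_number", lines))  # -> 77-1032
```

The point is that the correction itself becomes training data: no administrator ever has to open the template to add "Inv. Ref." as a new alias.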
Capture 2.0 Machine Learning
Previous Capture 2.0 posts on this blog have referred to the following diagram:

The post linked above has more detailed information for all of these steps, but in this post we are going to look at step 1 as well as steps 5-7.
Create the Template
In the first step, we need to create a template to set a baseline of what we would like to capture. For example, for invoices we may say that we want to capture Invoice Number, Amount, Due Date, etc. based on the vendor “fingerprint”. Check out the following video to see how this is done.
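As a rough illustration, a capture template can be thought of as a small data structure listing the fields to capture and the fingerprint they key off of. The field names below come from the example above; the structure itself is an assumption for illustration, not the real OCMS template schema:

```python
# Hypothetical template definition for an invoice document type.
invoice_template = {
    "document_type": "Invoice",
    "fields": [
        {"name": "Invoice Number", "type": "string",   "required": True},
        {"name": "Amount",         "type": "currency", "required": True},
        {"name": "Due Date",       "type": "date",     "required": False},
    ],
    # Suggestions are keyed off a vendor "fingerprint" so each
    # vendor's layout can be learned independently over time.
    "fingerprint_field": "Vendor",
}

required = [f["name"] for f in invoice_template["fields"] if f["required"]]
print(required)  # -> ['Invoice Number', 'Amount']
```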
Index Documents and Teach the System
Once we have a template in place, it's ready to use in the OCMS indexer. The following video shows an example of two vendors: one that has been seen many times in the past and for which the suggestion engine has already been trained, and a second vendor that is brand new.
As you can see, the ability of the OCMS Indexer to “learn” from the user’s interaction with the vendor invoice and improve over time without a template update by the administrator is the key to a Capture 2.0 system.
Let us know your thoughts below: