Earlier in the Capture 2.0 series, we discussed how modern capture solutions improve metadata extraction during document processing. Capture 2.0 solutions expand on existing zonal and key/value pair extraction methods by incorporating machine learning algorithms that improve accuracy over time. This post explores how users can interact with location data even after the capture process completes.
In previous posts, we’ve talked about how the system can automatically extract metadata values and use machine learning to enable a feedback loop, so that user corrections to the metadata improve the overall extraction process over time. Here’s what the process looks like at a high level:
Several steps in this process extract metadata from the document:
- Steps 2 and/or 5 – As the document is ingested, the process automatically calls out to a suggestion engine to extract metadata values.
- Step 7 – If the extraction is incorrect or incomplete, the user’s corrections are fed back into the suggestion engine.
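The loop above can be sketched in a few lines of Python. This is a toy illustration, not the product’s actual API: the `SuggestionEngine` class, its method names, and the naive extraction logic are all assumptions made for the example.

```python
# Hypothetical sketch of the extraction/feedback loop described above.
# SuggestionEngine and its methods are illustrative assumptions, not
# the real suggestion engine's interface.

class SuggestionEngine:
    """Toy stand-in for the suggestion engine called during ingestion."""

    def __init__(self):
        # Learned hints: field name -> value the user last corrected it to.
        self.corrections = {}

    def extract(self, document_text, field):
        # Steps 2/5: return a (value, location) suggestion for a field.
        # Prefer any value a user previously corrected for this field.
        if field in self.corrections:
            value = self.corrections[field]
        else:
            value = document_text.split()[0]  # naive first-token guess
        start = document_text.find(value)
        location = (start, start + len(value))  # character offsets
        return value, location

    def learn(self, field, corrected_value):
        # Step 7: fold the user's correction back into the engine.
        self.corrections[field] = corrected_value


engine = SuggestionEngine()
value, loc = engine.extract("Invoice 1234 from ACME", "vendor")
engine.learn("vendor", "ACME")  # user corrects the wrong suggestion
value2, loc2 = engine.extract("Invoice 1234 from ACME", "vendor")
```

Note that the second call benefits from the correction: the engine now suggests the user-supplied value, along with where it appears in the document.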
In either case, when a metadata value is extracted, its location data can be saved as well for later use. Specifically, when viewing the document in OCMS, we can tie metadata values to their locations visually, letting users see exactly where each value was extracted from within the document. The screencam below shows this functionality in action.
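One way to picture the pairing of value and location is a small record that the viewer can turn into a highlight. The field names and box format below (page number plus x/y/width/height) are illustrative assumptions, not OCMS’s actual storage schema.

```python
# Minimal sketch: store an extracted value together with its location
# so a document viewer can highlight it. The schema is an assumption
# for illustration only.

from dataclasses import dataclass


@dataclass
class ExtractedField:
    name: str
    value: str
    page: int
    box: tuple  # (x, y, width, height) on that page


def highlight_region(field, scale=1.0):
    """Translate a stored box into viewer coordinates at a zoom level."""
    x, y, w, h = field.box
    return {
        "page": field.page,
        "rect": (x * scale, y * scale, w * scale, h * scale),
        "label": f"{field.name}: {field.value}",
    }


invoice_no = ExtractedField("invoice_number", "1234", page=1, box=(72, 90, 60, 12))
region = highlight_region(invoice_no, scale=2.0)
```

Because the location travels with the value, the viewer only needs the stored record to draw the highlight; no re-analysis of the document is required.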
As described in this post, location data is worth saving for later user analysis within OCMS. Additionally, we could allow users to update the location data from the properties screen, providing an extra point in the system to feed error corrections back to the suggestion engine after the capture process is complete. Let us know your thoughts below.
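Such a post-capture correction could be packaged as a simple event for the suggestion engine. Everything here is hypothetical: the function, the event fields, and the `"properties_screen"` source tag are assumptions used to make the idea concrete.

```python
# Hedged sketch of a post-capture correction: when a user adjusts a
# value or its location on the properties screen, emit a correction
# event the suggestion engine can learn from. All names are hypothetical.

def build_correction_event(doc_id, field, previous, corrected):
    """Package a user's edit so it can be replayed into the engine."""
    return {
        "document": doc_id,
        "field": field,
        "previous": previous,    # {"value": ..., "box": ...} as captured
        "corrected": corrected,  # the user's updated value/location
        "source": "properties_screen",  # post-capture, not ingestion
    }


event = build_correction_event(
    "doc-42",
    "invoice_date",
    {"value": "2021-01-01", "box": (10, 10, 50, 12)},
    {"value": "2021-01-11", "box": (10, 10, 52, 12)},
)
```

Recording the source of the correction would let the engine distinguish feedback gathered during capture from feedback gathered later in the document’s life.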