One of the more interesting achievements of the TSG 11 Billion Document benchmark was our ability to quickly load a large repository and test search and retrieval performance as well as concurrent usage. For those clients considering developing a large repository of documents, TSG now has both the experience and additional tools to help conduct a significant volume test at scale in a very short timeframe. This post describes how our tools can be leveraged for large repository testing for DynamoDB, Alfresco, Documentum, Hadoop or other repositories.
Volume Testing – What are the issues?
Volume testing ECM solutions is difficult and often not worth the money and effort spent: a volume test is typically expensive to set up, not representative of actual production usage, and can delay a project. In working with clients on testing, we often identify three types of issues that volume, performance and other types of testing can reveal.
- Type 1 Issue – an issue that the test identifies that would have affected production and can be fixed and resolved before production. This is the type of issue a volume or performance test is designed to catch.
- Type 2 Issue – an issue that the test identifies that would not have affected production, but that the team nevertheless spends time and effort fixing.
- Type 3 Issue – an issue that the test does not identify but that will affect production. The team will have to address this issue quickly in production and will need resources available to limit production issues and perceptions.
No matter how much testing is conducted, Type 3 issues will always exist. For an example, see our experience at one large client with Hazelcast and Alfresco, where the problem was nearly caught as a Type 1 Issue in testing but wasn't fully revealed until users came onto the system in production, making it a Type 3 Issue.
Regardless of the testing, we have found that all three types of issues arise for ECM solutions, as production usage is so difficult to accurately predict and replicate in a testing environment. TSG advises clients to prepare for Type 1, 2 and 3 issues for any large production deployment.
Other issues with volume testing large repositories (hundreds of millions or billions of documents) include:
- Finding Representative Data – Often the large repository will be loaded from a legacy ECM system (FileNet, ImagePlus, …). The actual migration of the data might take considerable time given mapping, volume, retrieval, clean-up and other migration activities. Production data can also be highly confidential and require special security handling that is not consistent with building out a quick performance test.
- Production Environment Availability – Ingesting hundreds of millions or even billions of documents is time consuming and would typically require a large production environment.
- Loading the Documents – Typical migration or bulk ingestion jobs require actually loading the document content. Leveraging ECM APIs can be slow, particularly when moving documents from their current location through an application server to the eventual storage.
Given the above, TSG recommends conducting quick and efficient testing where possible and is looking to provide tools and services to assist clients in this endeavor.
TSG Benchmark Test Harness
With the DynamoDB Benchmark, TSG developed a Test Harness with AWS and the TSG products (OpenMigrate and OpenContent) that allows clients to spin up and load large-volume scenarios on AWS very quickly to volume and performance test their solutions. Components of the test harness include:
- Sample Data – TSG has curated 11 Billion unique addresses that we can use to load representative document models. The test data can be manipulated to populate document fields while keeping the values unique, without exposing production client values.
- Loading of Data – TSG leveraged OpenMigrate to load both DynamoDB and Elasticsearch in AWS. TSG could also leverage these approaches for Hadoop, Alfresco or Documentum.
- Linking of Documents – For all clients, the performance test is focused on testing the metadata repository for search and retrieval rather than the actual retrieval of a document. The test harness can link to content without the delay of ingesting the content through the API.
- Concurrent User Testing – TSG built JMeter test plans and ran them concurrently on AWS EC2 instances. Each test plan was built to replicate users performing case management actions against the claim data sets loaded in the 11 Billion benchmark.
- Amazon Web Services – TSG’s partnership with AWS smooths the way to simulating massive scale for clients without having to procure production-scale infrastructure on-premises or within their own cloud environments. For our benchmark, we were able to procure a 96-CPU environment that could process 20,000 documents/second.
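To make the sample-data and loading components above more concrete, the sketch below generates unique, non-confidential synthetic metadata records and chunks them into batches of 25 items, which is DynamoDB's `batch_write_item` limit. The field names and the S3-style content pointer are illustrative assumptions, not TSG's actual document model; linking each record to an existing storage location is what lets the harness avoid pushing content bytes through a repository API.

```python
import itertools

def synthetic_records(count, start=0):
    """Generate unique, non-confidential claim-style metadata records.

    The field names and s3:// content pointer are illustrative only --
    a real harness would use the client's document model. Pointing at
    an existing object-store location avoids ingesting content bytes
    through the repository API during the load.
    """
    for i in range(start, start + count):
        yield {
            "claim_id": f"CLM-{i:011d}",           # unique key per document
            "address": f"{i} Example Street",       # synthetic, unique value
            "content_ptr": f"s3://test-bucket/docs/{i}.pdf",
        }

def batches(records, size=25):
    """Chunk records into groups of 25, the DynamoDB batch_write_item
    maximum; each batch could be handed to boto3 for writing."""
    it = iter(records)
    while True:
        batch = list(itertools.islice(it, size))
        if not batch:
            return
        yield batch

# Example: 60 records split into batches of 25, 25 and 10.
sizes = [len(b) for b in batches(synthetic_records(60))]
```

A loader would iterate the batches across many parallel workers; the 25-item ceiling is a DynamoDB constraint, while the batch-and-parallelize pattern applies equally to Elasticsearch bulk indexing.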
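The concurrent-user testing described above was implemented with JMeter test plans; as a rough stdlib-only sketch of the same idea, the snippet below runs several simulated users in parallel, each performing a series of lookups and recording latencies. The `lookup` function is a hypothetical stand-in for a real metadata query, not an actual repository call.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def lookup(claim_id):
    """Hypothetical stand-in for a real metadata search (e.g. a
    DynamoDB or Elasticsearch query); replace with an actual call."""
    time.sleep(0.001)  # simulate a fast indexed lookup
    return {"claim_id": claim_id}

def simulate_user(user_id, actions=10):
    """One simulated user performing a series of case-management
    lookups, returning the observed latency of each action."""
    latencies = []
    for i in range(actions):
        start = time.perf_counter()
        lookup(f"CLM-{user_id:04d}-{i:04d}")
        latencies.append(time.perf_counter() - start)
    return latencies

# Run 8 concurrent "users", then collect every observed latency.
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(simulate_user, range(8)))
all_latencies = [t for user in results for t in user]
worst = max(all_latencies)
```

In a real test the worker count and action mix would be scaled up across EC2 instances, and percentile latencies (not just the worst case) would be reported.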
Leveraging our experience and the Test Harness, TSG can simulate production volumes and retrieval patterns with AWS quickly and safely without delaying the main development and migration activities.
Volume testing large ECM repositories can be difficult, time consuming and expensive, and often not worth the effort, as simulating production usage of ECM doesn’t always catch the issues that arise from actual production usage patterns. For clients looking to quickly simulate a production environment, TSG can leverage our experience and tools from our 11 Billion Document benchmark to quickly simulate large volumes on AWS, avoiding the delay and costs of a typical on-premise volume test.