Documentum High Volume Server (HVS) is a new product designed to cut database space usage in Documentum 6.5 by a third to a half, depending on the type of content. Given the significantly reduced database size, overall performance should increase. This year TSG evaluated HVS for a client as part of a Documentum upgrade. (See other thoughts in our Documentum Upgrade Planning Guide.)
HVS – When to use it
HVS was developed to efficiently store static, immutable content and metadata. A good example is scanning/imaging, but COLD and any other content and metadata that will never change make sense as well. Content stored using HVS should not need to be versioned, rendered, annotated, or changed; otherwise, HVS converts the object from a lightweight object back to a normal Documentum object and the benefits of HVS are lost. Examples of content that are ideal for HVS include reports, invoices, check images, documents archived for historical purposes and reference, and emails.
HVS – How it works
HVS reduces the size of the database by sharing security and common metadata amongst a set of lightweight objects. HVS can also partition the database to increase the rate at which content can be stored and retrieved. To achieve these benefits, some limitations are placed on the content. First, security is applied broadly to a lightweight object type: all documents of a lightweight type are available to every user who can access the type, even if a user only needs access to a portion of the documents. In other words, HVS cannot support normal object-level ACL security, so security may need to be built into the application layer. The other limitation, as already mentioned, is that documents cannot be versioned or changed.
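As a conceptual sketch of where the savings come from (plain Python, not DFC code; the class and attribute names are hypothetical), many lightweight objects point at one shared parent that holds the ACL and common metadata exactly once:

```python
# Conceptual sketch of the lightweight-object model (not DFC code).
# One shared parent holds the ACL and common metadata; each lightweight
# object stores only the attributes that vary per document.

from dataclasses import dataclass

@dataclass(frozen=True)
class SharedParent:
    acl_name: str      # security applied once for the whole set
    batch_date: str    # common metadata shared by all children

@dataclass(frozen=True)
class LightweightObject:
    parent: SharedParent   # reference, not a copy
    invoice_no: str        # per-document attribute

parent = SharedParent(acl_name="invoice_readers", batch_date="2009-06-30")
docs = [LightweightObject(parent, f"INV-{n:05d}") for n in range(10_000)]

# Every document resolves its security through the shared parent, so the
# ACL and common metadata are stored once rather than 10,000 times.
assert all(d.parent is parent for d in docs)
print(docs[0].parent.acl_name)   # -> invoice_readers
```

This also makes the security limitation above concrete: because the ACL lives on the shared parent, every document in the set inherits the same access list.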
If you need to make large volumes of content available in near real time, the rapid ingestion feature of HVS may be of interest. Using special HVS DFC functions, applications can load raw database tables containing the metadata for your lightweight object types. This is very different from typical DFC applications, which work strictly through the Documentum object layer. To use rapid ingestion, a custom program is necessary (Documentum does not currently have any tools that support this, including Captiva), and the DBA will need to partition the database tables. The partitioning allows the data to be loaded into "offline" Documentum tables, which are then swapped with empty placeholder tables, making the new documents available while the Content Server stays up and running.
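The load-then-swap idea can be illustrated with a plain Python sketch (the table names are hypothetical, and this is an analogy, not HVS code; on the database side the swap is typically a partition-exchange operation performed by the DBA):

```python
# Conceptual sketch of rapid ingestion via table swap (not actual HVS code).
# Rows are bulk-loaded into an offline staging table, then the staging
# table is exchanged with an empty placeholder in one atomic step, so
# readers of the live table never observe a partially loaded batch.

tables = {
    "dm_invoice_live": {},      # what running queries see (hypothetical name)
    "dm_invoice_staging": {},   # offline load target (hypothetical name)
}

def bulk_load(rows):
    """Load metadata rows into the offline staging table."""
    tables["dm_invoice_staging"].update(rows)

def swap():
    """Atomically exchange staging and live; staging becomes empty again."""
    tables["dm_invoice_live"], tables["dm_invoice_staging"] = (
        tables["dm_invoice_staging"], {},
    )

bulk_load({f"INV-{n}": {"amount": n * 10} for n in range(3)})
swap()
print(sorted(tables["dm_invoice_live"]))   # -> ['INV-0', 'INV-1', 'INV-2']
```

The design point is that the expensive load happens entirely offline; the only operation visible to the live system is the cheap, near-instant swap.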
With a partitioned database, the HVS DFC also offers new functions to scope searches to particular database partitions. This can be handy when the system is very large and the user community is experiencing unacceptable metadata search times.
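Why partition scoping helps can be shown with another plain Python sketch (partition and attribute names are hypothetical; the actual mechanism is the HVS DFC calls, not this code):

```python
# Conceptual sketch of partition-scoped search (not DFC code).
# Scoping a query to one partition avoids scanning every partition.

partitions = {
    "p2008": [{"invoice_no": "INV-1", "year": 2008}],
    "p2009": [{"invoice_no": "INV-2", "year": 2009},
              {"invoice_no": "INV-3", "year": 2009}],
}

def search(predicate, scope=None):
    """Search only the named partitions (or all when no scope is given)."""
    names = scope or partitions.keys()
    return [row for name in names
            for row in partitions[name] if predicate(row)]

# Unscoped: every partition is scanned.
assert len(search(lambda r: True)) == 3

# Scoped: only the 2009 partition is touched.
print([r["invoice_no"] for r in search(lambda r: True, scope=["p2009"])])
# -> ['INV-2', 'INV-3']
```

If the application already knows which partition holds the answer (for example, documents partitioned by archive year), scoping turns a scan of the whole docbase into a scan of one slice.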
Where to go next
When considering HVS, users should keep the following points in mind:
- Cost of HVS (will vary by installation)
- Performance benefits versus normal database tuning
- Ingestion program development, since this requires custom HVS DFC calls
In relation to the ingestion process, TSG has added HVS support to OpenMigrate to help clients ingest new content as well as move existing content to HVS. One benefit of this approach is that a single tool can be used for ongoing ingestion of new content while also supporting movement of existing content within the docbase (e.g., archived items).
With our client, the proof of concept went well, but the client had not realized that HVS required additional cost and licensing. In weighing the benefits against the cost, the benefits did not outweigh the additional database and Documentum support requirements, and the client did not move forward with HVS.