Documentum Upgrade to 6.5 – Client experience with Migration versus a Database Clone and In-Place Upgrade


October 7, 2009

I had an interesting discussion today with a large hospital client that is currently upgrading to Documentum 6.5.  Like most of our Documentum customers, the client has chosen to refresh hardware, database, and disaster recovery tools as well as upgrade the operating system (from Solaris 9 to Solaris 10).  Initially the client was targeting a database clone approach with an upgrade in place for roughly 1.4 million documents.

As referenced in a previous blog post, the client found the clone approach challenging, to say the least. Issues they encountered included:

  • People – The client needed five people to “figure it out,” including a DBA, Unix Admin, Storage Admin, Documentum Administrator, and Project Manager.
  • Downtime – The team estimated that “hopefully” the Documentum system would only be down for 2-3 days.
  • Effort – The client thought the 5-person team would need at least 2-3 weeks to “figure it all out” in order to complete the migration accurately during the 2-3 days of downtime.

While talking to TSG about OpenMigrate, the client decided to try a 2-day proof of concept using OpenMigrate for a migration approach rather than the clone and in-place upgrade.  The proof of concept was successful in handling 90% of the client’s needs, and the client’s IT resources were able to address the remaining needs on their own after the POC with minimal phone support from TSG.  Benefits of the migration approach included:

  • Reduced Downtime – Because the migration approach allowed “chunks” of archive documents to be moved without bringing the current system down, downtime was reduced from 2-3 days to 2-3 hours (see the sketch after this list).
  • Reduced Risk – With less downtime, plus the ability to check the integrity of the new system against the archive documents, the risk of failure was substantially reduced.  Also, the fallback approach was simply to keep using the old hardware and migrate any newly scanned content.  With the clone approach, fallback was much more complex and might have involved rescanning content.
  • Reduced Development Environment Size – With the clone approach, the client had previously maintained a development environment the same size as production.  By using OpenMigrate going forward, the client was able to leverage a significantly smaller development environment and migrate only subsets of the production docbase.
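
To make the “chunks” idea from the downtime bullet above more concrete, here is a minimal sketch of how archive documents might be carved into date-range batches and moved while the source system stays online. The DQL object type and attribute (dm_document, r_creation_date) are standard Documentum names, but the run_migration_job helper, the dates, and the chunk size are hypothetical placeholders rather than actual OpenMigrate configuration.

```python
from datetime import date, timedelta

# Hypothetical stand-in for kicking off a migration job over one batch of documents;
# in practice this would be an OpenMigrate job configuration, not a Python call.
def run_migration_job(dql_query: str) -> None:
    print(f"Submitting migration job for: {dql_query}")

def migrate_archive_in_chunks(start: date, end: date, chunk_days: int = 30) -> None:
    """Move archive documents in date-range chunks so the source system stays online."""
    chunk_start = start
    while chunk_start < end:
        chunk_end = min(chunk_start + timedelta(days=chunk_days), end)
        lo = chunk_start.strftime("%m/%d/%Y")
        hi = chunk_end.strftime("%m/%d/%Y")
        # Select only the archive documents created in this window (standard DQL).
        dql = (
            "SELECT r_object_id FROM dm_document "
            f"WHERE r_creation_date >= DATE('{lo}', 'mm/dd/yyyy') "
            f"AND r_creation_date < DATE('{hi}', 'mm/dd/yyyy')"
        )
        run_migration_job(dql)
        chunk_start = chunk_end

# Example: move five years of archive content in monthly chunks, leaving only
# recently created documents for the short cutover window.
migrate_archive_in_chunks(date(2004, 1, 1), date(2009, 9, 1))
```

Because each chunk is defined by a query, the same batches can be re-run for verification or fallback without touching the rest of the repository.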

Filed Under: D6.5, Documentum, OpenMigrate, Upgrades


Comments

  1. Anonymous says

    May 10, 2010 at 7:42 am

    Hello,
    I used OpenMigrate to migrate 1 million documents, and it took me a month or so. I created chunks of data to migrate. The migration was smooth and effective, with no issues. But now I have a task at hand and have to migrate over 2 TB of data. I have estimated around 7 months to do it using OpenMigrate. The client does not want to go beyond two months. That made me think of an alternative to OpenMigrate that is faster, such as cloning the database, copying the file stores, and installing the Content Server using the same config, aek.key, etc.

    • Todd Pierzina says

      May 11, 2010 at 1:47 pm

      I’m glad to hear you were able to achieve success with OpenMigrate! We’ve found several cases where folks have used OM without our assistance, but yours is by far the largest migration we’ve heard of.

      In our experience though, OpenMigrate itself adds negligible overhead in large migrations. If there’s slow disk, slow network, slow database or slow TBOs/Lifecycle code, any migration product would have the same performance limitations, right? In fact, the multithreading offered by OM can often help offset some infrastructure drawbacks.

      In one of our most recent migration, File System/Database to Documentum running on a very old HP-UX machine, our average performance has been 10 docs per second, or just under 1M documents per day. Assuming an average doc size of 150K, that would be (roughly) just under 2 months of constant ingestion for 2+ TB, assuming a similar throughput. It looks like your throughput was less than ours, leading to the 7 month estimate. Or were there other factors driving out the end date?

      But you raise a good point: With a very large migration like this, the approach you lay out, a basic database clone, is often the preferable option. Not every system migration calls for an actual “migration”; sometimes a clone is the lowest-risk, fastest way to go, especially if you want to simply move everything. TSG has performed a number of these types of migrations. I’m not familiar with any product that performs this type of move; instead it’s a fairly manual and technical process, relying on a DBA to do most of the “heavy lifting”.

      A colleague and I have looked at what it would take to modify OM to do something similar to what you suggest, but possibly even better: migrate a 1-byte file for each document, then “swap in” the appropriate content files via straight filesystem copy. This way we could migrate a slice of the repository, do reorganization and cleanup, etc., very quickly; but instead of streaming content into Documentum (slow), we’d “poke” the content in there post-save. We assume a filesystem-level transfer would be faster than a stream.
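
To visualize the two-phase idea described in the comment above, here is a minimal sketch of the “swap in” step. This is a hedged illustration rather than OpenMigrate functionality: the object ID, the file store paths, and the source of the placeholder-to-content mapping are all assumptions.

```python
import shutil
from pathlib import Path

# Hypothetical mapping from a migrated object ID to (placeholder file in the new
# file store, real content file on the old file store). In practice this mapping
# would have to come from the migration tool's own logs or reports.
placeholder_map = {
    "0900000180001a2b": (
        "/new_filestore/content_storage_01/00000001/80/00/1a.txt",
        "/old_filestore/content_storage_01/00000001/80/00/1a.txt",
    ),
}

def swap_in_content(object_id: str) -> None:
    """Replace the 1-byte placeholder with the real content via a filesystem copy."""
    placeholder_path, source_path = placeholder_map[object_id]
    if Path(placeholder_path).stat().st_size > 1:
        return  # already swapped in; nothing to do
    # Straight filesystem copy instead of streaming content through the Content Server.
    shutil.copy2(source_path, placeholder_path)

for oid in placeholder_map:
    swap_in_content(oid)
```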

  2. Anonymous says

    May 12, 2010 at 12:40 pm

    Yes, that’s a good idea to save 1-byte files as placeholders. This will allow us to migrate chunks of data without worrying about the data size. The swap-in will still consume time if it happens on every post-save of a document, even if it’s a file copy, though it will be far better than streaming content. I was thinking of an event-based post-save, i.e. asynchronous: when OM migrates the document, it posts an event that has the source file location and the target file location. Then have a different process pick up that event and migrate the files using a simple copy. But if something goes wrong with the file copy, we should have a mechanism to report it back.
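
As a rough sketch of the asynchronous, event-based copy described above, the consumer side might look like the following. An in-process queue stands in for whatever event mechanism the migration tool would actually post to; the event format, the paths, and the error-reporting hook are all assumptions.

```python
import logging
import queue
import shutil
import threading

logging.basicConfig(level=logging.INFO)

# Each event carries the source and target file locations, as described above.
copy_events: "queue.Queue" = queue.Queue()

def copy_worker() -> None:
    """Consume copy events asynchronously and swap content in with a simple file copy."""
    while True:
        event = copy_events.get()
        if event is None:  # sentinel meaning the migration has finished posting events
            copy_events.task_done()
            break
        source, target = event
        try:
            shutil.copy2(source, target)
        except OSError as exc:
            # Report failures back so they can be retried or reconciled later.
            logging.error("Copy failed: %s to %s (%s)", source, target, exc)
        finally:
            copy_events.task_done()

worker = threading.Thread(target=copy_worker)
worker.start()

# The migration process would post one event per document after its metadata save:
copy_events.put(("/old_filestore/00/00/1a.txt", "/new_filestore/00/00/1a.txt"))
copy_events.put(None)  # shut the worker down once migration is complete
worker.join()
```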

  3. Paras Jethwani says

    July 10, 2013 at 12:33 am

    Will this swap-in approach work if CSS content de-duplication is enabled? In that case, even the hash values will have to be updated to reflect the target file that replaces the 1-byte file.

    Paras

    • chris3192 says

      July 10, 2013 at 7:44 pm

      Hi Paras – We have not tried the swap-in approach in an environment with CSS and de-duplication. We recently did a clone migration upgrade with a client using Centera. Once the system was cloned and upgraded, it was attached to the Centera device and the content was accessible. Is the assumption that the content is currently not on Centera and the migration would move it to Centera?

      • Paras Jethwani says

        July 11, 2013 at 3:40 am

        Hi Chris,

        I was referring to a scenario where content is stored on NAS/SAN (as opposed to Centera) in the source and target repositories, and a 1-byte file approach is used to migrate content from the old repository to a new repository.

        If CSS de-duplication is enabled in the target repository, then swapping the 1-byte dummy file with the actual content file in the background will perhaps also require the CSS hash value to be updated for that object in the target repository?

        Hope this clarifies my comment and question.

        • chris3192 says

          July 11, 2013 at 12:15 pm

          Hi Paras – With Content Storage Services (CSS), I’m only familiar with the content migration jobs. I am not familiar with file de-duplication set up at the Content Server level, only at the storage management and hardware level. As such, I’m not comfortable predicting with a high level of accuracy what would happen with the swap-in. I can imagine that if any de-duplication is in place, then the hash values would need to be recalculated when the true file is swapped in, as you say. The swap-in also uses the folder path or storage location known to Documentum for the document, so if any of that is altered by CSS from what is “normally” found, then additional processing would be necessary to determine where the true content file should be stored.


