• Jeff Moden - Wednesday, February 14, 2018 6:43 AM

    aureolin - Friday, February 9, 2018 10:37 AM

    We're in the process of a data migration from one LOB tool to another over this weekend. Part of the migration is moving just under a terabyte's worth of documents from files on disk to blobs in a database (SQL 2014). The destination system is a locally hosted VM with enough disk space on one data "disk drive", but I'm looking for recommendations on how to architect the destination database. Considerations are ease of management, ease of taking backups, etc. FWIW, the individual files are generally small; many are one- to two-page PDF files. Yes, I do know this is last minute, but this is my only chance to "get it right" from the start.

    Suggestions, comments, advice all welcome.

    Steve G.
    County of San Luis Obispo

    Once added, do the documents change?  New ones might get added, but I'm thinking that existing documents are NOT going to change.  Backing up stuff that won't ever change more than once is a bit insane.  All of this can be handled fairly gracefully so that daily backups measure in minutes (I have a similar setup in a terabyte telephone system where all call recordings are stored in the database) and a "get back in business DR restore" can be accomplished in similar time.
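    One way that fits: put the static documents on their own filegroup and mark it read-only once the load is done, so it only ever has to be backed up once. A minimal sketch (the database name DocArchive, the filegroup StaticDocs, the table dbo.Document, and the file paths are all hypothetical):

        -- Give the static documents their own filegroup and data file.
        ALTER DATABASE DocArchive ADD FILEGROUP StaticDocs;
        ALTER DATABASE DocArchive ADD FILE
        (
            NAME     = 'StaticDocs01',
            FILENAME = 'D:\Data\DocArchive_StaticDocs01.ndf',
            SIZE     = 900GB
        ) TO FILEGROUP StaticDocs;

        -- Put the table AND its LOB data on that filegroup.
        CREATE TABLE dbo.Document
        (
            DocumentID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
            [FileName] NVARCHAR(260)     NOT NULL,
            Content    VARBINARY(MAX)    NOT NULL
        ) ON StaticDocs TEXTIMAGE_ON StaticDocs;

        -- After the one-time load, freeze the filegroup and back it up ONCE.
        ALTER DATABASE DocArchive MODIFY FILEGROUP StaticDocs READ_ONLY;
        BACKUP DATABASE DocArchive
            FILEGROUP = 'StaticDocs'
            TO DISK = 'E:\Backup\DocArchive_StaticDocs.bak'
            WITH COMPRESSION, CHECKSUM;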

    Heh... and forget about striped backups... unless you have the right pipe and the right disk configuration, they'll actually slow things down and complicate restores.  There's a much better way if the documents become static once loaded into the system.
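    With the static documents frozen like that, the daily backup becomes a partial backup: READ_WRITE_FILEGROUPS automatically skips every read-only filegroup, so the daily job only touches the data that can actually change. Same hypothetical names as above:

        -- Daily backup: only the filegroups that can still change.
        -- The terabyte of frozen documents is never read again.
        BACKUP DATABASE DocArchive
            READ_WRITE_FILEGROUPS
            TO DISK = 'E:\Backup\DocArchive_ReadWrite.bak'
            WITH COMPRESSION, CHECKSUM;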

    Before we can continue, though, which version and edition of SQL Server do you have?
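    The reason I ask is that the "get back in business DR restore" I mentioned is a piecemeal restore, and doing it online, while the rest of the database stays available, is an Enterprise Edition feature. A rough sketch with the same hypothetical names, assuming the FULL recovery model and that a tail-log backup was taken first:

        -- Step 1: bring back PRIMARY and the read-write filegroups first.
        RESTORE DATABASE DocArchive
            READ_WRITE_FILEGROUPS
            FROM DISK = 'E:\Backup\DocArchive_ReadWrite.bak'
            WITH PARTIAL, NORECOVERY;

        -- Step 2: roll forward and open the database for business.
        RESTORE LOG DocArchive
            FROM DISK = 'E:\Backup\DocArchive_Log.trn'
            WITH RECOVERY;

        -- Step 3: restore the big read-only archive afterward, online,
        -- while users are already back at work in the rest of the database.
        RESTORE DATABASE DocArchive
            FILEGROUP = 'StaticDocs'
            FROM DISK = 'E:\Backup\DocArchive_StaticDocs.bak'
            WITH RECOVERY;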

    I have SQL 2014 Enterprise running the database. Generally, there's a large set of documents that are not going to change; going forward, there may be a subset of documents that will change, though this remains to be seen.