SQL Server Cloud Migration Part I

  • Edward.Polley 76944

    SSC Eights!

    Points: 834

    Comments posted to this topic are about the item SQL Server Cloud Migration Part I

  • Andy Warren

    SSC Guru

    Points: 119676

    Edward, I'm not sure I follow on the velocity calculation - it looks like you're only measuring new rows and not changes to existing rows?

  • TGwinn

    SSC Veteran

    Points: 250

    Andy Warren (9/23/2013)


    Edward, I'm not sure I follow on the velocity calculation - it looks like you're only measuring new rows and not changes to existing rows?

    I was wondering the same thing.

  • Edward.Polley 76944

    SSC Eights!

    Points: 834

    Hi Andy:

Yes - ours is a high-insert OLTP environment, and the percentage of updates on our systems is low, so for this migration it didn't matter. But tracking updates is a good point. If I had to do that, I would examine the 'last_update' column on critical tables; failing that, I'd look at sampling traces.
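    A minimal sketch of the last_update approach Edward describes. The table name `dbo.Orders` and its `last_update` column are assumptions for illustration, not from the post - substitute one of your own critical tables that carries such a column:

    ```sql
    -- Hedged sketch: dbo.Orders and last_update are placeholder names.
    -- Counts rows touched in the last N hours as a rough update velocity.
    DECLARE @window_hours int = 24;

    SELECT
        COUNT(*) AS rows_updated_in_window,
        COUNT(*) * 1.0 / @window_hours AS updates_per_hour  -- rough velocity
    FROM dbo.Orders
    WHERE last_update >= DATEADD(HOUR, -@window_hours, GETDATE());
    ```

    Run it at a few different times of day to see whether update volume is actually significant enough to affect your sync plan.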

  • JayK

    SSCrazy

    Points: 2679

    Hey Edward,

    I am currently upgrading our SQL Server 2008 R2 environment to SQL Server 2012 on all new Windows 2012 servers, so I found this very interesting - it sounds like a very well managed migration.

    Can I just ask: if you weren't keeping track of updates, how did you mitigate against lost data? Even if the number of updates is small in our DBs, if you were only syncing new records, how did you get by without capturing the history of the updates?

    Cheers,

    JayK

  • Edward.Polley 76944

    SSC Eights!

    Points: 834

In the process of analyzing the data I focused on inserts/deletes, as that is how our systems are designed. In the actual migration, on the critical high-volume DBs, I synched all data using full backup/restore + tlogs. These critical DBs are where the business makes money, so we couldn't lose any data. On the less critical or lower-volume tables I used a one-time backup/restore weeks in advance of the migration, followed by Redgate Data Compare scripts - DC allows you to customize what data to synchronize. These scripts are the key, as they allow you to set up jobs to synch hundreds of tables. No data was lost, other than several tlogs that failed to restore at 2 am. :-)
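    The full backup/restore + tlog sequence Edward mentions for the critical DBs looks roughly like this. Database and file path names here are placeholders, not from the post:

    ```sql
    -- On the source server: full backup, then periodic log backups until cutover.
    BACKUP DATABASE SalesDB TO DISK = N'\\share\SalesDB_full.bak';
    BACKUP LOG SalesDB TO DISK = N'\\share\SalesDB_log1.trn';

    -- On the target: restore the full backup WITH NORECOVERY so the database
    -- stays in a restoring state and keeps accepting log restores.
    RESTORE DATABASE SalesDB FROM DISK = N'\\share\SalesDB_full.bak'
        WITH NORECOVERY;
    RESTORE LOG SalesDB FROM DISK = N'\\share\SalesDB_log1.trn'
        WITH NORECOVERY;

    -- After the final log backup at cutover, bring the database online:
    RESTORE DATABASE SalesDB WITH RECOVERY;
    ```

    The key design point is WITH NORECOVERY on every restore except the last: it lets you keep rolling tlogs forward right up to the cutover window, so the outage is only as long as the final log backup/restore.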

    Note that tables without primary keys or unique indexes need a bit of testing in Redgate to ensure they are coded correctly.
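    To find the tables Edward warns about - those with neither a primary key nor a unique index, which Data Compare cannot match rows on without extra configuration - a query along these lines (a sketch using the standard catalog views) works:

    ```sql
    -- List user tables that have neither a primary key nor a unique index;
    -- these need an explicit comparison key set up and tested in Data Compare.
    SELECT s.name AS schema_name, t.name AS table_name
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE NOT EXISTS (
        SELECT 1
        FROM sys.indexes AS i
        WHERE i.object_id = t.object_id
          AND (i.is_primary_key = 1 OR i.is_unique = 1)
    );
    ```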

    I spent a lot of time analyzing our data insert/update/delete/select pattern before deciding updates weren't a significant part of our production process. Your site might be very different.

Recommend you plan for missing data and let management know what to expect. It took about a week or so to clean up security, indexes, and some less critical reporting tables - but the business was up and making money in just a few hours, and that is what I was after.

  • JayK

    SSCrazy

    Points: 2679

    Hey Ed - OK, that makes sense. Thanks for sharing your experience with us - definite food for thought for my own work :-)
