Changing disk arrays for DBs but keeping the same drive letters!

  • We run in Amazon EC2, but I am sure the principle is the same.

    Currently the system DBs are on C:\, the logs on D:\, user data on E:\, and tempdb on F:\.

    SQL is installed in the default C:\ location.

    We need to move the logs and data onto new RAID arrays but keep the same drive letters (so SQL can restart with no reconfiguration!).

    So the plan is (a rough diskpart sketch of the reletter step follows the list):

    Two new RAID arrays are going to be built and named X: and Q:

    SQL Service is stopped via SQL Configuration Manager

    All data on D: is copied to X:

    All data on E: is copied to Q:

    The original D: will be changed to S:

    The original E: will be changed to T:

    X: will be changed to D:

    Q: will be changed to E:

    Restart SQL from SQL Configuration Manager
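    For the reletter step, diskpart can be driven from a script so the swap is quick and repeatable. Below is a minimal sketch, assuming Python is available on the server; the volume numbers are placeholders that must be checked against diskpart's "list volume" output on the actual box before running anything:

      import subprocess
      import tempfile

      # Diskpart commands for the swap. Volume numbers 2-5 are PLACEHOLDERS:
      # confirm them with "list volume" on the real server first.
      DISKPART_SCRIPT = """\
      select volume 2
      remove letter=D
      assign letter=S
      select volume 3
      remove letter=E
      assign letter=T
      select volume 4
      remove letter=X
      assign letter=D
      select volume 5
      remove letter=Q
      assign letter=E
      """

      # diskpart /s <file> runs the commands from a script file
      # (must be run from an elevated prompt).
      with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
          f.write(DISKPART_SCRIPT)
          script_path = f.name

      subprocess.run(["diskpart", "/s", script_path], check=True)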

    We will try to do a test run if we can, but I thought I would see if anybody can spot any issues.

    I can't see anything that needs changing as long as the right files are copied to the right drives and the letters are changed before SQL is restarted.

    The only thing I can think of is if SQL takes some ID from the underlying disks/volumes, recognises the new D:\ as a different volume, and spits the dummy!
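    (As far as I know, SQL Server opens database files by the paths recorded in sys.master_files rather than by any volume ID, so if every path still resolves after the reletter the restart should be clean.) One way to be sure is to list the paths SQL expects and check that each exists; a minimal sketch, assuming Python with pyodbc and a trusted connection to the local default instance, run before the shutdown and again after the restart:

      import os
      import pyodbc

      # Connection string is illustrative; adjust the driver and server
      # name for your instance.
      conn = pyodbc.connect(
          "DRIVER={ODBC Driver 17 for SQL Server};"
          "SERVER=localhost;Trusted_Connection=yes;"
      )

      # Every file path SQL Server has recorded, for every database.
      rows = conn.cursor().execute(
          "SELECT DB_NAME(database_id), name, physical_name"
          " FROM sys.master_files"
      ).fetchall()

      for db, logical_name, path in rows:
          status = "OK" if os.path.exists(path) else "MISSING"
          print(f"{status:8}{db:20}{logical_name:25}{path}")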

    thanks

  • Don't forget to get the security settings right for the data folders where the data/t-log files reside.

    If you just do a plain copy, no security settings are moved. So have a look at the security settings on the source folder and make sure that the destination folder has the same security settings set up.
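    Robocopy can carry the NTFS security across in the same pass as the data. A minimal sketch of that idea, assuming Python and an elevated prompt on the server; the folder paths are placeholders:

      import subprocess

      # /MIR mirrors the tree; /COPYALL copies data, attributes,
      # timestamps, NTFS security, owner and auditing info.
      # Source and destination paths are placeholders.
      result = subprocess.run(
          ["robocopy", r"D:\SQLLogs", r"X:\SQLLogs",
           "/MIR", "/COPYALL", "/R:2", "/W:5"],
          capture_output=True, text=True,
      )

      # Robocopy exit codes 0-7 mean success; 8 or above means failures.
      if result.returncode >= 8:
          raise RuntimeError("robocopy reported failures:\n" + result.stdout)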

  • Hi,

    I've done this several times before, the way you described, and not had any problems yet.

  • This should work. We have had to do the same process to increase the size of some of our AWS EBS disk sets.

    Also, for AWS we have decided it is easiest to manage our EBS volumes if we do not stripe a Windows volume over multiple EBS disks, but instead have a 1-to-1 mapping between Windows volumes and EBS disks. Where we need more I/O than EBS provides by default, we use Provisioned IOPS.

    This approach has the advantage that if we need to increase volume size we can use an EBS Snapshot to copy the data and folder structure to a new EBS volume, which is a lot easier than rebuilding a Windows stripe set. Also, if we over-provision or under-provision the IOPS it is a lot easier to change when doing things this way.

    The only downside so far is this gives a 1TB limit on volume size, which has meant in a few cases we had to relocate the data files of a large database to a new drive letter. However, the simplification of management from using a 1-to-1 mapping made the effort of relocating the database worthwhile.
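    As a rough illustration of the snapshot-based copy, assuming boto3; the region, availability zone, volume ID, size and IOPS figures are all placeholders:

      import boto3

      ec2 = boto3.client("ec2", region_name="ap-southeast-2")  # placeholder region

      # Snapshot the existing data volume (placeholder volume ID).
      snap = ec2.create_snapshot(
          VolumeId="vol-0123456789abcdef0",
          Description="Pre-resize copy of SQL data volume",
      )
      ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

      # Create a larger Provisioned IOPS volume from the snapshot; it must
      # be in the same availability zone as the instance. Attach it and
      # swap the drive letter in Windows as discussed above.
      new_vol = ec2.create_volume(
          SnapshotId=snap["SnapshotId"],
          AvailabilityZone="ap-southeast-2a",
          Size=500,            # GiB, larger than the original
          VolumeType="io1",
          Iops=3000,
      )
      print("New volume:", new_vol["VolumeId"])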

    Original author: SQL FineBuild (https://github.com/SQL-FineBuild/Common/wiki/), 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • The change was done as described and worked without any issues.

    EdVassie, as you also run some DBs in EC2, hopefully you will not mind if I ask you a question.

    How do you back up data in EC2?

    One of our servers creates about 130GB a day in backups.

    We plan to back up locally, then move the files to S3 after the backups complete (keeping n days on an attached drive and x days on S3, with n and x being whatever the business decides), then later on to Glacier (Glacier is not yet available in the Sydney region).

    And this does not even take into account the Log Backups.

    We do full backups and I don't really want to go to differentials unless I have to.

    The issue is the time to move the files, the zip step if required, the resource overhead of moving and zipping, and the cost of the space to store n and x days of backups.
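    For illustration, the move-and-prune step we have in mind might look something like the sketch below, assuming Python with boto3; the bucket name, backup path and retention value are placeholders. (Native backup compression, BACKUP ... WITH COMPRESSION, would also shrink the files before they ever hit the network, which might remove the zip step entirely.)

      import boto3
      from datetime import datetime, timedelta
      from pathlib import Path

      BACKUP_DIR = Path(r"G:\SQLBackups")   # placeholder local backup drive
      BUCKET = "example-sql-backups"        # placeholder bucket name
      LOCAL_RETENTION_DAYS = 3              # the business's "n" days

      s3 = boto3.client("s3")

      # Upload every backup file; boto3 switches to multipart transfers
      # automatically for large files.
      for bak in BACKUP_DIR.glob("*.bak"):
          s3.upload_file(str(bak), BUCKET, f"sql-backups/{bak.name}")

      # Prune local copies older than the retention window.
      cutoff = datetime.now() - timedelta(days=LOCAL_RETENTION_DAYS)
      for bak in BACKUP_DIR.glob("*.bak"):
          if datetime.fromtimestamp(bak.stat().st_mtime) < cutoff:
              bak.unlink()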

    So do you normally use native MSSQL backups, or do you use a third-party tool to back up and compress, such as CloudBerry Backup for SQL Server?

    Do you compress?

    Of course your answer may depend on the database sizes you have in Amazon, but it would be interesting to know.

    thanks

  • We use Amanda Enterprise to back up directly to S3 (no intermediate staging).

    This takes our standard SQL backups and all the non-DB files that need backup and puts them all on to S3.

    We were one of the early customers needing to back up large volumes to S3 in a Windows environment, and it took a while to get the throughput we needed, but once that was sorted Amanda has been stable and reliable (and MUCH cheaper than the usual suspects for enterprise backup). It should easily cope with the data volumes you mention; we back up more than that.

    Be careful about using Glacier. The cost algorithm for data retrieval IMHO does not match the Amazon promise of 'pay for what you use', and can give very high costs. If you need to restore a 2TB file and ask for it in 50GB chunks over 2 days you can get the restore done for free, but if you ask for the whole 2TB in one go it can cost thousands. We are waiting for a more friendly restore cost algorithm before using Glacier again.
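    For illustration, chunking a restore so the peak retrieval rate stays low might look like the sketch below under that pricing model, assuming Python with boto3; the region, vault name and archive ID are placeholders:

      import boto3

      glacier = boto3.client("glacier", region_name="us-east-1")  # placeholder

      ARCHIVE_SIZE = 2 * 1024**4            # a 2 TB archive
      CHUNK = 50 * 1024**3                  # 50 GB per retrieval job
      VAULT = "example-vault"               # placeholder vault name
      ARCHIVE_ID = "EXAMPLE-ARCHIVE-ID"     # placeholder archive id

      # Megabyte-aligned byte ranges covering the whole archive.
      ranges = [(start, min(start + CHUNK, ARCHIVE_SIZE) - 1)
                for start in range(0, ARCHIVE_SIZE, CHUNK)]

      # Submit only the first day's share of jobs; schedule the rest for
      # later days so the retrieval rate stays inside the free allowance.
      for start, end in ranges[:len(ranges) // 2]:
          glacier.initiate_job(
              accountId="-",   # "-" means the account owning the credentials
              vaultName=VAULT,
              jobParameters={"Type": "archive-retrieval",
                             "ArchiveId": ARCHIVE_ID,
                             "RetrievalByteRange": f"{start}-{end}"},
          )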

    Original author: SQL FineBuild (https://github.com/SQL-FineBuild/Common/wiki/), 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

