Changing disk arrays for DBs but keeping same drive letters!
Posted Wednesday, July 31, 2013 5:08 PM
SSC Journeyman


Group: General Forum Members
Last Login: Yesterday @ 4:02 PM
Points: 94, Visits: 483
We run in Amazon EC2, but I am sure the principle is the same.

Currently the system db's are on C:\ , Logs D:\ , User Data E:\ and TEMPDB F:\.
SQL is installed in the default C:\ location.

We need to move Logs and Data onto new RAID arrays but keep the same drive letters (so SQL can restart!!)

So the plan is:
Two new RAID arrays are going to be built and named X: and Q:
SQL Service is stopped via SQL Configuration Manager

All data on D: is copied to X:
All data on E: is copied to Q:

The original D: will be changed to S:
The original E: will be changed to T:

X: will be changed to D:
Q: will be changed to E:

Restart SQL from SQL Configuration Manager
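Since SQL Server remembers full file paths in the system catalog, the plan above only works if every stored path still points at the drive that now holds the copied files. The steps can be sketched as a small simulation; the file paths and letters below are hypothetical examples following the plan, not paths taken from any real server.

```python
# Sketch of the relettering plan: SQL Server stores full paths, so after
# the swap each stored path must resolve to the drive that now holds the
# copied files. Paths below are hypothetical examples only.

def reletter(plan, drive):
    """Apply a list of (old, new) drive-letter renames to a single letter."""
    for old, new in plan:
        if drive == old:
            return new
    return drive

plan_old = [("D:", "S:"), ("E:", "T:")]   # original drives moved aside
plan_new = [("X:", "D:"), ("Q:", "E:")]   # new arrays take the old letters

# Data copied from D: -> X: and E: -> Q: before the renames.
copied_to = {"D:": "X:", "E:": "Q:"}

stored_paths = ["D:\\MSSQL\\Logs\\mydb_log.ldf", "E:\\MSSQL\\Data\\mydb.mdf"]

for path in stored_paths:
    letter = path[:2]
    holds_files = copied_to[letter]          # new array that has the copy
    final_letter = reletter(plan_new, holds_files)
    assert final_letter == letter, path      # stored path still resolves
print("all stored paths resolve to the copied data")
```

If the renames are done in the order shown (originals out of the way first, then the new arrays in), SQL should start and find every file exactly where the catalog says it is.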

We will try to do a test run if we can, but I thought I would see if anybody sees any issues?

I can't see anything that needs changing as long as the right files are copied to the right drives and the letters are changed before SQL is restarted.

The only thing I can think of is if SQL takes some ID from the underlying disks/volumes, recognises that the new D:\ is different, and spits the dummy!

thanks



Post #1479743
Posted Thursday, August 1, 2013 7:18 AM
Valued Member


Group: General Forum Members
Last Login: Monday, June 16, 2014 8:52 AM
Points: 51, Visits: 398
Don't forget to get the security settings right for the folders where the data/t-log files reside.
A plain copy does not carry the NTFS security settings across, so have a look at the security settings on the source folders and make sure the destination folders are set up the same way.
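One way to avoid the ACL problem is to do the copy with robocopy, whose /SEC flag copies file security along with the data (and /COPYALL also carries owner and auditing info). A minimal sketch that builds such a command, assuming the D: to X: copy from the plan above:

```python
# Building a copy command that carries NTFS ACLs across.
# robocopy's /MIR mirrors the tree; /COPYALL copies data, attributes,
# timestamps, security, owner and auditing info. /R and /W limit retries.

def build_copy_cmd(src, dst):
    """Return a robocopy command mirroring src to dst with security intact."""
    return ["robocopy", src, dst, "/MIR", "/COPYALL", "/R:2", "/W:5"]

cmd = build_copy_cmd("D:\\", "X:\\")
print(" ".join(cmd))
```

On the Windows host this would be run (for example via `subprocess.run(cmd)`) once the SQL service is stopped, so no files are in use during the copy.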
Post #1479935
Posted Thursday, August 1, 2013 7:36 AM
SSC Eights!


Group: General Forum Members
Last Login: Yesterday @ 7:19 AM
Points: 811, Visits: 2,416
Hi,
I've done it this way several times before and have not had any problems yet.
Post #1479946
Posted Monday, August 5, 2013 6:19 AM
SSCrazy


Group: General Forum Members
Last Login: Wednesday, July 16, 2014 2:50 AM
Points: 2,854, Visits: 3,174
This should work. We have had to do the same process to increase the size of some of our AWS EBS disk sets.

Also, for AWS we have decided it is easiest to manage our EBS volumes if we do not stripe a Windows volume over multiple EBS disks, but instead have a 1 to 1 mapping between Windows volumes and EBS disks. Where we need more IO than EBS provides by default, we use Provisioned IOPS.

This approach has the advantage that if we need to increase volume size we can use an EBS Snapshot to copy the data and folder structure to a new EBS volume, which is a lot easier than rebuilding a Windows stripe set. Also, if we over-provision or under-provision the IOPS it is a lot easier to change when doing things this way.
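The snapshot-based grow described above can be expressed as a short sequence of AWS CLI calls. This is only an illustrative sketch: the volume/snapshot IDs, size, and availability zone are hypothetical placeholders, and the detach/attach and in-Windows partition extension steps are noted but not automated here.

```python
# Illustrative sequence for growing a 1:1 EBS volume via a snapshot,
# expressed as aws CLI command strings. IDs and sizes are placeholders.

def grow_volume_steps(volume_id, snapshot_id, new_size_gb, az):
    return [
        f"aws ec2 create-snapshot --volume-id {volume_id}",
        f"aws ec2 create-volume --snapshot-id {snapshot_id} "
        f"--size {new_size_gb} --availability-zone {az}",
        # then: detach the old volume, attach the new one under the same
        # device name, and extend the partition inside Windows.
    ]

for step in grow_volume_steps("vol-1234", "snap-5678", 500, "ap-southeast-2"):
    print(step)
```

Because the new volume is created from the snapshot, the folder structure and data arrive intact, which is the simplification the 1 to 1 mapping buys you.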

The only downside so far is the 1TB limit on volume size, which has meant in a few cases we had to relocate the data files of a large database to a new drive letter. However, the simplified management from the 1 to 1 mapping made the effort of relocating the database worthwhile.


Original author: SQL Server FineBuild 1-click install and best practice configuration of SQL Server 2014, 2012, 2008 R2, 2008 and 2005. 29 May 2014: now over 29,000 downloads.
Disclaimer: All information provided is a personal opinion that may not match reality.
Concept: "Pizza Apartheid" - the discrimination that separates those who earn enough in one day to buy a pizza if they want one, from those who can not.
Post #1480885
Posted Wednesday, August 7, 2013 9:16 PM
SSC Journeyman


Group: General Forum Members
Last Login: Yesterday @ 4:02 PM
Points: 94, Visits: 483
The change was done as described and worked without any issues.

EdVassie -- as you also run some DBs in EC2, hopefully you will not mind if I ask you a question?

How do you back up data in EC2?
One of our servers creates about 130GB a day in backups.

We plan to back up locally, then move the backups to S3 (keeping n days on an attached drive and x days on S3, with n and x being whatever the business decides), then later on to Glacier (Glacier is not yet available in the Sydney region).
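The n-days-local / x-days-on-S3 retention plan can be sketched as a simple date cutoff. The values of n and x below are example placeholders only (the post leaves them to the business to decide), and backups are represented just by their dates; the actual copy and delete steps are omitted.

```python
from datetime import date, timedelta

# Sketch of the retention plan: keep the last n days of backups on the
# attached drive and the last x days on S3. n and x are example values.

def to_keep(backup_dates, today, days):
    """Return the backups young enough to keep under a days-based policy."""
    cutoff = today - timedelta(days=days)
    return [d for d in backup_dates if d > cutoff]

today = date(2013, 8, 7)
backups = [today - timedelta(days=i) for i in range(30)]

local = to_keep(backups, today, 7)    # n = 7 days locally (placeholder)
on_s3 = to_keep(backups, today, 21)   # x = 21 days on S3 (placeholder)
assert len(local) == 7 and len(on_s3) == 21
print(f"keep {len(local)} local, {len(on_s3)} on S3")
```

Anything older than x days would then be a candidate for the Glacier tier, once it becomes available in the region.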

And this does not even take into account the Log Backups.
We do full backups and I don't really want to go to differentials unless I have to.

The issue is the time to move the files, zipping them if required, the resource overhead of moving and zipping, and the cost of the space to store n and x days of backups.

So do you normally use native MSSQL backups, or do you use a 3rd-party tool to back up and compress, like CloudBerry Backup for SQL Server for example?
Do you compress?

Of course your answer may be relative to the database sizes you have in Amazon but it would be interesting to know.

thanks



Post #1482118
Posted Monday, August 12, 2013 2:39 AM
SSCrazy


Group: General Forum Members
Last Login: Wednesday, July 16, 2014 2:50 AM
Points: 2,854, Visits: 3,174
We use Amanda Enterprise to back up directly to S3 (no intermediate staging).

This takes our standard SQL backups and all the non-DB files that need backup and puts them all on to S3.

We were one of the early customers needing to back up large volumes to S3 in a Windows environment, and it took a while to get the throughput we needed, but once that was sorted out Amanda has been stable and reliable (and MUCH cheaper than the usual suspects for enterprise backup). It should easily cope with the data volumes you mention; ours are larger than that.

Be careful about using Glacier. The cost algorithm for data retrieval IMHO does not match the Amazon promise of 'pay for what you use', and can give very high costs. If you need to restore a 2TB file and ask for it in 50GB chunks over 2 days you can get the restore done for free, but if you ask for the whole 2TB in one go it can cost thousands. We are waiting for a more friendly restore cost algorithm before using Glacier again.
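The chunked-vs-bulk cost difference comes from Glacier's original retrieval pricing being driven by the *peak* hourly retrieval rate for the month rather than the total bytes restored. The sketch below illustrates that effect only; the dollar rate is a made-up figure, not Amazon's actual price, and the real formula also included a free-tier allowance that is omitted here.

```python
# Why spreading a Glacier restore over time was cheaper: the charge was
# driven by the peak hourly retrieval rate, billed as if sustained for
# the whole month. RATE_PER_GB_HOUR is a hypothetical illustrative rate.

RATE_PER_GB_HOUR = 0.01   # made-up $/GB of peak hourly rate
HOURS_IN_MONTH = 720

def retrieval_cost(total_gb, hours_spread):
    """Approximate peak-rate cost of restoring total_gb over hours_spread."""
    peak_hourly_gb = total_gb / hours_spread
    return peak_hourly_gb * RATE_PER_GB_HOUR * HOURS_IN_MONTH

bulk = retrieval_cost(2048, 4)      # 2 TB pulled back in about 4 hours
chunked = retrieval_cost(2048, 48)  # the same 2 TB spread over 2 days
assert chunked < bulk
print(f"bulk ${bulk:,.0f} vs chunked ${chunked:,.0f}")
```

The shape of the curve is the point: halving the peak rate halves the cost, which is why 50GB chunks over 2 days could come out near-free while a single 2TB pull cost thousands.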



Post #1483183