Migrate cluster to new SAN

  • I need to migrate my SQL Server 2000 SP4 cluster to a new SAN. I will be able to connect to the old and new SANs at the same time. I have four drive letters currently mapped to the existing SAN: E, F, G, and Q (my quorum drive).

    My plan is as follows:

    1) Take the SQL Server clustered services offline

    2) Present new storage to the cluster and add new drives H, I, J and R

    3) Change the quorum drive from Q to R using Cluster Administrator: right-click the cluster name, go to Quorum, and change it to R

    4) Copy the data over to the new drives and migrate my databases to the new drive letters (http://support.microsoft.com/kb/224071); a T-SQL sketch of the detach/attach approach follows this list

    5) Remove the old drive letters using Cluster Administrator

    6) Bring my services online
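    For step 4, the detach/attach approach from KB 224071 would look something like this for a user database (the database name and folder paths below are placeholders I'd adjust; system databases need the separate startup-parameter steps described in the KB):

    -- Detach/attach sketch (SQL Server 2000). 'SalesDB' and the file paths
    -- are placeholders; SQL Server must be running to execute this.
    USE master
    GO

    EXEC sp_detach_db @dbname = N'SalesDB'
    GO

    -- Copy SalesDB.mdf and SalesDB_log.ldf from the old drive (E:) to the
    -- new drive (H:) at the OS level, then re-attach from the new location.
    EXEC sp_attach_db
        @dbname    = N'SalesDB',
        @filename1 = N'H:\MSSQL\Data\SalesDB.mdf',
        @filename2 = N'H:\MSSQL\Data\SalesDB_log.ldf'
    GO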

    Does this seem reasonable? For step 4, should I instead rename drives H, I, and J to E, F, and G?

    Thanks to anyone who can offer suggestions.

  • In my previous experience with such things, I have the SAN folks handle this. They usually have tools to copy LUNs, even across vendors. I would head down this path first. Given that path:

    1. Bring the resource group offline

    2. Remove the drives from the group

    3. Unmap them

    4. Copy the LUNs

    5. Map the new LUNs

    6. Add them to the resource group

    7. Bring the resource group online

    Worst case scenario: remap the old ones and re-add them to the resource group.
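    After step 7, a quick sanity check from Query Analyzer to confirm every database came back clean (a minimal sketch; nothing specific to your setup assumed):

    -- List each database and its status (ONLINE, SUSPECT, etc.) once the
    -- resource group is back online (SQL Server 2000).
    SELECT name,
           DATABASEPROPERTYEX(name, 'Status') AS status
    FROM   master.dbo.sysdatabases
    ORDER  BY name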

    CEWII

  • The original drives were overprovisioned. The SAN tools can migrate the LUNs, but they would remain the same size, and we want the new drives to be smaller. Now I am wondering if it's worth it to save 100 GB.
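    To gauge how small the new LUNs could reasonably be, I can total the allocated file sizes per drive letter (a rough sketch; this counts space allocated to the files, not space used inside them):

    -- Rough sizing check (SQL Server 2000): allocated file size per drive,
    -- in MB. sysaltfiles reports size in 8 KB pages, so divide by 128.
    -- sp_spaceused per database shows free space inside the files.
    SELECT LEFT(filename, 2)               AS drive,
           SUM(CAST(size AS bigint)) / 128 AS allocated_mb
    FROM   master.dbo.sysaltfiles
    GROUP  BY LEFT(filename, 2)
    ORDER  BY drive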

  • Fair enough. If it is purely SQL data files, it shouldn't be hard to do and to get the permissions right.

    If anything, have them handle the quorum remap.

    CEWII

  • Chrissy321 (2/3/2011)


    The original drives were overprovisioned. The SAN tools can migrate the LUNs, but they would remain the same size, and we want the new drives to be smaller. Now I am wondering if it's worth it to save 100 GB.

    I would take this opportunity to make sure the LUNs are created correctly (e.g., correct file allocation unit size, partition alignment, etc.). I would also evaluate the current configuration to see if it is optimal for the system.

    On my clusters, I normally create 7 separate LUNs:

    SystemDB (contains system databases, error logs, etc.; very small, 5 GB)

    Data Files (contains all mdf/ndf files for user databases)

    Log Files (contains all ldf files for user databases)

    TempDB (dedicated LUN for tempdb database)

    Backups (database backups)

    MSDTC

    Quorum

    With this configuration, if I find that a single database is causing IO performance issues, I move it off to its own pair of drives. For my systems, this is not a problem because normally there is only a single high-impact database and all the other databases are minimal-usage or lookup databases.
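    As an illustration of the dedicated TempDB LUN above, repointing tempdb is just a metadata change that takes effect at the next service restart or failover (the drive letter and folder here are assumptions; tempdev and templog are the default logical file names):

    -- Move tempdb to its own LUN (SQL Server 2000). The new files are
    -- created on the new drive the next time the service starts; the old
    -- files can be deleted afterwards.
    ALTER DATABASE tempdb
        MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf')

    ALTER DATABASE tempdb
        MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf')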

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Update: we ended up using the Cluster Server Recovery Utility rather than the plan I outlined at the beginning of the thread. Much simpler.

    http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2BE7EBF0-A408-4232-9353-64AAFD65306D&displaylang=en

    http://blogs.technet.com/b/askcore/archive/2007/11/12/so-what-does-cluster-recovery-actually-recover-anyway.aspx

    The help file that ships with the utility provided the most detail on the steps that needed to be taken.

    The first option would still be to use your vendor's migration tools. We couldn't, because our target LUNs were smaller than the source LUNs, which led us to this utility.

    The utility won't copy data. We used NT Backup for that, to ensure the appropriate ACLs/permissions were carried over.
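    Once everything was back online, an optional sanity check is a consistency check across all databases (a minimal sketch; sp_MSforeachdb is an undocumented but long-standing helper procedure):

    -- Optional post-migration check: DBCC CHECKDB against every database.
    -- The '?' placeholder expands to each database name.
    EXEC sp_MSforeachdb 'DBCC CHECKDB (''?'') WITH NO_INFOMSGS'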

    Thanks all.

