A Look at Database Mirroring

  • Your point sounds valid enough. We're talking about different situations, so I wouldn't say clustering is bad; it's just a different approach for a different purpose.

    Just a few more things to understand on your disk mirroring. If I understand correctly, mirroring at the disk level merely brings the DBs/logs to a different SAN location. Say your primary location has an indefinite power outage (or assume it's one day): how would you make sure SQL Server can be brought up in, say, the next hour or so? Do you have another SQL box at your DR location ready and connected to the replicated SAN disks? Do you re-attach the DBs? What about the SAN zoning to the disks? Do you already have fibre laid to the SAN at the DR site?

    From what I can see, you can replicate an active DB across to another SAN, but you can't have any SQL Server connect to that copy of the DB. Disk mirroring basically just replicates contents across.

    I appreciate you sharing your experience; it helps me think through different approaches. I'm very interested to know how you would bring up your DBs in the layout you've mentioned.

    Cheers,

    Simon

    Simon Liew
    Microsoft Certified Master: SQL Server 2008

  • There are many ways to write the connection string, but below is one example. In this example, 'ServerA' is the principal, 'ServerB' is the mirror, and AdventureWorks is the database, using Windows Authentication to connect to the SQL Servers:

    "Data Source=ServerA;Failover Partner=ServerB;Initial Catalog=AdventureWorks;Integrated Security=True;"

    Below are some more helpful links that might answer most of your questions/concerns.

    http://www.connectionstrings.com/?carrier=sqlserver2005

    http://www.microsoft.com/technet/prodtechnol/sql/2005/dbmirror.mspx

    http://msdn2.microsoft.com/en-us/library/ex6y04yf(vs.80).aspx (What's new in ADO.NET 2.0)

    http://msdn2.microsoft.com/en-us/library/5h52hef8(VS.80).aspx  (Connection String Info)

    http://msdn2.microsoft.com/en-us/library/system.data.sqlclient.sqlconnection.connectionstring(VS.80).aspx (SqlConnection.ConnectionString Property)

  • I attempted to implement DM on a 3-box workgroup using the GUI wizard, but failed, apparently because the wizard's logic does not work for workgroups. So next I tried Michael Merriil's excellent script for non-domain DM. Everything worked fine (endpoints, certificates, etc.) until it came to actually setting up the mirroring. Then it failed again.

    I am wondering if it has to do with the presence of named instances. For example, the primary is called jaguar\sql05dev. The primary box also has another instance, jaguar\sqlexpress. The other boxes also have named instances, although only one SQL 2k5 instance exists on each. Anyway, I am suspicious of

    alter database aspnetdb
        set partner = 'TCP://192.168.1.4:5022'

    as there is no named instance specification.

    How do I specify a named instance with the IP address notation?

    Thanks,

    Jeff


  • That's correct: at the DR site there is an identical server where all volumes are replicated at the disk level from one SAN to another SAN over two OC3 pipes using SRDF/A. The DR server does not have the disks mounted; they remain unmounted, and a series of EMC commands is run to mount the volumes. The DR SQL Server is configured as follows:

    One-time setup -- use a temporary volume to install the SQL Server service using the same instance name and path as the primary site. Set the SQL Server service to manual at the DR site, remove the temporary volume, and set up SRDF/A to replicate all volumes except the OS; this includes the system databases (master...). The replicated volumes remain unmounted.

    In the event of a disaster, or during the several annual tests, run the EMC scripts to mount the volumes, start the SQL Server service, and swing the DNS entry. When SQL Server comes online it is a mirror image of the primary-site SQL Server. The actual time it takes to mount the volumes is almost instantaneous. The disk replication is within one minute of the primary site and is set up to be consistent across a set of disks. The particular setup we are running is nothing special, in that you can have EMC design an identical solution.

    Here's one more cool thing about using SAN-based replication: since it's a SAN-based solution, we use the same technique for all of our servers, including Unix, Windows, web, app and, of course, SQL Server. On the web and app servers we maintain dark servers and replicate the boot partition, i.e. the C drive. Because of the MS cluster setup at the primary site, and the lame design of storing IP addresses as part of the cluster configuration, we cannot replicate the boot partitions of the SQL Servers. We have replicated boot partitions for non-clustered SQL Servers in the past; however, I no longer have any non-clustered SQL Servers.

  • Hi all

    Has anyone used it to complement clustering? Is it possible?

    We run a number of SQL clusters at prod data centres; they are SAN-connected and also use SAN mirroring to the DR data centre. On top of this we also log-ship databases via a Tivoli storage agent to the DR server(s). Has anyone used or considered database mirroring to complement/replace log shipping within a clustered scenario in prod?

    This scenario is typical in the Oracle world, where the prod site runs RAC and uses Data Guard (which is where I see SQL Server's database mirroring competing as a technology).

    Thoughts/comments?

    Cheers

    Ck


    Chris Kempster
    www.chriskempster.com
    Author of "SQL Server Backup, Recovery & Troubleshooting"
    Author of "SQL Server 2k for the Oracle DBA"

  • Just wanted to give a big THANK YOU to Michael for assembling the non-domain member setup script - worked great!

    Re: the clustering/DM debate

    Depends on what you're doing - for me, DM was a better solution than clustering.

    Production site is 2 web, 2 db (+ witness)

    IT Team is 1 person (me)

    Application is brand new, so ADO/Connection strings are not a problem

    Low database count (< 5)

    Small database size (< 1 GB)

    All servers are on VMware - a SQL cluster was more headache, not to mention the licensing and hardware burden

    - Virtual Network removes many failure/redundancy issues

    - Hardware Failures handled by VMotion to an alternate ESX Server

    Greater peace of mind has been achieved! Windows OS failures are easily covered by this process - except a bad Windows update, of course.

    The only thing missing is accessing the mirrored data for reporting - but that can be worked around by replicating the mirrored database, which is next on my list.
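    (For what it's worth, a database snapshot created on the mirror is the documented way to read mirrored data in SQL Server 2005 Enterprise Edition. A minimal sketch, with hypothetical names - the logical file name must match the source database's:)

    -- Run on the mirror server; the snapshot is readable even though
    -- the mirror database itself is not.
    CREATE DATABASE AdventureWorks_Report
    ON (NAME = AdventureWorks_Data,
        FILENAME = 'D:\Snapshots\AdventureWorks_Report.ss')
    AS SNAPSHOT OF AdventureWorks;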

    Mike

  • Jeff -

    There is no way to specify a named instance with SET PARTNER = xxx, but you can use an alternate TCP port.

    Each SQL Server instance should use a different port for database mirroring.

    serverA 192.168.1.4:5022

    serverA\instance1 192.168.1.4:5023

    serverA\instance2 192.168.1.4:5024
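    A minimal sketch of that setup, assuming port 5023 for the named instance and the aspnetdb database from the question:

    -- On the named instance, give its mirroring endpoint a distinct port:
    CREATE ENDPOINT Mirroring
        STATE = STARTED
        AS TCP (LISTENER_PORT = 5023)
        FOR DATABASE_MIRRORING (ROLE = PARTNER);

    -- On the partner, reference the instance by IP and endpoint port:
    ALTER DATABASE aspnetdb
        SET PARTNER = 'TCP://192.168.1.4:5023';

    The port, not the instance name, is what routes mirroring traffic to the right instance at a given IP.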

    Hope that helps!

    Mike

  • Just wondering - has anyone actually used high-safety mode on a production database? Under what server architecture, and is there a serious or noticeable performance difference? Any stats on this?

    I'd prefer to use high-performance mode generally, but 'clever' old Microsoft have made this an Enterprise feature, and that's not within the budget for our server population.
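    For reference, the mode is just a per-database setting on the principal (AdventureWorks is a stand-in name here):

    -- High-safety (synchronous); available in Standard and Enterprise:
    ALTER DATABASE AdventureWorks SET PARTNER SAFETY FULL;

    -- High-performance (asynchronous); Enterprise Edition only:
    ALTER DATABASE AdventureWorks SET PARTNER SAFETY OFF;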

    I note that the TechNet guide says:

    "Despite the close coordination between principal and mirror when safety is FULL, database mirroring is not a distributed transaction and does not use a two-phase commit."

    ... but really, isn't that just semantics? The principal still waits for the mirror's confirmation before the commit completes ...

    As for the debate on DM vs. clustering... I think it's missing the point. Mirroring is a cheap DR solution. It's an intermediate step before the high-end hardware replication kit. It's a better DR solution than replication, because it is a full mirror... and yeah, it is just souped-up log shipping. The key advantage is that it's so simple to work with.

    Finally, with respect to the login/job/maintenance plan transfer problem... how about implementing a DDL trigger to fire off the same code on the mirror server?
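    A minimal sketch of that idea for logins, assuming a hypothetical audit table in master (the replay on the mirror server, via a job or linked server, is left out):

    -- Server-level DDL trigger: capture login DDL so it can be replayed
    -- on the mirror server later.
    CREATE TRIGGER trg_CaptureLoginDDL
    ON ALL SERVER
    FOR CREATE_LOGIN, ALTER_LOGIN, DROP_LOGIN
    AS
    BEGIN
        DECLARE @ddl NVARCHAR(MAX);
        SET @ddl = EVENTDATA().value(
            '(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)');
        INSERT INTO master.dbo.LoginDDLAudit (CapturedAt, DDLText)
        VALUES (GETDATE(), @ddl);
    END;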

  • Read this DM best practice article from Technet: http://www.microsoft.com/technet/prodtechnol/sql/2005/technologies/dbm_best_pract.mspx

  • We have SRDF/S for our SQL Server 2000 and 2005 DR; both work fine. Due to a distance constraint, we will have to look into SRDF/A. According to the EMC documentation on SRDF/A, we have to be very careful with the initial disk assignment, otherwise roll failures will occur. Would you please elaborate more on how you design the disk allocation for SQL Server log and data? Thank you.

  • cmille19 (7/20/2007)


    That's correct: at the DR site there is an identical server where all volumes are replicated at the disk level from one SAN to another SAN over two OC3 pipes using SRDF/A. The DR server does not have the disks mounted; they remain unmounted, and a series of EMC commands is run to mount the volumes. The DR SQL Server is configured as follows:

    One-time setup -- use a temporary volume to install the SQL Server service using the same instance name and path as the primary site. Set the SQL Server service to manual at the DR site, remove the temporary volume, and set up SRDF/A to replicate all volumes except the OS; this includes the system databases (master...). The replicated volumes remain unmounted. In the event of a disaster, or during the several annual tests, run the EMC scripts to mount the volumes, start the SQL Server service, and swing the DNS entry. When SQL Server comes online it is a mirror image of the primary-site SQL Server. The actual time it takes to mount the volumes is almost instantaneous. The disk replication is within one minute of the primary site and is set up to be consistent across a set of disks. The particular setup we are running is nothing special, in that you can have EMC design an identical solution. Here's one more cool thing about using SAN-based replication: since it's a SAN-based solution, we use the same technique for all of our servers, including Unix, Windows, web, app and, of course, SQL Server. On the web and app servers we maintain dark servers and replicate the boot partition, i.e. the C drive. Because of the MS cluster setup at the primary site, and the lame design of storing IP addresses as part of the cluster configuration, we cannot replicate the boot partitions of the SQL Servers. We have replicated boot partitions for non-clustered SQL Servers in the past; however, I no longer have any non-clustered SQL Servers.

    cmille19, hope you see this! I will be setting up SRDF for our DR to replace log shipping (non-clustered servers) and am looking for the best design to achieve the fastest failover and failback. So your setup above sounds good to me, as I'd like, if possible, to replicate the system databases as well.

    A few questions: if you installed SQL at the DR site with the same instance name, the server name must have been the same, so how do you get round having two servers of the same name active? Is the DR server renamed after the initial install and again at failover time?

    How do you apply patches to SQL on the DR server - by replicating the C drive as well, or by splitting SRDF whilst you do it? Are you booting from SAN?

    cheers

    george


  • george sibbald (5/3/2009)



    cmille19, hope you see this! I will be setting up SRDF for our DR to replace log shipping (non-clustered servers) and am looking for the best design to achieve the fastest failover and failback. So your setup above sounds good to me, as I'd like, if possible, to replicate the system databases as well.

    A few questions: if you installed SQL at the DR site with the same instance name, the server name must have been the same, so how do you get round having two servers of the same name active? Is the DR server renamed after the initial install and again at failover time?

    How do you apply patches to SQL on the DR server - by replicating the C drive as well, or by splitting SRDF whilst you do it? Are you booting from SAN?

    cheers

    george

    I SRDF/A the system databases also; that way I don't have to attach databases, just mount the EMC LUNs and start SQL Server. To SAN-replicate the system databases you'll need to install the DR SQL instance with the same path as the primary server.

    As for the server name: it is different on the DR server. I use strictly named instances, for example Z002\SQL1, so just the instance name is the same. To avoid changing connection strings, the developers are instructed to use fully qualified DNS names, for example Z002.acme.com. If the server is a named instance, the connection string is the hostname followed by a comma and the port number: Z002.acme.com,2507. In a DR scenario the DNS servers are sent an updated hostname-to-IP list, which points the original primary-site DNS name at the DR IP.

    Before adopting clustering as an HA solution for my primary site, I replicated the boot partition. All of our web and app servers use boot partition replication. Although I no longer replicate the boot partition on SQL Servers, I'm still using boot from SAN.

    For patching, I have a second set of LUNs mounted on the DR server. These LUNs are used for the QA environment. The system databases are on the same path as on the DR and primary servers. A secondary benefit of this setup is that patches can be applied to both the OS and SQL to keep in sync with the primary server. In a DR scenario the QA LUNs are unmounted and the DR LUNs mounted.

  • cmille19, thanks for the prompt reply, appreciate it.

    Sorry to bang on, but I need to clarify some points. You are using SQL clusters, yes? Mine are standalone servers, so I don't have a virtual instance name. I am struggling to understand why you don't need to rename the server, or at least update sysservers in master and entries in msdb, given the server name has changed (the system DBs have, after all, in effect come from a different server).

    Updating msdb to run on a different server is easy in SQL 2000 but a nightmare in 2005.

    Are you saying you have to use named instances?

    So your DR site is permanently used for QA, and if you fail over you do without a QA environment?

    As you are booting from SAN anyway, do you not see attaching all the secondary LUNs to a piece of identical kit waiting to receive them as a method of failing over?


  • george sibbald (5/4/2009)


    cmille19, thanks for the prompt reply, appreciate it.

    Sorry to bang on, but I need to clarify some points. You are using SQL clusters, yes? Mine are standalone servers, so I don't have a virtual instance name. I am struggling to understand why you don't need to rename the server, or at least update sysservers in master and entries in msdb, given the server name has changed (the system DBs have, after all, in effect come from a different server).

    Updating msdb to run on a different server is easy in SQL 2000 but a nightmare in 2005.

    Are you saying you have to use named instances?

    So your DR site is permanently used for QA, and if you fail over you do without a QA environment?

    As you are booting from SAN anyway, do you not see attaching all the secondary LUNs to a piece of identical kit waiting to receive them as a method of failing over?

    Yes, I'm using clusters. The reality is that what is stored in sysservers for the local server is not important. You can run sp_dropserver and sp_addserver and have a server name other than the physical or virtual server name, and every application will continue to connect and work fine. Keep in mind there are two server names in Windows: the NETBIOS/WINS name and the host/DNS name. In a DR scenario my AD administrators will update the DNS entry to point to the DR IP instead of the primary server IP. So, in effect, I am renaming the hostname/DNS name. As long as all the connection strings use the DNS name, all applications will continue to connect without needing to change connection strings.
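    A minimal sketch of that rename, with hypothetical names (a service restart is needed before @@SERVERNAME reflects the change):

    EXEC sp_dropserver 'OLDNAME';
    EXEC sp_addserver 'NEWNAME', 'local';
    -- After restarting the SQL Server service:
    SELECT @@SERVERNAME;   -- returns NEWNAME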

    Yes, I'm using named instances; however, the same concept applies to default instances. My only point about named instances is ensuring the instance name is the same on both the primary and DR servers. For example, the primary-site SQL instance is Z002\SQL1 and the DR instance is Z002DR\SQL1; the SQL1 named-instance portion is the same on both servers.

    Yes, my DR servers are dual-purposed as QA. We run about 4 DR failover tests per year, and on those weekends the QA environment is unavailable. In the event of an actual DR failover there would be no QA. For that matter, since my development environment is at my primary site, there would be no development either.

    Yes, I'm booting from SAN, and although I no longer replicate the SQL Server boot partitions I continue to do so for web and app servers. Keep in mind that if you replicate the boot partition, the DR server must be dark, and you must bring the primary server down or isolate the DR network to avoid duplicate server names, as the DR server will come online with the same name. If I were not using clustering I would replicate the boot partitions. Also, if you're an HP shop you can use RILOE to power on servers remotely.

  • In a DR scenario my AD administrators will update the DNS entry to point to the DR IP instead of the primary server IP. So, in effect, I am renaming the hostname/DNS name. As long as all the connection strings use the DNS name, all applications will continue to connect without needing to change connection strings.

    Are you running SQL 2000 or 2005? I would have expected issues with msdb in 2005 in particular, as its maintenance plans would point back to the primary server and SSIS packages could potentially have incorrect connections.

    Yes, I'm booting from SAN, and although I no longer replicate the SQL Server boot partitions I continue to do so for web and app servers. Keep in mind that if you replicate the boot partition, the DR server must be dark, and you must bring the primary server down or isolate the DR network to avoid duplicate server names, as the DR server will come online with the same name. If I were not using clustering I would replicate the boot partitions. Also, if you're an HP shop you can use RILOE to power on servers remotely.

    So both primary and DR/QA boot from SAN, but their boot partitions are independent? Any problems with boot from SAN? Interesting that you say you would go back to replicating the boot partition - would this not lose you your QA environment and the ability to patch the DR server without affecting the primary and SRDF, which sounds to me like a good ability to have?

    regards

    george

