SQLServerCentral is supported by Red Gate Software Ltd.

A Look at Database Mirroring
Posted Monday, July 30, 2007 4:12 PM
Just wanted to give a big THANK YOU to Michael for assembling the non-domain member setup script - worked great!

Re: the clustering/DM debate
Depends on what you're doing - for me, database mirroring (DM) was a better solution than clustering.

Production site is 2 web, 2 db (+ witness)
IT Team is 1 person (me)
Application is brand new, so ADO/Connection strings are not a problem
Low database count (< 5)
Small database size (< 1 GB)
All servers are on VMware - a SQL cluster was more of a headache, not to mention the licensing and hardware burden
- Virtual Network removes many failure/redundancy issues
- Hardware Failures handled by VMotion to an alternate ESX Server

Greater peace of mind has been achieved! Windows OS failures are easily covered by this process - except for a bad Windows update, of course.

The only thing missing is accessing the mirrored data for reporting - but that can be bypassed by replicating the mirrored database, which is next on my list.
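For reference, mirrored data can also be exposed for read-only reporting through a database snapshot created on the mirror (an Enterprise Edition feature). A minimal sketch - database name and file path here are illustrative, not from the posts above:

```sql
-- Hypothetical sketch: a snapshot on the mirror exposes a point-in-time,
-- read-only view of the mirrored database for reporting.
CREATE DATABASE MyDB_Report
ON (NAME = MyDB_Data, FILENAME = 'D:\Snapshots\MyDB_Report.ss')
AS SNAPSHOT OF MyDB;
```

The snapshot is static as of its creation time, so reporting setups typically drop and recreate it on a schedule.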

Mike



Post #386464
Posted Monday, July 30, 2007 4:20 PM
Jeff -

There is no way to specify a named instance in SET PARTNER = 'xxx', but you can specify an alternate TCP port.

Each SQL Server instance should use a different port for database mirroring.

serverA            192.168.1.4:5022
serverA\instance1  192.168.1.4:5023
serverA\instance2  192.168.1.4:5024
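As a sketch of the idea (instance names, hostnames, and port numbers are illustrative): each instance gets its own mirroring endpoint on its own port, and the partner is then addressed by host and port rather than by instance name.

```sql
-- Hypothetical sketch. On serverA\instance1, create that instance's
-- mirroring endpoint on its dedicated port:
CREATE ENDPOINT Mirroring
    STATE = STARTED
    AS TCP (LISTENER_PORT = 5023)
    FOR DATABASE_MIRRORING (ROLE = PARTNER);
GO
-- On the partner instance, point at serverA\instance1 by host:port --
-- note the instance name never appears in the partner address:
ALTER DATABASE MyDB
    SET PARTNER = 'TCP://serverA.acme.com:5023';
```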

Hope that helps!

Mike



Post #386466
Posted Monday, April 7, 2008 7:17 AM
Just wondering - has anyone actually used high-safety mode on a production database? Under what server architecture, and is there a serious or noticeable performance difference? Any stats on this?

I'd prefer to use high-performance mode generally, but 'clever' old Microsoft have made this an Enterprise feature, and that's not within the budget for our server population.

I note that the TechNet guide says:

"Despite the close coordination between principal and mirror when safety is FULL, database mirroring is not a distributed transaction and does not use a two-phase commit."

... but really, isn't that just semantics? The principal still waits for confirmation of the mirror's commit...

As for the debate on DM vs. clustering... I think it's missing the point. Mirroring is a cheap DR solution. It's an intermediate step before the high-end hardware replication kit. It's a better DR solution than replication, because it is a full mirror... and yeah, it is just souped-up log shipping. The key advantage is that it's so simple to work with.

Finally, with respect to the login / job / maintenance plan transfer problem... how about implementing a DDL trigger to fire off the same code on the mirror server?
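One possible shape for that idea - a minimal sketch, not a tested solution; the table, trigger, and event names are illustrative. A server-level DDL trigger captures the login DDL via EVENTDATA(), and a separate job would replay the captured commands on the mirror:

```sql
-- Hypothetical sketch: capture server-level login DDL for replay on the mirror.
CREATE TABLE master.dbo.DDLAudit (
    EventTime datetime NOT NULL DEFAULT GETDATE(),
    EventData xml NOT NULL
);
GO
CREATE TRIGGER trg_CaptureLoginDDL
ON ALL SERVER
FOR CREATE_LOGIN, ALTER_LOGIN, DROP_LOGIN
AS
BEGIN
    -- EVENTDATA() includes the full T-SQL command text of the DDL statement
    INSERT INTO master.dbo.DDLAudit (EventData)
    VALUES (EVENTDATA());
END;
```

A scheduled job (or a linked-server call) could then pull the command text out of the XML, e.g. `EventData.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')`, and execute it against the mirror server.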

Post #480774
Posted Friday, October 31, 2008 4:33 AM
Read this DM best-practice article from TechNet: http://www.microsoft.com/technet/prodtechnol/sql/2005/technologies/dbm_best_pract.mspx
Post #594855
Posted Tuesday, December 30, 2008 2:58 PM
We have SRDF/S for our SQL Server 2000 and 2005 DR; both work fine. Due to a distance constraint, we will have to look into SRDF/A. According to the EMC documentation on SRDF/A, we have to be very careful with the initial disk assignment, otherwise roll failures will occur. Would you please elaborate on how you designed the disk allocation for the SQL Server log and data? Thank you.
Post #627686
Posted Sunday, May 3, 2009 5:03 PM
cmille19 (7/20/2007)


That's correct - at the DR site there is an identical server where all volumes are replicated at the disk level from one SAN to another SAN over two OC3 pipes using SRDF/A. The DR server does not have the disks mounted; they remain unmounted, and a series of EMC commands is run to mount the volumes. The DR SQL Server is configured as follows:

One-time setup: use a temporary volume to install the SQL Server service using the same instance name and path as the primary site. Set the SQL Server service to manual at the DR site, remove the temporary volume, and set up SRDF/A to replicate all volumes except the OS - this includes the system databases (master...). The replicated volumes remain unmounted. In the event of a disaster, or during the several annual tests, run EMC scripts to mount the volumes, start the SQL Server service, and swing the DNS entry. When SQL Server comes online, it's a mirror image of the primary-site SQL Server. The actual time it takes to mount the volumes is almost instantaneous. The disk replication is within one minute of the primary site and is set up to be consistent across a set of disks. The particular setup we are running is nothing special, in that you can have EMC design an identical solution. Here's one more cool thing about using SAN-based replication: since it's a SAN-based solution, we use the same technique for all of our servers, including Unix, Windows, web, app, and of course SQL Server. On the web and app servers we maintain dark servers and replicate the boot partition, i.e. the C drive. Because of the MS cluster setup at the primary site, and the lame design of storing IP addresses as part of the cluster configuration, we cannot replicate the boot partitions of the SQL Servers. We did replicate boot partitions for non-clustered SQL Servers in the past, but I no longer have any non-clustered SQL Servers.


cmille19, hope you see this! I will be setting up SRDF for our DR to replace log shipping (non-clustered servers), and am looking for the best design to achieve the fastest failover and failback. Your setup above sounds good to me, as I wish, if possible, to replicate the system databases as well.

A few questions: if you installed SQL on DR with the same instance name, the server name must have been the same, so how do you get round having two servers of the same name active? Is the DR server renamed after the initial install, and again at failover time?

How do you apply patches to SQL on the DR server - by replicating the C drive as well, or by splitting SRDF whilst you do it? Are you booting from SAN?

cheers

george



Post #709122
Posted Sunday, May 3, 2009 7:10 PM
george sibbald (5/3/2009)

cmille19, hope you see this! I will be setting up SRDF for our DR to replace log shipping (non-clustered servers), and am looking for the best design to achieve the fastest failover and failback. Your setup above sounds good to me, as I wish, if possible, to replicate the system databases as well.

A few questions: if you installed SQL on DR with the same instance name, the server name must have been the same, so how do you get round having two servers of the same name active? Is the DR server renamed after the initial install, and again at failover time?

How do you apply patches to SQL on the DR server - by replicating the C drive as well, or by splitting SRDF whilst you do it? Are you booting from SAN?

cheers

george


I SRDF/A the system databases also. That way I don't have to attach databases - I just mount the EMC LUNs and start SQL Server. To SAN-replicate the system databases, you'll need to perform the DR SQL installation with the same path as the primary server.

As for the server name: the server name is different on the DR server. I use strictly named instances, for example Z002\SQL1, so only the instance name is the same. To avoid changing connection strings, the developers are instructed to use fully qualified DNS names, for example Z002.acme.com. If the server is a named instance, the connection string is the hostname followed by a comma and the port number: Z002.acme.com,2507. In a DR scenario the DNS servers are sent an updated hostname-to-IP list, which points the original primary-site DNS name at the DR IP.

Before adopting clustering as an HA solution for my primary site, I replicated the boot partition. All of our web and app servers use boot-partition replication. Although I no longer replicate the boot partition on SQL Servers, I'm still using boot from SAN.

For patching, I have a second set of LUNs mounted on the DR server. These LUNs are used for the QA environment. The system databases use the same path as on the DR and primary servers. A secondary benefit of this setup is that patches can be applied to both the OS and SQL to keep in sync with the primary server. In a DR scenario the QA LUNs are unmounted and the DR LUNs mounted.



Post #709141
Posted Monday, May 4, 2009 5:49 AM
cmille19, thanks for the prompt reply, appreciate it.

Sorry to bang on, but I need to clarify some points. You are using SQL clusters, yes? Mine are standalone servers, so I don't have a virtual instance name. I am struggling to understand why you don't need to rename the server, or at least update sysservers in master and entries in msdb, since the server name has changed (the system DBs have, after all, in effect come from a different server).

Updating msdb to run on a different server is easy in SQL 2000 but a nightmare in 2005.

Are you saying you have to use named instances?

So your DR site is permanently used for QA, and if you fail over you do without a QA environment?

As you are booting from SAN anyway, do you not see attaching all the secondary LUNs to a piece of identical kit waiting to receive them as a method of failing over?



Post #709324
Posted Monday, May 4, 2009 7:00 PM
george sibbald (5/4/2009)
cmille19, thanks for the prompt reply, appreciate it.

Sorry to bang on, but I need to clarify some points. You are using SQL clusters, yes? Mine are standalone servers, so I don't have a virtual instance name. I am struggling to understand why you don't need to rename the server, or at least update sysservers in master and entries in msdb, since the server name has changed (the system DBs have, after all, in effect come from a different server).

Updating msdb to run on a different server is easy in SQL 2000 but a nightmare in 2005.

Are you saying you have to use named instances?

So your DR site is permanently used for QA, and if you fail over you do without a QA environment?

As you are booting from SAN anyway, do you not see attaching all the secondary LUNs to a piece of identical kit waiting to receive them as a method of failing over?


Yes, I'm using clusters. The reality is that what is stored in sysservers for the local server is not important. You can run sp_dropserver and sp_addserver and have a server name other than the physical or virtual server name, and every application will continue to connect and work fine. Keep in mind there are two server names in Windows: the NETBIOS/WINS name and the host/DNS name. In a DR scenario my AD administrators will update the DNS entry to point to the DR IP instead of the primary server IP. So, in effect, I am renaming the hostname/DNS name. As long as all the connection strings use the DNS name, all applications will continue to connect without needing to change connection strings.
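The sysservers reset mentioned above can be sketched as follows (server and instance names are illustrative; a restart of the SQL Server service is required before @@SERVERNAME reflects the change):

```sql
-- Hypothetical sketch: replace the local server name recorded in sysservers
-- after bringing the DR copy online.
EXEC sp_dropserver 'Z002\SQL1';
EXEC sp_addserver  'Z002DR\SQL1', 'local';
-- Restart the SQL Server service, then verify:
SELECT @@SERVERNAME;
```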

Yes, I'm using named instances; however, the same concept applies to default instances. My only point about named instances is ensuring the instance name is the same on both the primary and DR servers. For example, the primary-site SQL instance is Z002\SQL1 and the DR instance is Z002DR\SQL1: the SQL1 named-instance portion is the same on both servers.

Yes, my DR servers are dual-purposed as QA. We run about 4 DR failover tests per year, and on those weekends the QA environment is unavailable. In the event of an actual DR failover there would be no QA. For that matter, since my development environment is at my primary site, there would be no development either.

Yes, I'm booting from SAN, and although I no longer replicate the SQL Server boot partitions, I continue to do so for web and app servers. Keep in mind that if you replicate the boot partition, the DR server must be dark, and you must bring the primary server down or isolate the DR network to avoid duplicate server names, as the DR server will come online with the same name. If I were not using clustering, I would replicate the boot partitions. Also, if you're an HP shop, you can use RILOE to power on servers remotely.



Post #709841
Posted Tuesday, May 5, 2009 9:48 AM
In a DR scenario my AD administrators will update the DNS entry to point to the DR IP instead of the primary server IP. So, in effect, I am renaming the hostname/DNS name. As long as all the connection strings use the DNS name, all applications will continue to connect without needing to change connection strings.


Are you running SQL 2000 or 2005? I would have expected issues with msdb in 2005 in particular, as its maintenance plans would point back to the primary server, and SSIS packages could potentially have incorrect connections.

Yes, I'm booting from SAN, and although I no longer replicate the SQL Server boot partitions, I continue to do so for web and app servers. Keep in mind that if you replicate the boot partition, the DR server must be dark, and you must bring the primary server down or isolate the DR network to avoid duplicate server names, as the DR server will come online with the same name. If I were not using clustering, I would replicate the boot partitions. Also, if you're an HP shop, you can use RILOE to power on servers remotely.


So both the primary and DR/QA servers boot from SAN, but their boot partitions are independent? Any problems with boot from SAN? Interesting that you say you would go back to replicating the boot partition - would this not lose you your QA environment, and the ability to patch the DR server without affecting the primary and SRDF? That sounds to me like a good ability to have.

regards
george



Post #710401

