Almost everyone struggles with setting up disaster recovery (DR) plans and resources. A few companies take their DR seriously, but for most organizations, it's an afterthought. It's an insurance premium that's easy to skip when there are no pressing problems and your past experience with disasters is minimal. After all, it's rare that any of our data centers shuts down because of an earthquake, hurricane, fire, or other similar large-scale event.
However, most of us try to have some type of disaster recovery in place. We may have cold or warm systems available. Our companies have funded AlwaysOn Availability Groups, or more likely, mirroring and/or log shipping for critical systems, where data is moved to a remote location on a regular basis. We monitor these processes and try to keep them running, though I'm sure that if they break, many people don't give the repair top priority in their daily work.
A DR environment is like a backup: if you don't test it, you're never sure it's something you can actually use in a disaster. You may periodically test your failover, just as you test a restore, but do you ever really lean on the secondary system? This week, I wanted to ask you this question:
Do you fail over to your DR system and run your business from the system for a day (or week or longer)?
I know a few companies that consider secondary systems to be critical and will actually fail over, run the other system for a few months, and then fail back to retest the primary system. In this case, there really isn't a primary and a secondary system, but rather two systems that can both do the work, used alternately throughout the year.
This is actually moving closer to a cloud architecture model, where you don't place high importance on any particular system: you assume any system can fail, and you have redundant systems that can pick up the load. In a cloud environment you might have more than two, relying instead on dozens of systems, any of which could fail with only a small interruption in service.
I'd hope that SQL Server, the version of the platform I can install in my data center, would get close to this, allowing me to serve database services to clients, but seamlessly moving those services across instances, with clients unaware when physical machines have crashed because their services just moved to another host.
Today's podcast features music by Everyday Jones. No relation, but I stumbled onto them and really like the music. Support this great duo at www.everydayjones.com. You can also follow Steve Jones on Twitter to find links and database-related items and announcements.
“With SQL Monitor, we can be proactive in our optimization process, instead of waiting until a customer reports a problem.” – John Trumbul, Sr. Software Engineer. Optimize your servers with a free trial.
SQL Saturday is coming to Baton Rouge for a free day of SQL Server training and networking on August 3.
There is also a pre-conference session presented by Bill Pearson on Practical Self-Service BI with PowerPivot for Excel on August 2nd. More »
From the MSDN Windows Azure blog - We recently introduced the Azure CAT team's series of blog posts and technical articles describing the Cloud Service Fundamentals in Windows Azure code project posted on MSDN Code Gallery. The first component we are addressing in this series is telemetry, one of the first reusable components we built while working on Windows Azure customer projects of all sizes. More »
If you've designed your SQL code intelligently and implemented a sensible indexing strategy, there's a good chance your queries will "fly" when tested in isolation. In the real world, however, where multiple processes can access the same data at the same time, SQL Server often has to make one process wait, sacrificing concurrency and performance so that all can succeed without destroying data integrity. Transactions are at the heart of concurrency. I explain their ACID properties, the transaction isolation levels that dictate the acceptable behaviors when multiple transactions access the same data simultaneously, and SQL Server's optimistic and pessimistic models for mediating concurrent access. Pessimistic concurrency, SQL Server's default, uses locks to avoid concurrency problems. Get your copy from Amazon today.
Office/Administrative Assistant
- We are looking for a professional office assistant who is self-motivated, detail-oriented, energetic, and highly organized. Prior office/administrative assistant experience...
DB free space issue
- We have a vendor DB which always keeps around 3GB of free space.
It is SQL 2005 SP3 and DB Data...
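To see whether that free space is genuinely unused, one sketch (assuming you run it in the vendor database; the division by 128 converts 8KB pages to MB) compares allocated vs. used space per file:

```sql
-- Allocated vs. used space per database file
SELECT name,
       size / 128.0 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS used_mb,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128.0 AS free_mb
FROM sys.database_files;
```

If the vendor application pre-allocates that 3GB deliberately, shrinking it will likely just trigger regrowth.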
I am looking for a script that will give all heaps (user tables only) in a SQL Server 2008 database....
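One sketch of such a script relies on the fact that a heap is any table whose row in sys.indexes has index_id = 0:

```sql
-- List all user-table heaps (index_id = 0 means no clustered index)
SELECT s.name AS schema_name,
       t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.indexes AS i ON i.object_id = t.object_id
WHERE i.index_id = 0
  AND t.is_ms_shipped = 0;
```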
How to dynamically create my FILEGROUP during a restore?
- How can I dynamically create my FILEGROUP during a restore?
My FILELISTONLY output renders:

LogicalName       PhysicalName                    Type  FileGroupName
===========       ============                    ====  =============
MyDBData_Primary  G:\SQLData\Archive_Primary.mdf  D     PRIMARY
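Strictly speaking, a RESTORE recreates the filegroups that were in the backup; it cannot add new ones. What you can do dynamically is relocate the files FILELISTONLY reports, using WITH MOVE. A sketch (the backup path and target path here are hypothetical):

```sql
-- Relocate the file listed by FILELISTONLY to a new path during restore
RESTORE DATABASE MyDB
FROM DISK = N'G:\Backups\MyDB.bak'
WITH MOVE N'MyDBData_Primary' TO N'H:\SQLData\MyDB_Primary.mdf',
     REPLACE;
```

To build this dynamically, loop over the FILELISTONLY result set and generate one MOVE clause per logical file name.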
I have a table with some columns,
with some data.
So, can I add an IDENTITY property to the sid...
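Not directly: SQL Server's ALTER TABLE ... ALTER COLUMN cannot add the IDENTITY property to an existing column. One common workaround sketch (table and column names here are hypothetical, standing in for the poster's table and its sid column):

```sql
-- Add a new identity column, drop the old column, then rename
ALTER TABLE dbo.MyTable ADD sid_new INT IDENTITY(1, 1) NOT NULL;
ALTER TABLE dbo.MyTable DROP COLUMN sid;
EXEC sp_rename 'dbo.MyTable.sid_new', 'sid', 'COLUMN';
```

Note this assigns fresh identity values rather than preserving the old sid values; preserving them requires rebuilding the table and inserting with SET IDENTITY_INSERT ON.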
Error inserting data with T-SQL
- I am inserting data from one table to another using an UPDATE ... SET command and getting the following error.
SET XDDdepositor.WBeneName = Vendor.RemitName,
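A common cause of errors with this pattern is referencing a second table in SET without joining to it. A sketch of the T-SQL joined-update syntax, using the poster's table and column names but a hypothetical VendorID join column:

```sql
-- Cross-table UPDATE needs a FROM clause joining the two tables
UPDATE d
SET d.WBeneName = v.RemitName
FROM XDDdepositor AS d
JOIN Vendor AS v
    ON v.VendorID = d.VendorID;
```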
Prev and Next Row Without RowNumber
- Hello all,
Can anyone help me with this one?
here is the table sample.
CREATE TABLE [dbo].[Invoice_t](
[Cost_Center_code] [int] NOT NULL,
[Payment_code] [int] NOT NULL,
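One pre-2012 approach that avoids ROW_NUMBER is a pair of correlated subqueries against the ordering column. The table definition above is truncated, so this sketch assumes a hypothetical Invoice_date column supplies the row order:

```sql
-- Previous/next row values via correlated MAX/MIN subqueries
SELECT i.Cost_Center_code,
       i.Payment_code,
       i.Invoice_date,
       (SELECT MAX(p.Invoice_date) FROM dbo.Invoice_t AS p
         WHERE p.Invoice_date < i.Invoice_date) AS prev_invoice_date,
       (SELECT MIN(n.Invoice_date) FROM dbo.Invoice_t AS n
         WHERE n.Invoice_date > i.Invoice_date) AS next_invoice_date
FROM dbo.Invoice_t AS i;
```

On SQL Server 2012 and later, LAG() and LEAD() do this far more efficiently.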
This newsletter was sent to you because you signed up at SQLServerCentral.com.
Feel free to forward this to any colleagues that you think might be interested.
If you have received this email from a colleague, you can register to receive it here.