Setting up Dev, Test & Prod Servers

  • Hello,

    At present, we only have a production server with no development or test servers. I know this should be a criminal offence, but until now the SQL Server was not the most important server. It has since turned into a critical server, and because of this I have persuaded the manager to upgrade the system. We now have a Dev & Test server being built. However, there are still some concerns.

    Basically, I have been told there is to be 0% downtime. One solution is a completely fault tolerant server, which brings a nice price tag for the hardware and yearly support. One manager likes this idea; another doesn't, and wishes to go for the cheaper alternative of constantly replicating data to the test server, so that in case of failure we can switch servers.

    Personally, I am more in favour of the fault tolerant server. The VP has already agreed to spend the money on the better server, so why not go with it, is my point!

    Also, with regards to the Prod & Test servers: I'd like some feedback on how often your test servers are refreshed with data from the prod server, and how you do it. Log shipping is the method I am looking to implement.

    Thanks in advance for your feedback.

    Clive

  • The cheaper alternative that was proposed was to use Sunbelt Software's Double-Take software.

    If anyone has any experience of this for SQL Server implementations, I'd be interested to hear any positive/negative points about it too...

    Thanks

    Clive

    Haven't used it, but I'm in favor of using SQL replication rather than something low level like Double-Take. At least I can modify/troubleshoot replication! Perhaps other readers will have a different view...? 0% downtime is VERY expensive. One tip I've picked up is to ask whether downtime on Christmas morning from midnight to 5 am would be acceptable - that gains you quite a bit and shows whether or not it's truly 0 percent.

    Andy

    Replication is pretty straightforward, but as has been discussed in another thread, one of the problems is resyncing the data once the "production system" comes back on-line.

    Log shipping is expensive if you're going to let SQL Server 2000 handle it for you, because it requires Enterprise Edition, which isn't cheap. Of course, if you're looking for a clustered set of servers, EE is also required.
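
    If Enterprise Edition is off the table, log shipping can also be hand-rolled with a couple of scheduled jobs. A rough sketch (the database name Shipping and the share \\standby\logship are made up, and the standby must first be seeded with a full backup restored WITH NORECOVERY):

    -- On the production server: back up the transaction log on a schedule.
    BACKUP LOG Shipping
        TO DISK = '\\standby\logship\Shipping_log.trn'
        WITH INIT;

    -- On the standby server: apply the log, staying ready for further restores.
    RESTORE LOG Shipping
        FROM DISK = '\\standby\logship\Shipping_log.trn'
        WITH NORECOVERY;

    -- At failover time only: bring the standby database online.
    RESTORE DATABASE Shipping WITH RECOVERY;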

    If the goal is the heralded "five nines," then you're going to lose time switching over to the test server. Basically, all the clients have to be repointed. You'll also need to make sure all the backup processes, etc., that normally run on the production system are duplicated on the test system. You can have the backup schedules set up ahead of time, but repointing the clients will have to happen at the time of failure.

    Andy makes a good point: you need to clarify exactly when the system has to be available, rather than accepting the blanket "0% downtime." If you truly need as close to 0% as possible, you're probably looking at a clustered solution with redundancy in the controllers, etc.

    K. Brian Kelley

    bkelley@sqlservercentral.com

    http://www.sqlservercentral.com/columnists/bkelley/

    A bit more information for you guys, as I did miss a crucial point.

    By 0% downtime, I mean during business hours (8am - 8pm).

    The solution that is favoured is a fault tolerant server with everything hot swappable, where the servers are constantly monitored remotely by the vendor for any early signs of failure. Everything inside the server is duplicated, so if one part fails, the other takes over.

    The reason it is such a big issue for the server never to go down during working hours is that we're updating our AS/400 as well, and the server hosts our shipping system... so if it goes down, we cannot ship anything.

    However, my point is that if we do have the FT server, and all of the SELECTs/INSERTs/UPDATEs/DELETEs are properly controlled with transactions, then we should have no problem. We would just suffer a little downtime while we restore from the last backup and reapply any missing transactions.
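
    For what it's worth, that recovery sequence would look roughly like this (just a sketch - the database name Shipping and the backup paths are made up; WITH NO_TRUNCATE captures the tail of the log even if the data files are lost):

    -- Grab the tail of the log so no committed transactions are lost
    -- (works as long as the log file itself survived the failure).
    BACKUP LOG Shipping TO DISK = 'D:\backup\Shipping_tail.trn' WITH NO_TRUNCATE;

    -- Restore the last full backup, but don't recover yet.
    RESTORE DATABASE Shipping FROM DISK = 'D:\backup\Shipping_full.bak' WITH NORECOVERY;

    -- Reapply each log backup taken since the full backup, in order...
    RESTORE LOG Shipping FROM DISK = 'D:\backup\Shipping_log1.trn' WITH NORECOVERY;

    -- ...finishing with the tail, then bring the database online.
    RESTORE LOG Shipping FROM DISK = 'D:\backup\Shipping_tail.trn' WITH RECOVERY;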

    Clive

    I like the sound of that fault tolerant setup, and it could be all you need to ensure there are no failures during working hours.

    As for test and development servers: we normally refresh the databases on development and staging after a major rollout to production, to ensure all databases are the same as production. That way, when the next development cycle starts, we are working with the stored procedures as they are in production. We have also begun renaming the old databases with OLD_ in front of them, so that projects still in development don't get lost and can be scripted over to the refreshed database.
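
    As a sketch of what that refresh looks like in T-SQL (the database, logical file, and path names here are made up - check the logical names with RESTORE FILELISTONLY first):

    -- Keep the previous development copy around under an OLD_ name.
    EXEC sp_dboption 'Billing_Dev', 'single user', 'true';
    EXEC sp_renamedb 'Billing_Dev', 'OLD_Billing_Dev';
    EXEC sp_dboption 'OLD_Billing_Dev', 'single user', 'false';

    -- Restore the latest production backup as the new development copy,
    -- moving the files to the development server's drive layout.
    RESTORE DATABASE Billing_Dev
        FROM DISK = 'D:\backup\Billing_full.bak'
        WITH MOVE 'Billing_Data' TO 'E:\data\Billing_Dev.mdf',
             MOVE 'Billing_Log'  TO 'F:\log\Billing_Dev.ldf';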

    Robert Marda

    Robert W. Marda
    Billing and OSS Specialist - SQL Programmer
    MCL Systems

  • Sorry for the delay, but let me weigh in here.

    0% downtime isn't possible. No system in the history of the world has run with 0% downtime for any length of time. That's why the telcos and other systems shoot for 99.999%, which works out to roughly five minutes a year. They may miss it in a given year, but they can hit it over time. Even nuclear plants, which use triple redundancy, have downtime. There is no way to run without a single point of failure somewhere.

    That being said, a number of good solutions have been proposed here, but there are some other questions that haven't been asked. How big is this system? How are the clients accessing it? Is it web based or client-server? The reasons are:

    If it's client-server, using ODBC/DSN/ini files, log shipping or replication will work, but the time to get the clients to switch (as noted by Brian) will probably exceed any back-end downtime.

    Also, the size of the system matters. My production system is a 400MB database. It can be restored in minutes, with the maximum number of transaction logs applied in a few more minutes. If this is a 20GB database, it's a whole different story. Also, how big does your transaction log get in one minute? If you run log shipping, you could get into trouble if you generate a 20MB log in a minute. Not that you do, but you should know. You might need a clustered solution.
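
    If log backups are already running, msdb keeps enough history to answer the "how big does the log get" question. A rough sketch (backup_size is reported in bytes):

    -- How much transaction log is generated between log backups.
    SELECT  database_name,
            backup_start_date,
            backup_size / 1024 / 1024 AS log_backup_mb
    FROM    msdb.dbo.backupset
    WHERE   type = 'L'                                 -- log backups only
      AND   backup_start_date > DATEADD(dd, -7, GETDATE())
    ORDER BY database_name, backup_start_date;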

    Lastly, keep in mind you need to test this setup as well, so you need a second "set" of your solution for testing. You could get away with MSDN at $2k or so a year for the test environment, but this is another cost.

    I haven't used Double-Take, but I've heard great things about it. A number of hardware vendors have licensed the technology, so I'm sure it's in products I don't know about. I wouldn't be opposed to it, but you still have to fire up the second server and re-point the clients. Downtime...

    Clustering is probably the best option, IMHO, for minimal downtime, but it's complex, leaves you open to issues with Service Packs, and requires more time to monitor. Just be sure you have some excellent fault tolerance on the disk system.
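
    As a side note, once a cluster is in place you can confirm from T-SQL that the instance is clustered and see which node it is currently running on. A small sketch (ComputerNamePhysicalNetBIOS may need a recent service pack):

    -- Is this instance clustered, and which physical node is it on right now?
    SELECT  SERVERPROPERTY('IsClustered')                 AS is_clustered,
            SERVERPROPERTY('ComputerNamePhysicalNetBIOS') AS current_node;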

    Steve Jones

    steve@dkranch.net

    One angle on 100% uptime that hasn't been considered is replicating your database across sites. That way, if a site takes an outage (power, disaster, etc.), your replicated database will be intact at another site, hopefully a good distance away, and you can be back online in short order.

    We do transactional replication from site to site to provide for disaster recovery.

    Replication also allows you to load balance better - run reporting tools and query tools off the replicated database and the OLTP work on the publisher. Works pretty well for us.
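
    For anyone wanting to try it, the bare-bones T-SQL for a transactional publication looks roughly like this. It is only a sketch with made-up names (Shipping, Shipping_Pub, Orders, REMOTE_SQL); the distributor and snapshot agent setup are omitted, and in practice you would script it from Enterprise Manager:

    -- On the publisher: enable the database for transactional publishing.
    EXEC sp_replicationdboption @dbname = 'Shipping',
                                @optname = 'publish',
                                @value = 'true';

    -- Create the publication and add a table (article) to it.
    EXEC sp_addpublication @publication = 'Shipping_Pub',
                           @status = 'active';

    EXEC sp_addarticle @publication = 'Shipping_Pub',
                       @article = 'Orders',
                       @source_object = 'Orders';

    -- Push a subscription to the reporting server at the remote site.
    EXEC sp_addsubscription @publication = 'Shipping_Pub',
                            @subscriber = 'REMOTE_SQL',
                            @destination_db = 'Shipping_Report',
                            @subscription_type = 'push';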

    Cheers.
