Jshapiro 32900 (11/3/2015)
At our company, our policy is to back up the entire database before any script is applied, no matter how small the change. Whilst the policy is conservative and safe, it also uses a lot of resources in terms of time and server space.
I take a view based on recovery time, and the likelihood of having to make a Restore, or even multiple restores.
For a quick fix we make a Transaction Log Backup (to "empty" the transaction log at that point-in-time) and then a Differential Backup. At various points in the rollout we would take a Log Backup, and explicitly add a comment to it indicating what stage of the rollout we had got to (we have an SProc that makes the Log Backup, and it takes an optional "comment" parameter, so it's trivial for us to make a log backup during the rollout):
EXEC MyADminDB.dbo.usp_MyBackup @MyDBName='xxx', @MyComment='XXX'
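For illustration, a minimal sketch of what such a helper SProc might run under the covers, using standard BACKUP syntax. The database name, file paths, and the way the comment ends up in the backup set are all assumptions:

```sql
-- Illustrative only: the sort of thing a backup-helper SProc might execute.
-- Names and paths are invented; the comment lands in the backup set header.

-- Log backup, tagged so this point-in-time is easy to find later:
BACKUP LOG MyDB
    TO DISK = N'D:\Backups\MyDB_PreFix.trn'
    WITH NAME = N'Pre-fix log backup',
         DESCRIPTION = N'Before quick-fix rollout step 1';

-- Then the differential, which only captures pages changed since the last full:
BACKUP DATABASE MyDB
    TO DISK = N'D:\Backups\MyDB_PreFix.dif'
    WITH DIFFERENTIAL,
         DESCRIPTION = N'Pre-fix differential';
```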
For a large rollout (within scheduled downtime) I will always start with a full backup. That gives me the fastest recovery time if one of the rollout steps goes wrong and I have to restore-and-repeat (compared to having to restore a DIFF backup as well). We also have (built into the rollout scripts) various points at which we make a Log Backup. That allows me to restore the Full and "roll forward" the various LOG backups up to the last-known-good point in the rollout process and then resume from there. Of course I am not expecting to have ANY issues during rollout, as it has all been tested, but in the real world we sometimes get some data added, since the test, which conflicts with the rollout, so it's mostly about insurance.
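The restore-to-last-known-good sequence described above would look roughly like this (file names and paths are illustrative, not our actual convention):

```sql
-- Sketch of recovering to the last-known-good rollout point.
-- Restore the full backup but leave the DB restoring:
RESTORE DATABASE MyDB
    FROM DISK = N'D:\Backups\MyDB_Full.bak'
    WITH NORECOVERY, REPLACE;

-- Roll forward each log backup taken during the rollout, in sequence,
-- stopping after the last one known to be good:
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_Step1.trn' WITH NORECOVERY;
RESTORE LOG MyDB FROM DISK = N'D:\Backups\MyDB_Step2.trn' WITH NORECOVERY;

-- Bring the database online and resume the rollout from that point:
RESTORE DATABASE MyDB WITH RECOVERY;
```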
We use Redgate to compare our dev and live environments and to create the change scripts.
Personally I don't like that approach. We script everything (rather than "just doing it" live in the DEV database and then scripting it later with a COMPARE tool). Each SProc / Trigger / View has its own file (added benefit: they are stored in a Revision Control system). Any DDL changes we make are scripted - i.e. new/changed Tables, Columns, Indexes, FKeys etc. We use the SSMS GUI Table Design to prepare changes (lazy mode!) but then press the SCRIPT button, rather than the SAVE button, and then we run the script on DEV - which proves that it works. Thus all such scripts are available, in chronological order.
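The SCRIPT button produces an ordinary DDL script you can save straight into the change folder. A hypothetical example of the kind of output involved (table and column names invented for illustration):

```sql
-- What a scripted table change might look like, saved to its own file
-- rather than applied via the SAVE button:
ALTER TABLE dbo.Customer
    ADD EmailAddress nvarchar(255) NULL;
GO

CREATE NONCLUSTERED INDEX IX_Customer_EmailAddress
    ON dbo.Customer (EmailAddress);
GO
```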
I think this approach has the further advantage that if a DDL change ALSO needs an UPDATE script to "massage" the data, then that is all in the DDL script files, and that too is in chronological order. (I presume that a Database DDL Compare tool would not know about such data-changes and thus you would have to handle those separately.)
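A hypothetical example of a DDL change and its data "massage" kept together in one chronological script file (names invented for illustration):

```sql
-- New column, plus the back-fill that a schema-compare tool would never generate:
ALTER TABLE dbo.Orders
    ADD OrderStatusCode char(1) NULL;
GO

-- Populate the new column from the legacy text column:
UPDATE dbo.Orders
SET OrderStatusCode = LEFT(OrderStatusText, 1)
WHERE OrderStatusCode IS NULL;
GO
```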
At rollout we run all DDL scripts, in order, and then all SProc / Trigger / VIEW scripts in modify-date order. We concatenate them into a single "Rollout Script" so we can easily run that single script on TEST and then identically on PRODUCTION (for us that is usually numerous separate-client Production databases; the approach may have less value for a single application/database rollout).
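One way to assemble such a single Rollout Script, assuming you run it in SQLCMD mode (sqlcmd -i, or SSMS Query > SQLCMD Mode), is a master file that includes each piece in order via the :r directive. The file names here are invented:

```sql
-- Master rollout file, run in SQLCMD mode: DDL first (chronological),
-- then SProcs / Triggers / Views in modify-date order.
:r .\DDL\2015-10-01_AddOrderStatusCode.sql
:r .\DDL\2015-10-05_SplitColumnA.sql
:r .\Procs\usp_GetOrders.sql
:r .\Views\vw_OpenOrders.sql
```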
If we find that we have Chicken-and-Egg and we get a "X does not exist" type error, then we rearrange the missing object earlier in the script (or the offending object later 🙂 ), restore the DB and run the script again - i.e. the script should be "clean" when we need to run it on Production in the future. This allows us to take care of any FKey relationships that cause updates to need to happen in a particular order, and so on.
I guess we could create “reverse” scripts to restore if necessary
I've never been in the situation where we had to do that, and it has always seemed like a huge amount of work to me - in the sense that it is very difficult to test that it will work in all possible rollback circumstances! But for any environment where you might want to roll back after some time has elapsed AND keep all new data/changes, it would be essential. I just think that a rollout that splits ColumnA into ColumnA1 & ColumnA2, where all the data entry screens change to the New Way, makes it very difficult to build a bullet-proof "reverse" script.
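To make the ColumnA example concrete (all names hypothetical): the forward direction is mechanical, but the reverse is not, which is where the "bullet-proof" part falls down:

```sql
-- Forward: easy and deterministic (split rule invented for illustration):
ALTER TABLE dbo.T ADD ColumnA1 varchar(50) NULL, ColumnA2 varchar(50) NULL;
GO
UPDATE dbo.T
SET ColumnA1 = LEFT(ColumnA, CHARINDEX(' ', ColumnA + ' ') - 1),
    ColumnA2 = STUFF(ColumnA, 1, CHARINDEX(' ', ColumnA + ' '), '');
GO
-- Reverse: once users have entered rows directly into ColumnA1/ColumnA2,
-- there may be no lossless rule for recombining them into the old ColumnA,
-- so a generic "reverse" script cannot be guaranteed correct.
```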