• I came up with a DB deployment strategy years ago when we were a 3-person team, and we still use it now that the team has grown to about 15.
    We have a QA environment that we make 'like live' (a copy down from live) at the start of each sprint's test cycle. We test it before any deployment (old code, old DB). We then generate deployment scripts from source control against that environment and run them; testing now covers old code against the new DB, which is exactly the situation live will be in at the moment of final deployment. The code is then deployed and we test new code against the new DB. The scripts are saved, and we may end up with multiple scripts per database as bug fixes or late work items are added to QA.
    Once testing is complete, we make the UAT environment 'like live' and repeat the cycle: test old code/old DB, deploy all the accumulated scripts and test old code/new DB, then deploy the code and test new/new.
    Following that, we deploy all of those scripts to Stage. Stage is not made 'like live'; instead it stays on the same update cycle as live, giving one last check that the scripts will not cause errors on the final deployment to live, which is the last step.
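    The pipeline above can be sketched roughly like this (a minimal illustration only; the environment names are from this post, but the script list, the helpers, and the logging are all hypothetical):

    ```python
    # Sketch of the promotion pipeline: QA and UAT are refreshed from live,
    # Stage and Live are not. Scripts accumulate during the QA cycle and are
    # replayed, in order, against every later environment.
    log = []      # stands in for what would really be test/deploy runs
    scripts = []  # deployment scripts saved during the QA cycle

    def refresh_from_live(env):
        """Make an environment 'like live' (copy down from live)."""
        log.append(f"{env}: refreshed from live (old code, old DB)")

    def apply_scripts(env):
        """Run every accumulated script, in the order it was saved."""
        for script in scripts:
            log.append(f"{env}: ran {script}")

    def deploy_code(env):
        log.append(f"{env}: new code deployed")

    # QA: refresh, generate scripts from source control, test each state.
    refresh_from_live("QA")                              # old code / old DB
    scripts += ["001_orders.sql", "002_customers.sql"]   # plus any late fixes
    apply_scripts("QA")                                  # old code / new DB
    deploy_code("QA")                                    # new code / new DB

    # UAT: same refresh-and-replay cycle with the accumulated scripts.
    refresh_from_live("UAT")
    apply_scripts("UAT")
    deploy_code("UAT")

    # Stage is NOT refreshed: it stays on live's update cycle, so replaying
    # the scripts here is the last rehearsal before live itself.
    apply_scripts("Stage")
    deploy_code("Stage")

    apply_scripts("Live")
    deploy_code("Live")
    ```

    The point of the sketch is the asymmetry: QA and UAT always start from a fresh copy of live, while Stage only ever receives the scripts, so it proves they work against a database that has drifted exactly as far as live has.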

    We shortcut the process for emergency patches: since QA will already be in the 'next sprint' state, we test patches on UAT and follow the process from there.

    This has worked well for a few years now. I know TFS, the Red Gate tools, etc. offer a 'deploy from source control to any environment' option, but we prefer knowing that the exact deployment scripts have been thoroughly tested, both for accuracy/functionality and for the deployment process itself, several times over. Using those tools could simplify deployments, but at the cost of some of that assurance.