• We use something very similar where I work. The key difference for us is that we have a table in the database that records which scripts have been run. We execute a batch file in our script directory, and it runs the SQL files in that same directory in list order (we prefix the filenames with a year-first YYYYMMDD date so they sort chronologically; multiples on one day become 20120103a, 20120103b, and so on). Each SQL file consists of two parts: the wrapper SQL and the payload SQL. The wrapper executes a stored procedure to check whether the script has already been run against that database. If it has not, the payload is executed and the script name is inserted into the tracking table. We also redirect the output of the cmd window to a text file.
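
    To make the wrapper idea concrete, here is a minimal sketch of the pattern, using hypothetical names (dbo.ScriptLog, dbo.ScriptHasRun, dbo.Widget); our real table and procedure carry more columns and logic, but the shape is the same:

        -- Hypothetical tracking table; ours also records who ran the script, etc.
        CREATE TABLE dbo.ScriptLog
        (
            ScriptName varchar(100) NOT NULL PRIMARY KEY,
            RunDate    datetime     NOT NULL DEFAULT (GETDATE())
        );
        GO

        -- Check procedure called by every wrapper.
        CREATE PROCEDURE dbo.ScriptHasRun
            @ScriptName varchar(100),
            @HasRun     bit OUTPUT
        AS
        BEGIN
            SET @HasRun = CASE WHEN EXISTS (SELECT 1 FROM dbo.ScriptLog
                                            WHERE ScriptName = @ScriptName)
                               THEN 1 ELSE 0 END;
        END;
        GO

        -- 20120103a_AddWidgetFlag.sql : wrapper around the payload.
        DECLARE @HasRun bit;
        EXEC dbo.ScriptHasRun @ScriptName = '20120103a_AddWidgetFlag',
                              @HasRun     = @HasRun OUTPUT;

        IF @HasRun = 0
        BEGIN
            -- Payload: the actual schema or data change.
            ALTER TABLE dbo.Widget ADD IsActive bit NOT NULL DEFAULT (1);

            -- Record the script so re-running the batch is a no-op.
            INSERT INTO dbo.ScriptLog (ScriptName) VALUES ('20120103a_AddWidgetFlag');
        END
        GO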

    The main benefit of the table in the database is that it essentially 'versions' the database. We have dozens of environments across development and QA, and developers restore newer backups at their own pace, so those databases can be from various points in time, with scripts applied only up to the point the DB was backed up and sanitized from production. Since each branch draws from a common SQL folder, we can automate running the scripts after any db restore, regardless of how old the backup is: after restoring a db, just get latest code and execute the batch script. We archive SQL scripts out of that folder quarterly, so a developer only needs to refresh their db at least that often, which is quite reasonable.
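
    As a rough illustration of what that 'versioning' buys you (same hypothetical dbo.ScriptLog table as above): because the file names are date-prefixed, a single query tells you roughly where a freshly restored database sits in the script timeline before you run the batch:

        -- Latest script applied to this copy of the database.
        SELECT TOP (1) ScriptName, RunDate
        FROM dbo.ScriptLog
        ORDER BY ScriptName DESC;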

    It should be noted that we do not use this process for production deployment; there, a release DBA executes the scripts in order manually, although there is no reason the process could not be used in production as well. Even though we run the batch repeatedly against restored test dbs before a release, things can still happen in a moving target like production (moving with respect to data), so we prefer to slow things down and execute one script at a time. Our application has a GUI that lets us monitor the executed-scripts table and verify that the DBA is running them in order.
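
    The check that GUI surfaces amounts to something like the query below (again against the hypothetical dbo.ScriptLog table): since alphabetical order of the date-prefixed names should match execution order, any script that ran before an alphabetically earlier one was run out of sequence.

        -- Scripts executed before a script that should have preceded them.
        SELECT cur.ScriptName, cur.RunDate
        FROM dbo.ScriptLog AS cur
        WHERE EXISTS
        (
            SELECT 1
            FROM dbo.ScriptLog AS prior
            WHERE prior.ScriptName < cur.ScriptName
              AND prior.RunDate    > cur.RunDate
        );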

    The one hiccup we had recently involved updating some strings that contained JavaScript syntax (@ signs, I believe), and SQLCMD was treating part of the value as a scripting variable. Since our production scripts are run via SSMS rather than SQLCMD, we could not use a SQLCMD-specific escape mechanism; the two are not compatible. We ended up breaking the updates into a couple of parts and concatenating onto the existing column values.
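
    For what it's worth, SQLCMD scripting variables use the $(name) syntax, so jQuery-style $(...) inside a string literal is enough to trigger substitution. The workaround looked roughly like this (hypothetical table and values; the real updates were messier):

        -- Pass 1: write everything up to and including the troublesome character.
        UPDATE dbo.PageTemplate
        SET Body = '<script>$'
        WHERE TemplateName = 'WidgetPage';

        -- Pass 2: append the rest onto the existing column value, so the full
        -- variable-like sequence never appears in a single literal for SQLCMD to scan.
        UPDATE dbo.PageTemplate
        SET Body = Body + '(document).ready(init);</script>'
        WHERE TemplateName = 'WidgetPage';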

    There's probably a slicker way to do parts of what I have described. I'm a developer, not a DBA, though with this many SQL Server installations the developer gets to wear that hat a little, and not always by choice.