Hi Andy - I mentioned cross-continent development as an example to illustrate my point about a source control system being necessary: it doesn't care about time or distance boundaries.
Terry - I'm not specifically talking about DB Ghost here - I'm making a point about source control and how it should be used.
However, in answer to your question: DB Ghost and its way of doing things works with ANY source control system.
This is evidenced by our major customers, who use it with VSS, CVS, Subversion, Vault, PVCS Dimensions, Telelogic Synergy, MKS SourceIntegrity, Perforce, Microsoft TFS and Seapine Surround. All DB Ghost actually requires to start the process off is a set of CREATE scripts on disk that describe the entire schema, so which SCM those scripts come from is basically irrelevant. Because the developers make their changes only to the object creation scripts, no diff scripts are necessary and there is, therefore, one single source for the schema. It is also easy to look at the schema history in the SCM, as the list will show the names of the actual objects that were modified rather than a sequentially numbered set of diff/change scripts.
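For illustration, a per-object CREATE script in this style of layout might look something like the following - the folder layout, table and constraint names here are invented for the example, not taken from DB Ghost itself:

```sql
-- Hypothetical file: Tables/dbo.Customer.sql
-- One CREATE script per object, checked into the SCM. Developers edit this
-- file directly; the tool works out the diff against the target database.
CREATE TABLE dbo.Customer
(
    CustomerID   int           NOT NULL IDENTITY(1, 1),
    CustomerName nvarchar(100) NOT NULL,
    CreatedDate  datetime      NOT NULL
        CONSTRAINT DF_Customer_CreatedDate DEFAULT (GETDATE()),
    CONSTRAINT PK_Customer PRIMARY KEY CLUSTERED (CustomerID)
);
```

The SCM history for `Tables/dbo.Customer.sql` then reads as the history of that one object, which is what makes browsing changes by object name possible.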
I think you're doing your company a favor by having a solid process in place. Adding additional software into your process often requires your process to change without significant benefits in control or quality. Keep it simple, keep it flexible.
We also use a process similar to yours, but with some minor differences. The differences probably stem from our different organizational structure and methodology, but the process is fundamentally the same.
1. Database developer sandbox.
2. Shared application developer playground.
3. QA/Testing environment.
4. Production/Customer delivery.
In the database developer sandbox, the developer can go hog-wild. When they have tested their solution, they push it into the next zone. For tables and data, the scripts they write are simply the diff scripts to get a table from one build to the next. We do not modify the CREATE TABLE script in order to generate a change script; it's not that hard to write a change script that modifies one or more tables and manages any data migration made necessary by the schema change, so I don't see the need for software to produce diff scripts. We have also set up a naming convention for SQL files that lets any developer know whether someone else has changed the same object among the "myriad" of changes that build up over many months or years. Once a table has been modified by a change script, you can generate the CREATE TABLE script in your sleep -- from SSMS, SMO, or other free utilities. We only use the CREATE TABLE script for record-keeping purposes. All new databases are created from a "clean" database backup, maintained by the DBA and built up by the change scripts (after they leave the QA/Test zone). All of this is done using batch files, osql/sqlcmd, and SQL script files -- and all of the bat and sql files are under the same version control (we use MS VSS, but it doesn't matter which one you use).

For textual objects (stored procedures, views, functions) we don't bother with ALTER statements. We simply have drop-and-create statements in the SQL file that defines the object. The db developer modifies the syntax in the file and checks it back into version control, where it will be ready for the next step. The only special cases are dynamically created objects and functions used in CHECK or DEFAULT constraints.
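As a rough sketch of the two script styles described above - the object names and the file-naming convention are invented for the example, not the poster's actual convention:

```sql
-- Hypothetical change script: Chg_0042_dbo.Customer_AddRegion.sql
-- Moves the table from one build to the next and handles the data migration.
ALTER TABLE dbo.Customer ADD RegionCode char(2) NULL;
GO
-- Backfill existing rows before tightening the constraint.
UPDATE dbo.Customer SET RegionCode = 'XX' WHERE RegionCode IS NULL;
GO
ALTER TABLE dbo.Customer ALTER COLUMN RegionCode char(2) NOT NULL;
GO

-- Hypothetical drop-and-create script for a textual object: dbo.GetCustomer.sql
-- The whole file is simply re-run whenever the procedure changes.
IF OBJECT_ID('dbo.GetCustomer', 'P') IS NOT NULL
    DROP PROCEDURE dbo.GetCustomer;
GO
CREATE PROCEDURE dbo.GetCustomer
    @CustomerID int
AS
    SELECT CustomerID, CustomerName, RegionCode
    FROM dbo.Customer
    WHERE CustomerID = @CustomerID;
GO
```

The change script is written once and applied in every zone in order; the drop-and-create file is the single source for the procedure, so its version-control history is the procedure's history.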
We automate (with batch files) pushing builds into the QA/Testing zone. The automation labels all DB, application, and unit-testing code in source control, and the labeled version is then pushed into QA/Testing. Part of the automation runs scripts to check for conformance to standards and the inclusion of certain files or scripts. Because of the automation and version control, we have better consistency and quality. Automation also means we spend less than two minutes running and documenting a build! Additionally, our final push into Production/Customer delivery is the equivalent of the last QA-approved build, so there's almost no additional work involved there.
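A minimal sketch of the batch-file side of such an automation step, assuming the naming convention above - the server, database, and folder names are placeholders, and a real version would also drive the source-control labeling:

```bat
@echo off
REM Hypothetical build step: apply every change script to the QA database
REM in name order, stopping on the first error (-b makes sqlcmd set ERRORLEVEL).
for %%f in (ChangeScripts\Chg_*.sql) do (
    sqlcmd -S QASERVER -d AppDB -E -b -i "%%f"
    if errorlevel 1 (
        echo Build failed on %%f
        exit /b 1
    )
)
echo Build applied successfully.
```

Because the scripts sort by their sequence number, the same batch file produces the same database in every zone, which is what makes the final Production push a non-event.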
Good luck, Terry.
Just wanted to be clear that I'm not against Red Gate's products. Rather, I like them.
But I've seen Development groups get too excited about buying new, cool-looking software that may not be the right fit for them, or ends up not being utilized to its fullest potential.
It's all about trying it on to see if it fits.
Sometimes, I wish that the money that was spent on certain purchases had gone into my salary instead.
Don't get me wrong - your process isn't bad (sorry if I came across badly back there) - it's just not as optimal as it could be and, in fact, we aren't very far apart in most of our thinking. Perhaps a little history of where we came from might help...
<shimmering effect> We started out writing DB Ghost because we had virtually the exact same process that you use and, although we were just five guys sitting in the same cubicle, we still managed to create problems when moving to the QA environment, such as re-using primary key values for static data, overwriting each other's logic changes and writing new code against columns that had been removed by someone else.
We said, "We didn't have these problems with release 1.0 - why do we get them now?" The answer, of course, is that in R1 we just worked on the create scripts and then threw away and rebuilt the database whenever we did a release. So, we mused, all we needed was a black box that could look at our create scripts, automatically work out what was different, and then make those changes to the target database without losing any data. We had some full-and-frank discussions over whether it was possible - some toys were thrown, harsh words were spoken and some of us were even sent to bed early. However, we decided to write the "black box", and that's where DB Ghost came from (actually there was nearly a punch-up over the name as well, so maybe some counselling is in order ) </shimmering effect>
The custom scripts feature (before and after sync) is there so that any data migrations etc. (i.e. things that require semantic knowledge) can be slotted in.
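To illustrate how such before/after sync scripts might carry the semantic knowledge a schema-diff tool can't infer - the column split and all object names here are an invented example, not a DB Ghost sample:

```sql
-- Hypothetical "before sync" script: preserve data that the schema change
-- would otherwise lose, e.g. before FullName is split into two columns.
SELECT CustomerID, FullName
INTO dbo.tmp_CustomerNameBackup
FROM dbo.Customer;
GO

-- Hypothetical "after sync" script: migrate the preserved data into the new
-- FirstName/LastName columns, then drop the holding table.
UPDATE c
SET c.FirstName = LEFT(b.FullName, CHARINDEX(' ', b.FullName + ' ') - 1),
    c.LastName  = LTRIM(SUBSTRING(b.FullName,
                        CHARINDEX(' ', b.FullName + ' '), 200))
FROM dbo.Customer AS c
JOIN dbo.tmp_CustomerNameBackup AS b
    ON b.CustomerID = c.CustomerID;
GO
DROP TABLE dbo.tmp_CustomerNameBackup;
GO
```

The sync engine handles the structural change; the paired scripts supply the meaning of the data moving between old and new shapes.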
Generally speaking, the developers all work in their "wild west" sandbox databases and, when they're ready, they check in their changes, back up the sandbox database and then use DB Ghost to update their sandbox database from the latest set of CREATE scripts in source control. Better yet, if the scripts are extracted from a baselined/labelled set then you have a known point in source control from which to answer questions like "what has been checked in since I last synched my database?" As a purist I would suggest simply rebuilding the sandbox database from the scripts and then re-populating the data, to avoid those annoying issues you get with corrupted data causing phantom failures during testing.
Another subtle benefit is that DB Ghost not only adds new objects and updates existing ones but also removes anything that isn't in the source. Thus it tidies up your local database, whereupon you can do a last run through unit testing to make absolutely sure nothing was missed from the check-in...