Having ALL of your SQL objects, not just stored procedures, triggers and UDFs, under source control is the ultimate solution. This way every single object gets the benefit of check out/check in and a full version history, down to the last character - who, when, why and what.
Using a modelling tool such as Visio or ERwin is great for maintaining a logical model but, when it comes to physically deploying changes, most of these tools are flaky at best at producing a delta script. Have you ever generated a delta script from the tool and actually run it to upgrade a target database? If your changes are anything other than simple, the chances are that the script will fail, leaving you with a schema in an unknown state and a debugging job to do. Alternatively, you could hand-code an ALTER script, but that robs you of valuable time.
The other problem with a development team of any size sharing a database model is concurrent development. Do you have a single model file that each developer checks out and in? If so, that model file becomes a single-threaded bottleneck that gets worse as the number of developers and DBAs on the team grows. This isn't such a big problem if the team is small and/or the number of schema changes is low but, unless your company is standing still, that kind of situation rarely lasts. At some point the business will need to adapt to changing market conditions, and that almost always means lots of database schema changes. Nor can you have several developers each updating a different version of the model file, as that would lead to utter chaos.
A core problem with modelling tools, and any other process that relies on delta scripts, is storing those delta scripts as part of the 'change package' in the source control system. Although they can be linked to all the other changes made for a particular change request, they are simply a 'new' script as far as the source control system is concerned. If you use advanced configuration management concepts, such as a main codeline with isolated worksets that are merged back into it when a piece of work is finished, then delta scripts are useless for a visual compare and merge operation. For example, if you needed to add a column to a table you could either:
1. Code (or produce using a diff tool, modelling tool etc.) a delta script that states:
ALTER TABLE x ADD y INT NULL
give it a name such as ChangeRequest1254delta.sql and check it in as part of your change package.
2. Check out the drop/create script held in source control for table x, modify the CREATE statement to include the new column and then check it in as part of your change package.
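To make option 2 concrete, here is a sketch of what the drop/create script for table x might look like after the edit. The column names and data types are illustrative assumptions; only column y is new:

```sql
-- Table_x.sql: the drop/create script held in source control for table x.
-- All names and types are illustrative; only column y is new for this change.
IF OBJECT_ID('dbo.x', 'U') IS NOT NULL
    DROP TABLE dbo.x
GO

CREATE TABLE dbo.x
(
    id            INT         NOT NULL PRIMARY KEY,
    existing_data VARCHAR(50) NOT NULL,
    y             INT         NULL  -- the new column for this change request
)
GO
```

Because the whole CREATE statement lives in one versioned file, the source control system can diff and merge it like any other code file.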
If you use option 1, and in the meantime another developer has modified the same table (also using option 1, but with a conflicting change) and has already merged their change package back into the main codeline, you will never see that change or be notified of it by the source control system. Only when both delta scripts have been executed against the test database, and the functionality tested, will any issues come to light. As we all know, the earlier in the development lifecycle you can spot a potential problem, the cheaper it is to fix.
This brings me on to the same situation, but with both developers using option 2. When you perform your merge, the source control system will immediately flag a conflict between your table drop/create script and the one checked in by the other developer. This gives you a chance to review the other developer's changes, decide whether they affect your change package, and perform any necessary rework. That eliminates the wasted time (by the deployment and test staff etc.) seen with option 1. It is also much better to rework something you have just finished than to do it days (or weeks!) later, when the test team tells you something is wrong.
Although I have used the main codeline/isolated workset paradigm here, the underlying principles remain the same when working with simpler source control systems, such as Visual SourceSafe with pessimistic check-outs.
The main advantage of this way of working is that it is extremely easy for developers: they work in exactly the same manner as they do for normal application code. Making a process simple to understand and easy to use is the best way to ensure it is adhered to, period.
The only problem with option 2 is that you need a magic black box that can take a set of drop/create scripts and apply them to a target database seamlessly, without having to drop and recreate that database. In other words, you need something that can build a database from those scripts and then use it as the source for comparing and upgrading an existing database.
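As a minimal sketch of the compare step such a black box would perform, the following query assumes two databases on the same SQL Server instance, hypothetically named SourceBuild (built fresh from the scripts) and Target (the database to upgrade). It only detects missing columns; a real tool must also handle data types in full, constraints, indexes, dependency ordering, data preservation and much more:

```sql
-- Generate ALTER TABLE statements for columns that exist in the database
-- built from source (SourceBuild) but are missing from the target (Target).
-- The database names are assumptions for this sketch.
SELECT 'ALTER TABLE ' + s.TABLE_SCHEMA + '.' + s.TABLE_NAME
       + ' ADD ' + s.COLUMN_NAME + ' ' + s.DATA_TYPE AS upgrade_statement
FROM   SourceBuild.INFORMATION_SCHEMA.COLUMNS AS s
LEFT JOIN Target.INFORMATION_SCHEMA.COLUMNS AS t
       ON  t.TABLE_SCHEMA = s.TABLE_SCHEMA
       AND t.TABLE_NAME   = s.TABLE_NAME
       AND t.COLUMN_NAME  = s.COLUMN_NAME
WHERE  t.COLUMN_NAME IS NULL;
```

Even this toy version shows why building the database from source first is so useful: once both schemas exist as real databases, the comparison becomes a set of straightforward metadata queries rather than a fragile script-parsing exercise.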
Once you have reached this logical conclusion, you have to develop such a tool. But how much would that cost in terms of money and resources? Too much for most IT shops, I suspect.
Fortunately such a tool already exists and it's called DB Ghost (http://www.dbghost.com). It is the only tool on the market today that enables you to use your source control system for SQL in the same way that you do for all other application code.
Once you've 'clicked' with the power of this approach I guarantee you will never work any other way with SQL code again.
DB Ghost - Build, compare and synchronize from source control = Database Change Management for SQL Server