July 16, 2007 at 9:47 am
We currently run builds daily. Our SQL model changes, procedures, and so on are executed during the build process via bcp.
We have some issues where multiple procedures are grouped into a single file, which makes a more agile method difficult, so we are looking at breaking the procedures out into stand-alone objects for the build, as they should be.
I wondered if anyone here has experience with the agile approach to database builds, and specifically with using tools such as DBGhost or Visual Studio for Database Professionals for schema change management as part of a build.
There are a few approaches we could take, but each has its drawbacks.
I am curious what other people are doing in this area. Any feedback is appreciated.
Thank you
July 17, 2007 at 2:16 pm
July 20, 2007 at 7:04 am
We keep every object in a separate file, as originally created by "Generate script". So for each table I have 4 files: .tab with the field definitions, .kci for the primary key, .fky for foreign keys, and .ext for extended properties (most of these are empty). Each stored procedure is in its own .prc file, views are in .viw, and likewise functions, triggers... The nightly build process grabs all files from the directory and creates a humongous SQL script consisting of all of them in the right order, separated by GO statements.
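A minimal sketch of that nightly concatenation step, assuming the per-object file-extension convention described above (the ordering list and function names here are illustrative, not the poster's actual tool):

```python
# Hypothetical sketch: gather per-object scripts and emit one build script
# with GO batch separators, running dependency-sensitive types first.
from pathlib import Path

# Tables first, then keys and properties, then code objects.
EXTENSION_ORDER = [".tab", ".kci", ".fky", ".ext", ".viw", ".prc"]

def build_script(source_dir: str) -> str:
    """Concatenate all object scripts into one batch-separated build script."""
    parts = []
    for ext in EXTENSION_ORDER:
        for path in sorted(Path(source_dir).glob(f"*{ext}")):
            parts.append(path.read_text())
            parts.append("GO")  # batch separator; empty files just add extra GOs
    return "\n".join(parts)
```

As noted later in the thread, the empty files simply produce consecutive GOs, which are harmless.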
July 20, 2007 at 8:27 am
Sergey, when you say '...most of these are empty)', are you saying the files exist so the builder can find them, but are 0 KB in size? Just to be certain I understand your description. This would then result in something like...
..... preceding commands.....
GO
GO
GO
GO
GO
..... subsequent commands.....
as an example?
July 20, 2007 at 8:37 am
Sounds very similar to the process we are trying now.
Our next step will be to create a true "model" database and try a schema compare tool.
The drawback here is a model change such as adding a not-nullable column to a table with existing data. You add it as nullable > run some logic to populate the column > make it not nullable. I imagine a schema tool will just generate a script that adds the not-nullable column directly, which will fail.
I have not found a good example of the schema compare tools being used as part of a build process yet.
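For illustration, the three-step pattern described above can be scripted explicitly rather than left to a diff tool. This sketch just emits the T-SQL text for the nullable > backfill > NOT NULL sequence; the table, column, and fill expression are hypothetical parameters, not anything from the thread:

```python
# Illustrative only: the migration a naive schema diff cannot produce.
def add_not_null_column(table: str, column: str, col_type: str, fill_expr: str) -> str:
    """Emit the nullable -> backfill -> NOT NULL pattern as GO-separated batches."""
    return "\nGO\n".join([
        # step 1: add the column as nullable so existing rows are legal
        f"ALTER TABLE {table} ADD {column} {col_type} NULL",
        # step 2: backfill existing rows
        f"UPDATE {table} SET {column} = {fill_expr} WHERE {column} IS NULL",
        # step 3: tighten the constraint once no NULLs remain
        f"ALTER TABLE {table} ALTER COLUMN {column} {col_type} NOT NULL",
    ])
```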
July 20, 2007 at 8:37 am
I used a process where we batched the runs, grouping objects in an ordering scheme (tables before indexes, etc.). We used to automate the deployment to QA and production on a weekly basis, and it worked well.
Unlike Sergiy, we kept each object and its permissions in the same file. We also copied the code in VCS to a new branch for each deployment, so we knew what had changed. The deployment file was maintained as we coded and verified against that deployment project.
http://www.sqlservercentral.com/columnists/sjones/vcspart3.asp
July 20, 2007 at 10:20 am
If you figure out the schema tool thing, write me an article. I'd be interested to see that work.
I think some people use it when they do an all-or-nothing deployment of things in the development environment. Since I've usually had things in flight that weren't being deployed, that wouldn't have worked for me.
July 20, 2007 at 9:49 pm
Yes, the empty files will cause multiple GOs. But these don't hurt!
July 23, 2007 at 12:05 am
We have most objects in their own file, the exception being the foreign keys. We use a VCS to record changes and our own tool to build databases based on a list of scripts that we pass to the tool. The tool runs the files in a pre-defined order, e.g. create tables, populate data, add PKs, FKs, etc.
Our tool allows us to deploy a database to a new server in a matter of seconds, rebuild development databases and test and apply patches consistently.
July 23, 2007 at 9:03 am
Al,
Which tool? I've used ER/Studio and ErWin. Didn't like ErWin, like ER/Studio, but both $$$.
July 23, 2007 at 11:13 pm
Steve,
It's a simple VB.NET command-line tool that uses SMO/DMO to create databases and execute scripts. It takes connection details and an XML file that defines the scripts to execute as its parameters. As we've written it ourselves, we can determine the order in which our scripts are executed and add other functionality as we require it, e.g. checking files out of our VCS with a particular label and executing them all, or copying an existing solution into a new solution and building the corresponding database.
As we are a small operation we thought it would be faster, more efficient and very much cheaper to develop this solution in house.
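The poster's manifest format isn't shown, so here is a minimal sketch of the idea in Python: an XML file lists the scripts, and the runner executes them in exactly that order. The `<scripts><script path="..."/></scripts>` schema is an assumption for illustration:

```python
# Hypothetical manifest reader: the XML element and attribute names are
# assumptions, not the poster's actual format.
import xml.etree.ElementTree as ET

def scripts_from_manifest(xml_text: str) -> list[str]:
    """Return script paths in the order the manifest lists them."""
    root = ET.fromstring(xml_text)
    return [node.get("path") for node in root.iter("script")]
```

Keeping the order in a data file rather than in code is what lets the same runner rebuild a development database, apply a patch set, or deploy to a fresh server.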
July 24, 2007 at 11:15 pm
Al -
Have you taken a look at either SQL Compare from Red Gate or ApexSQL's tool? I've used the command-line version of SQL Compare in conjunction with NAnt to automate schema comparison/modification as part of a nightly build process before, to some good effect... it takes a little bit of doing, but both products generate some pretty tight comparison scripts that handle dependencies (e.g. constraints) pretty well, most of the time.
Joe
July 25, 2007 at 5:06 pm
Al,
Sounds similar to what I did, with a few more bells and whistles. We had a more manual process that worked well with tight timelines and small releases (every week).
If it works, stick with it. Definitely helps to own the process so a vendor upgrade doesn't kill you when you least expect it.