Questions on Software Control Management and the Database

  • We currently run builds daily. Our SQL model changes, procedures, etc. are executed during the build process via bcp.

    We have some issues where multiple procedures are grouped into a single file, which makes a more agile method difficult, so we are looking at breaking the procedures into stand-alone objects for the build, as they should be.

    I wondered if anyone here has experience with the agile approach to database builds, and specifically with using tools such as DBGhost or Visual Studio for Database Professionals for schema change management as part of a build.

    There are a few approaches we could take, but each has its drawbacks.

    I am curious what other people are doing in this area. Any feedback is appreciated.

    Thank you

  • I worked on a team once that had a similar quandary.  We had an internally developed build/release program.  Our approach was to record related issues and scripts.  By default, the script name matched the object name.  This usually helped avoid duplicated or conflicting work, but in some cases it just wasn't practical to break up a large script into multiple pieces.  In those cases we included an object list with the script that could be checked against the objects specified for other scripts for duplication.
     
  • We keep every object in a separate file, as created originally by "Generate script". So for each table I have 4 files: .tab with field definitions, .kci for the primary key, .fky for foreign keys and .ext for extended properties (most of these are empty). Each stored procedure is in its own .prc file, views are in .viw, functions, triggers... The nightly build process grabs all files from the directory and creates a humongous SQL script consisting of all of them in the right order, separated by GO statements.
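    The nightly concatenation step Sergey describes could be sketched roughly like this (a hedged illustration, not his actual build code; the directory layout and the exact phase ordering are assumptions based on his description):

```python
import os

# Rough phase order inferred from the post: tables first, then keys,
# then views/procedures, then extended properties.
PHASE_ORDER = [".tab", ".kci", ".fky", ".viw", ".prc", ".ext"]

def build_script(script_dir: str) -> str:
    """Concatenate every per-object script in phase order, separated by GO."""
    batches = []
    for ext in PHASE_ORDER:
        for name in sorted(os.listdir(script_dir)):
            if name.endswith(ext):
                with open(os.path.join(script_dir, name)) as f:
                    batches.append(f.read().strip())
    # Empty files still contribute a batch, so the output can contain
    # back-to-back GO statements -- SQL Server tolerates these.
    return "\nGO\n".join(batches) + "\nGO\n"
```

    Empty per-object files fall through naturally here, which is what produces the repeated GOs discussed below.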

  • Sergey, when you say '...most of these are empty)', are you saying the files exist so the builder can find them, but have 0 KB size? Just to be certain I understand your description. This would then result in something like...

    ..... preceding commands.....

    GO

    GO

    GO

    GO

    GO

    ..... subsequent commands.....

    as an example?

  • Sounds very similar to the process we are trying now.

    Our next step will be to create a true "model" database and try a schema compare tool.

    The drawback here is a model change such as adding a non-nullable column to a table with existing data. You add it as nullable > perform some logic to fill the column > make it not nullable. I imagine a schema compare tool will just generate a script that adds the column as not nullable, which will fail against the existing rows.

    I have not found a good example of the schema compare tools being used as part of a build process yet.
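    The add-nullable > backfill > not-nullable sequence above is mechanical enough to generate. A minimal sketch (the table, column and backfill values in the example are hypothetical; a schema compare diff would typically emit only the final NOT NULL form and fail as described):

```python
def add_not_null_column(table: str, column: str, col_type: str,
                        backfill_sql: str) -> str:
    """Emit the three-step T-SQL batch for adding a NOT NULL column to a
    table that already holds rows: add it NULL-able, backfill, tighten."""
    steps = [
        f"ALTER TABLE {table} ADD {column} {col_type} NULL",
        f"UPDATE {table} SET {column} = {backfill_sql}",
        f"ALTER TABLE {table} ALTER COLUMN {column} {col_type} NOT NULL",
    ]
    return "\nGO\n".join(steps) + "\nGO"

# Example (names are made up):
# print(add_not_null_column("dbo.Orders", "Region", "varchar(10)", "'N/A'"))
```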

     

  • I used a process where we batched the runs, grouping objects in an ordering scheme (tables before indexes, etc). We used to automate the deployment to QA and production on a weekly basis and it worked well.

    Unlike Sergiy, we kept the object and its permissions in the same file. We also copied the code in VCS to a new branch for each deployment, so we knew what had changed. The deployment file was maintained as we coded and verified against that deployment project.

    http://www.sqlservercentral.com/columnists/sjones/vcspart3.asp

  • If you figure out the schema tool thing, write me an article. I'd be interested to see that work.

    I think some people use it when they do an all-or-nothing deployment of things in the development environment. Since I've usually had things in flight that weren't being deployed, that wouldn't have worked for me.

  • Yes, the empty files will cause multiple GOs. But these don't hurt!

  • We have most objects in their own file, the exception being the foreign keys. We use a VCS to record changes and our own tool to build databases based on a list of scripts that we pass to the tool. The tool runs the files in a pre-defined order, e.g. create tables, populate, add PKs, FKs, etc.

    Our tool allows us to deploy a database to a new server in a matter of seconds, rebuild development databases and test and apply patches consistently.

  • Al,

    Which tool? I've used ER/Studio and ErWin. Didn't like ErWin, like ER/Studio, but both $$$.

  • Steve,

    It's a simple VB.NET command line tool that uses SMO/DMO to create databases and execute scripts. It takes connection details and an XML file that defines the scripts to execute as its parameters. As we've written it, we can determine the order in which our scripts are executed and add other functionality as we require it, e.g. checking files out of our VCS with a particular label and executing them all, or copying an existing solution into a new solution and building the corresponding database.

    As we are a small operation we thought it would be faster, more efficient and very much cheaper to develop this solution in house.
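    The manifest-driven approach Al describes could look something like this on the parsing side (a sketch only; the real tool is VB.NET using SMO/DMO, and the manifest element names here are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Hypothetical manifest format:
#   <deployment>
#     <script path="tables/Customer.tab"/>
#     <script path="procs/GetCustomer.prc"/>
#   </deployment>

def scripts_in_order(manifest_xml: str) -> list:
    """Return script paths in the exact order the manifest lists them,
    so the caller can execute them sequentially against the target."""
    root = ET.fromstring(manifest_xml)
    return [s.get("path") for s in root.findall("script")]
```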

  • Al -

    Have you taken a look at either SQL Compare from Red Gate or ApexSQL's tool?  I've used the command line version of SQL Compare in conjunction with NAnt to automate schema comparison/modification as part of a nightly build process before to some good effect...  it takes a little bit of doing, but both products generate some pretty tight comparison scripts that handle dependencies (e.g. constraints) pretty well/most of the time.

    Joe

     

  • Al,

    Sounds similar to what I did with a few more bells and whistles. We had a more manual process that worked well with tight timelines and small releases (every week).

    If it works, stick with it. Definitely helps to own the process so a vendor upgrade doesn't kill you when you least expect it.
