Best Practice - Change Control

  • Has anyone found any good articles on best practices for implementing SQL release management or change control? Or can someone share wisdom from past experience with implementing change management for SQL Server 2000 and 2005?

    Our organization will soon be using ALDON for release control, and I have yet to find a non-convoluted way to put user-defined views, stored procedures, etc. under source control.

    I read the Microsoft article on change, configuration, and release management:

    http://technet.microsoft.com/en-us/library/cc966506.aspx

    There is some good information there on source storage methods, but not enough to understand how to set up a good folder structure or how to move from one environment (development) to the next (production).

    Any information that anyone is willing to provide will be welcomed with open arms.

    Thanks in advance to anyone who takes the time to reply.

  • Best practices for SQL change control...

    Well, I think some of the structures are easier to handle than others, especially functions, sprocs, and views.

    I have typically built what I would consider code structures and build structures. They look a lot alike but differ where tables come into play.

    The code structure I use looks like

    \Product

    \Product\Version

    \Product\Version\ServerClass

    \Product\Version\ServerClass\SQLAgentJobs

    \Product\Version\ServerClass\DatabaseName

    \Product\Version\ServerClass\DatabaseName\Base

    \Product\Version\ServerClass\DatabaseName\Tables

    \Product\Version\ServerClass\DatabaseName\Views

    \Product\Version\ServerClass\DatabaseName\Sprocs

    \Product\Version\ServerClass\DatabaseName\Functions

    \Product\Version\ServerClass\DatabaseName\Synonyms

    \Product\Version\ServerClass\DatabaseName\InitialPopulate

    For me, ServerClass was tied to different servers at various levels; I had a server that served the front end and others that served the back end, so they were broken up that way. The goal of this structure was to be able to build any released version from absolute scratch, including any base lookup data, which was populated from a script in the InitialPopulate directory. The Base directory usually has a script that contains a USE DatabaseName statement followed by logic to build any required roles or add users to a role. Also, EVERY script ends in GO followed by a cr/lf. If somebody forgets, it gets checked out and fixed, no exceptions.
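    As a rough illustration only, a Base script along those lines might look something like this (the database, role, and user names are invented for the example, and the syntax is SQL 2005 style); note every batch ending in GO followed by a new line:

    USE MyDatabase
    GO

    -- Create the application role if it does not already exist (invented name)
    IF NOT EXISTS (SELECT 1 FROM sys.database_principals WHERE name = N'app_readwrite' AND type = N'R')
        CREATE ROLE app_readwrite
    GO

    -- Add an existing database user (invented name) to the role
    EXEC sp_addrolemember N'app_readwrite', N'app_user'
    GO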

    For all structures (except sometimes tables) I start from a template that contains header information and drop/create logic. I don't edit the sprocs on the server; I edit the files and compile them. The server is the destination, not the source. All of the files are checked into either VSS or TFS.
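    A minimal drop/create template in that spirit might look like the following; the header fields, procedure name, and body are placeholders, not the poster's actual template:

    -- =============================================
    -- Object:      dbo.usp_GetCustomer
    -- Author:      <author>
    -- Created:     <date>
    -- Description: <what the procedure does>
    -- =============================================
    IF OBJECT_ID(N'dbo.usp_GetCustomer', N'P') IS NOT NULL
        DROP PROCEDURE dbo.usp_GetCustomer
    GO

    CREATE PROCEDURE dbo.usp_GetCustomer
        @CustomerId INT
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT CustomerId, CustomerName
        FROM dbo.Customer
        WHERE CustomerId = @CustomerId;
    END
    GO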

    My build structures are similar..

    \BuildNumberOrId

    \BuildNumberOrId\ServerClass

    \BuildNumberOrId\ServerClass\BuildScript

    \BuildNumberOrId\ServerClass\Work\SQLAgentJobs

    \BuildNumberOrId\ServerClass\Work\DatabaseName

    \BuildNumberOrId\ServerClass\Work\DatabaseName\Base

    \BuildNumberOrId\ServerClass\Work\DatabaseName\Tables

    \BuildNumberOrId\ServerClass\Work\DatabaseName\Views

    \BuildNumberOrId\ServerClass\Work\DatabaseName\Sprocs

    \BuildNumberOrId\ServerClass\Work\DatabaseName\Functions

    \BuildNumberOrId\ServerClass\Work\DatabaseName\Functions(EC)

    \BuildNumberOrId\ServerClass\Work\DatabaseName\Synonyms

    \BuildNumberOrId\ServerClass\Work\DatabaseName\InitialPopulate

    I use a set of cmd scripts similar to the one I have attached (sorry, I can't give you the ones I use since I don't "own" them) to combine all the *.sql scripts into larger scripts; I end up with a single sprocs script, a single views script, etc.

    In all cases but tables, the code here is a copy of the source-controlled version, shared then branched. This means I know EXACTLY what version is in a particular build, and the code in a build won't change because the underlying source did.

    Tables are a special case; rarely can we just drop and recreate once we have deployed. The table scripts are hand-crafted, taking us from one version of the structure to another. If the change is simple, like a field added at the end, then they are very simple, but if there are fields added in the middle that need to be populated, then the table script handles the full transformation from one structure to another.
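    To make that "full transformation" idea concrete, here is a rough, hypothetical sketch (table and column names are invented); the real scripts are written and reviewed by hand for each change:

    -- Hypothetical v2 of dbo.Orders: OrderType is added between OrderDate and
    -- Amount and must be populated for existing rows, so drop/recreate won't do.

    -- Build the new structure alongside the old one
    CREATE TABLE dbo.Orders_New
    (
        OrderId   INT         NOT NULL PRIMARY KEY,
        OrderDate DATETIME    NOT NULL,
        OrderType VARCHAR(20) NOT NULL,
        Amount    MONEY       NOT NULL
    )
    GO

    -- Copy and transform the existing data, deriving the new column
    INSERT INTO dbo.Orders_New (OrderId, OrderDate, OrderType, Amount)
    SELECT OrderId, OrderDate, 'Standard', Amount
    FROM dbo.Orders
    GO

    -- Swap the old structure for the new one
    DROP TABLE dbo.Orders
    GO
    EXEC sp_rename N'dbo.Orders_New', N'Orders'
    GO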

    The cmd scripts I wrote take all the *.sql scripts and combine them into larger units until I have a single script per server class. This script is tested and is deployed to the various environments. In the next post I will discuss the different environments..

    Sorry for the marathon post..

    CEWII

  • After much testing and soul searching I built a methodology that lets all server classes' code work in all environments unchanged.

    From SQL 2005 on we had the option of using synonyms, which are effectively references. They turn:

    LinkedServer.Database.Schema.Object into Schema.Object

    I only use them for border references. No database code ever references a user-defined database directly; those references ALWAYS go through a synonym. Only the system databases tempdb, master, and msdb can be referenced directly, and only for MS-provided code.
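    For example (the object and server names here are invented), a border reference can be wrapped like this:

    -- Local code only ever references dbo.CustomerMaster; the synonym hides
    -- which database (or linked server) actually owns the table.
    CREATE SYNONYM dbo.CustomerMaster
    FOR ProcessMaster.CRM.dbo.Customer    -- LinkedServer.Database.Schema.Object
    GO

    -- If the database moves, only the synonym is recreated; this code is untouched.
    SELECT CustomerId, CustomerName
    FROM dbo.CustomerMaster
    GO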

    What this means is that the code doesn't have to change if a database name changes, or even if a database is moved to another server; only the border references change.

    In previous versions of SQL, my linked servers on each server had the same names, each defining a connection to a different server class. I differentiated them on each machine by using server aliases that pointed at the correct server. For example, a linked server named ProcessMaster existed on all servers that filled the Process Front-End server class, but an alias at the machine level made sure that on dev machines ProcessMaster always pointed at the DEV Process Master, and on QC machines it pointed at the QC Process Master. With the use of synonyms this isn't required, but I still kind of like it. Mileage may vary.
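    A rough sketch of that arrangement (server and alias names invented): the same linked server name is created everywhere, and a machine-level client alias, set up in SQL Server Configuration Manager (or cliconfg on older versions), decides which physical machine it resolves to in each environment.

    -- Same linked server name on every server in the class. With
    -- @srvproduct = 'SQL Server', the linked server name is also used as the
    -- network name, so an alias called ProcessMaster on each machine controls
    -- where it actually points (DEV boxes alias it to the DEV master, QC boxes
    -- to the QC master, and so on).
    EXEC sp_addlinkedserver
        @server     = N'ProcessMaster',
        @srvproduct = N'SQL Server'
    GO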

    My goal is for the code itself to NEVER change in any of the environments: what works in Dev works in QC, works in Demo, works in Prod.

    I try to keep the number of potential changes to an absolute minimum, since every change is a potential risk. Also, once environments are set up, the border objects are not likely to change unless you make a process change.

    I hope this provides some insight. I am unsure what the MS best-practices folks would think about this; I just know that it has served me well, and I have been able to deliver fairly large upgrades with little or no issue. As I said before, mileage may vary.

    CEWII

  • Thanks for taking the time to provide feedback. The detailed information is not a problem at all; better to have more information than less. We will probably go to some sort of scripting practice, as this is the best way to leverage ALDON without buying any additional third-party tools.

    Appreciate the fast response also.......

  • I did all this without buying tools, the only thing we had was VSS and later TFS.

    If you are doing smaller builds more often, this model requires a fair amount of discipline and one or two people who act as gatekeepers or release coordinators. But you can generate high-quality builds that transition you from one version of a set of structures to another without problems.

    CEWII

  • We've been using Microsoft Visual Studio Team Edition for Database Professionals. It allows us to hook right into the existing development methodologies with labelling, version control, build numbers, the works. Our team has also developed an in-house tool for automating the running of scripts, though building the scripts was, to a degree, a manual process; this runs out of source control. I've also used Red Gate's SQL Compare to integrate with source control.

    The key, the hard part, is figuring out how to do incremental updates that could lead to data loss. Adding a nullable column or updating a stored proc is easy. Modifying existing structure while maintaining the existing data in an automated fashion is hard.
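    The "easy" cases mentioned above look something like this (table, column, and procedure names are hypothetical); both changes are purely additive or simply replace a definition, so there is no risk of data loss:

    -- Easy: adding a nullable column requires no data migration
    ALTER TABLE dbo.Orders ADD Notes NVARCHAR(255) NULL
    GO

    -- Easy: updating a stored procedure just replaces the definition
    ALTER PROCEDURE dbo.usp_GetOrder
        @OrderId INT
    AS
    BEGIN
        SET NOCOUNT ON;

        SELECT OrderId, OrderDate, Notes
        FROM dbo.Orders
        WHERE OrderId = @OrderId;
    END
    GO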

    By and large, we've broken all objects down to their component parts, so that you can maintain versions of each index, table, etc. Then comes a stringent system for labelling the versions, so that you can accurately recreate a complete database as of any point in time. After that, it's a matter of generating the incremental deployment scripts, adding them to source control, and incorporating them into any type of build process. Providing a way to run them automatically is then almost easy.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey (8/5/2009)


    The key, the hard part, is figuring out how to do incremental updates that could lead to data loss. Adding a nullable column or updating a stored proc is easy. Modifying existing structure while maintaining the existing data in an automated fashion is hard.

    That right there is the trick. Most everything else is easy to implement. But the scripts to go from one structure version to another, whenever it is anything harder than adding a nullable column at the end, are always the problem. You need a human brain in there to make the decisions about what is going to happen. Both Red Gate and the VSTS DB Edition have pretty good SQL comparison tools, and VSTS allows you to compare a DB to your project and sync the two. What I typically looked for was how tables had changed, and then I built the scripts to handle the conversions.

    CEWII
