Version Control your Stored Procedures
Posted Thursday, May 6, 2010 2:03 PM
Forum Newbie


Jon McGhee (5/6/2010)
Wow, that's really labor intensive! Compare that to the integration between a Visual Studio C# project and Visual SourceSafe, where the checkout/checkin can be done with a mouse click.


Not really.

I guess it might be labor intensive if that's all you did all day. But if we're talking several files a day, maybe a minute or two maximum?

I haven't used Visual Studio for a number of years, but the J++ and C++ versions were slow and clunky. The VSS integration was shaky and the PVCS Tracker integration was worse.

I find the SSIS version different enough that the two aren't really comparable, but I don't think it supports managing non-SSIS artifacts very well, such as the deployment and packaging that I would send out to a customer. As such I am using that sort of checkout anyway.

I still support some stuff that uses the old MS Java VM as well. The only thing I use that Visual Studio for is compiling; their editor isn't worth the headache.
Post #917400
Posted Thursday, May 6, 2010 2:09 PM
Grasshopper


Great job Mike! This will really help us.

I’ve been looking for something like this. Now we can export our schema setup to a folder every night and fire off an SVN commit afterwards if there are changes. This would help keep a constantly versioned set of scripts for our databases.

Up till now I'm the only DBA in our company who does this (there are three of us), and only when I take the time. This way we can do it not only for our own internal databases but for all those maintained by our 3rd-party software. Occasionally upgrades break our own scripts, and having tools like this would allow us to quickly diff the scripts to find the exact changes.

Thanks!
Post #917405
Posted Friday, May 7, 2010 1:53 AM
SSC Rookie


Andy Llewellyn (5/6/2010)
We use RedGate SQL Compare to create all the scripts for a DB and this structure is then held in Subversion.
We then develop using QA (or whatever suits the developer).
When we need to commit back to the branch then the first step is to compare the script folder with the development DB and synchronise any scripts that need it. Then you commit to subversion in the normal way.

We find this works well for our needs at the moment.


Andy,
Have you had a chance to try SQL Source Control? This should save you the additional steps of having to use SQL Compare to compare your dev DB to a scripts folder and commit to SVN, as it integrates this as a single action in SQL Server Management Studio. It's in early access at the moment, so I wouldn't recommend relying on it in a production environment, but I'd really welcome any feedback on the tool and the workflow.

http://www.red-gate.com/products/SQL_Source_Control/index.htm

David Atkinson
Product Manager
Red Gate Software
Post #917692
Posted Friday, May 7, 2010 3:04 AM
SSC-Enthusiastic


Thanks for +ve comments,

Anyone interested in sample code of what we did, take a look here
http://www.mediafire.com/?eim2mydnjti

Disclaimered as you'd expect.

Regarding the ongoing debate about "proper" source control usage, I think a lot of people are indeed missing something. I haven't seen anyone explain adequately how they prevent and detect accidents when rolling out changes to their production servers (e.g. accidental changes by authorized dbo-level users), nor how they can prove that what is in their source repository is exactly and only what is in their production environment.

Another example of where this sort of automation is useful: by scripting to a folder and running a compare against a baseline, you can evaluate the performance and accuracy of a rollout by DBA staff, and also produce an exact list of the changes made in a rollout. Appending that list to your change control schedule shows that what was meant to be changed is exactly what was changed, capturing those last-minute "saves" or "tweaks" that would otherwise be invisible.

Finally, this also allows you to capture changes to more esoteric areas that in some environments are considered beyond the scope of a SQL developer's source control and fall into the reactive DBA world (e.g. tweaked fill factors on indexes, statistics, etc.), and it can capture objects and settings that might not normally be created via a script (for example, mail profiles).

I wouldn't call the process here particularly onerous; I wrote the attached scripts in a day without extensive PowerShell experience.




Post #917724
Posted Friday, May 7, 2010 6:15 AM


SSCoach


mike.renwick-894639 (5/7/2010)
Thanks for +ve comments,

Anyone interested in sample code of what we did, take a look here
http://www.mediafire.com/?eim2mydnjti

Disclaimered as you'd expect.

Regarding the ongoing debate about "proper" source control usage, I think a lot of people are indeed missing something. I haven't seen anyone explain adequately how they prevent and detect accidents when rolling out changes to their production servers (e.g. accidental changes by authorized dbo-level users), nor how they can prove that what is in their source repository is exactly and only what is in their production environment.

There are no guarantees in life. But, in an attempt to meet the level of assurance you're asking for, we only deploy our code out of source control, and access to databases beyond development is controlled. There is no dbo-level access except for the DBAs. I wouldn't say we're at 100% compliance across all databases, but we're doing very well and have a high, call it 99%, assurance that what we think is in production is in production.

Another example of where this sort of automation is useful: by scripting to a folder and running a compare against a baseline, you can evaluate the performance and accuracy of a rollout by DBA staff, and also produce an exact list of the changes made in a rollout. Appending that list to your change control schedule shows that what was meant to be changed is exactly what was changed, capturing those last-minute "saves" or "tweaks" that would otherwise be invisible.

Finally, this also allows you to capture changes to more esoteric areas that in some environments are considered beyond the scope of a SQL developer's source control and fall into the reactive DBA world (e.g. tweaked fill factors on indexes, statistics, etc.), and it can capture objects and settings that might not normally be created via a script (for example, mail profiles).

I wouldn't call the process here particularly onerous; I wrote the attached scripts in a day without extensive PowerShell experience.


I agree as far as automation goes. Also, we make it a point of retaining the scripts that were generated for the final rollout to production, in case there's any question in the future what it was that got deployed.


----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #917851
Posted Friday, May 7, 2010 6:51 AM
SSC-Enthusiastic


Makes sense.

Of course, I will add that meeting that level of assurance is even more important in our environment, where there are only three IT professionals running the databases, none of us strictly speaking a pure DBA type (perhaps only one with any DBA experience at all... no prizes for guessing who). Unfortunately it's impractical for me to be the only person rolling out scripts, as I'm not always in the office, so these sorts of setups are extremely beneficial in our smaller shop, where a high level of assurance is required but the staff aren't there :)
Post #917895
Posted Friday, May 7, 2010 8:26 AM


Old Hand


Grant Fritchey (5/7/2010)

Also, we make it a point of retaining the scripts that were generated for the final rollout to production, in case there's any question in the future what it was that got deployed.

When we release to an environment we 'label' the files in PVCS so at a later date we can get any release that has gone into any environment just by getting the files by the relevant label.
Post #918005
Posted Friday, May 7, 2010 8:51 AM


Ten Centuries


jacroberts (5/7/2010)
Grant Fritchey (5/7/2010)

Also, we make it a point of retaining the scripts that were generated for the final rollout to production, in case there's any question in the future what it was that got deployed.

When we release to an environment we 'label' the files in PVCS so at a later date we can get any release that has gone into any environment just by getting the files by the relevant label.
We cut a branch (we use Perforce) at "code freeze". Contained in that branch is the application code (rich client, web tier, BLL, database changes, etc.). The schema creation (for new customers/databases) and update (for existing customers/databases) SQL scripts are then used for QA and eventual deployment. We also version-stamp the schema in each database, so the upgrade script will fail right out of the gate if it is applied to a database that is not at the correct schema version. ALL changes to the database are done via scripts, so we know exactly what is running in production.
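The version-stamp guard described above might look something like this. This is only a sketch; the dbo.SchemaVersion table, its column, and the version strings are hypothetical names, not the poster's actual implementation:

```sql
-- Hypothetical guard at the top of an upgrade script.
-- dbo.SchemaVersion is assumed to hold a single row with the current version.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE CurrentVersion = '2.3.0')
BEGIN
    -- Severity 20 (WITH LOG, requires sysadmin) terminates the connection,
    -- so none of the later batches run against the wrong schema.
    RAISERROR('Expected schema version 2.3.0; aborting upgrade.', 20, 1) WITH LOG;
END
GO

-- ...schema changes for version 2.4.0 go here...

UPDATE dbo.SchemaVersion SET CurrentVersion = '2.4.0';
GO
```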



(PHB) I think we should build an SQL database. (Dilbert) What color do you want that database? (PHB) I think mauve has the most RAM.
Post #918045
Posted Friday, May 7, 2010 8:59 AM


SSCoach


jacroberts (5/7/2010)
Grant Fritchey (5/7/2010)

Also, we make it a point of retaining the scripts that were generated for the final rollout to production, in case there's any question in the future what it was that got deployed.

When we release to an environment we 'label' the files in PVCS so at a later date we can get any release that has gone into any environment just by getting the files by the relevant label.


Yep, same process here with TFS.


Post #918054
Posted Friday, May 7, 2010 9:50 AM
SSC Veteran


mike.renwick-894639 (5/7/2010)
snip... how they can prove that what is in their source repository is exactly and only what is in their production environment. ...snip


One automatic way to do this is through database-level DDL triggers. That is, for every change, a trigger saves the code obtained from EVENTDATA(). The DDL trigger inserts these into an external database, which also collects similar changes from several databases. Then it's a matter of matching the contents of the field that stores the /EVENT_INSTANCE/TSQLCommand node of EVENTDATA() against the source.

See an earlier post on this thread for code showing a simple DDL trigger example using EventData().
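A minimal sketch of such a trigger is below. The trigger and table names are illustrative (the audit table would live in the central collection database just described), not the poster's actual code:

```sql
-- Minimal database-level DDL trigger that logs every schema change.
-- AuditDB.dbo.SchemaChangeLog is a hypothetical central audit table.
CREATE TRIGGER trg_LogSchemaChanges
ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
BEGIN
    DECLARE @e XML = EVENTDATA();
    INSERT INTO AuditDB.dbo.SchemaChangeLog
        (EventDate, EventType, ObjectName, TSQLCommand, LoginName)
    VALUES
        (GETDATE(),
         @e.value('(/EVENT_INSTANCE/EventType)[1]',   'NVARCHAR(128)'),
         @e.value('(/EVENT_INSTANCE/ObjectName)[1]',  'NVARCHAR(256)'),
         @e.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'NVARCHAR(MAX)'),
         @e.value('(/EVENT_INSTANCE/LoginName)[1]',   'NVARCHAR(128)'));
END;
```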

This solution has the side effect of constraining the source layout to one file per object in the db. There also has to be some consistency in naming db objects and their corresponding files in the VCS. Comparing in SQL itself is slow unless we use CLR routines; a CLR can do a fast diff of the source file against the contents of that field and produce a nice table of all objects in production that differ from the source.

This method also has the benefit of working with a variety of VCSs rather than being tied to a specific product. If you have multiple version control systems, create one CLR for each to encapsulate that VCS's diff commands. Within each VCS there can also be different ways to compare: binary, line-by-line, Unicode, etc. You can encapsulate these in the CLR as well. At various times we had to integrate VSS, SVN, Git, and Mercurial; each time only the CLR needed tweaking, while the rest of the SQL side and the VCS side stayed unchanged.

We can also answer the questions: which version of the source matches production? How many changes were made since then? Which objects had the most changes? Since all of these are tied to dates and times, simple undo and redo can be implemented as well.
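Assuming the trigger captures land in a central audit table (SchemaChangeLog and its columns are hypothetical names), those reporting questions reduce to simple aggregates, e.g.:

```sql
-- Which objects have changed most, and when were they last touched?
SELECT ObjectName,
       COUNT(*)       AS ChangeCount,
       MAX(EventDate) AS LastChanged
FROM dbo.SchemaChangeLog
GROUP BY ObjectName
ORDER BY ChangeCount DESC;
```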
Post #918097