Shipping Database Changes–T-SQL Tuesday #90


This is a good T-SQL Tuesday topic from James Anderson: shipping database changes. It’s especially pertinent for me, since I often talk and present on this topic. Much of my work at Redgate involves helping people implement DevOps for databases, deploying changes to them smoothly and efficiently.

I’ve got lots of material here, but let me go back to a previous position, many years ago, before TFS, when Visual SourceSafe (VSS) was in heavy use in development. We implemented DevOps before DevOps was a thing, and we included our database changes.

Smoothing Deployments

I worked at a startup company and we were trying to respond to requests from management and sales. As we grew from 4 developers to 10, plus a DBA (me) and a QA person, we needed to smooth out our process. Over a few months we got into a particular schedule:

  • Monday noon – package up all completed changes for deployment to QA.
  • Monday afternoon – discuss and plan changes for the following week.
  • Tuesday–Wednesday – bug fixes as needed.
  • Late Monday to the following Monday – write code for the next deployment.
  • Wednesday evening after 8 – deploy changes to production.

I had a foot in both the development and operations worlds, and I had to reconcile the need for production to work and remain stable with the need to deploy new changes every week. We were a young company, and our weekly package often included database changes.

When I started, we would talk to developers on Monday, start to gather all the changes from VSS for our web application, script out all database code, and then deploy to QA. Inevitably, some code would be forgotten or wrong, and we’d track it down sometime between Monday noon and Wednesday noon. Then we’d (hopefully) have a good folder of changes that we could deploy to production on Wednesday night, manually running .sql files or copying web code to remote servers.

The Old Process

When I started deploying code with this group, I’d get some dinner Wednesday afternoon and then return to the office with 2-3 other developers to deploy code. I usually had notes from various issues that QA had discovered in their testing, sometimes altering scripts in real time to ensure they would deploy and work correctly on the production database.

The web developers and I would get the code deployed in an hour or so each week, and things usually went well. We’d hit hiccups and issues, but we worked through them, and talent helped. It was a good team, and we could usually solve our own problems.

Getting Better

The lead developer and I both had little children at the time. Spending 12+ hours at work on Wednesday wasn’t an ideal situation for us, and we decided to get better.

The first thing we did was ensure that all code was tracked in VSS. Most of the web code was already there, but a few files always escaped capture, so we cleaned that up. I also added the database code to VSS using the well-known, time-tested, and proven File | Save, File | Open method of capturing SQL code. It took a few months, and a few deployment issues, to get everyone in the habit of modifying code this way. I refused to deploy code that wasn’t in VSS, and since our CTO was a former developer, I had support.

The other change was that the lead developer and I started building a release branch of the code each week. We’d move the changes that were going to be released over to this branch, which simplified our process: we could now see exactly which code was being deployed. This was before git and more modern branching strategies, but we could easily copy code from the mainline of development to the release branch as we made changes for that week.

Since some changes might be in development for a few weeks, we couldn’t just grab the latest version of every file. We needed to know which pages and which database changes would be released, and which were still in development.

Automation

Once we had the code tracked, we began to automate deployments. The web developer and I built separate scripting tools because we had different needs. Web code was easier: mostly just copying files to the correct locations. We had a few token-replacement issues, but some creative scripting solved those.
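The copy-with-token-replacement idea can be sketched in a few lines. This is a hypothetical illustration in Python rather than the scripting the team actually used; the token names, values, and file extensions are all assumptions.

```python
import shutil
from pathlib import Path

# Illustrative environment tokens; the real names and values are assumptions.
TOKENS = {"{{DB_SERVER}}": "PRODSQL01", "{{APP_ENV}}": "Production"}

def deploy_file(src: Path, dest: Path) -> None:
    """Copy one web file, substituting environment tokens in text files."""
    if src.suffix in {".asp", ".html", ".config"}:
        text = src.read_text()
        for token, value in TOKENS.items():
            text = text.replace(token, value)
        dest.write_text(text)
    else:
        shutil.copy2(src, dest)  # binary assets are copied verbatim
```

A deployment run would then just walk the release branch and call `deploy_file` for each file, writing into the target web root.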

For the database, I had a single application and a single database, which is a simple problem to solve. Deploying database changes always meant taking the latest version of each object’s code, which might be a CREATE or an ALTER. In my case, I separated the code into folders (tables, views, etc.) and added scripting to ensure that objects deployed in order. For the most part, a broad ordering worked (tables first, then views, then procs, etc.). A few items fell out of order, but hard-coded deployment checks handled those.
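The folder-ordered deployment with a hard-coded exception list might look roughly like this. The folder names, the example exception, and the function name are illustrative assumptions, not the original tool:

```python
from pathlib import Path

# Broad deployment order: tables first, then views, then procs.
DEPLOY_ORDER = ["tables", "views", "procs"]
# Hard-coded exceptions for out-of-order objects: run these before the rest.
# (The specific script here is a made-up example.)
EARLY_SCRIPTS = ["procs/usp_init.sql"]

def build_run_list(root: Path) -> list[Path]:
    """Return .sql scripts in deployment order, exceptions first."""
    early = [root / rel for rel in EARLY_SCRIPTS]
    ordered = []
    for folder in DEPLOY_ORDER:
        for script in sorted((root / folder).glob("*.sql")):
            if script not in early:
                ordered.append(script)
    return early + ordered
```

Each script in the resulting list would then be run against the target server in turn, so the broad ordering handles most dependencies and the exception list covers the rest.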

My additional challenge was managing QA, which served as the test environment for deployments as well as for the application. I added processes that would automatically refresh the QA environment from production; a single button click started the process. Once that was done, I’d use my deployment process to deploy the week’s database changes from VSS.

If we found issues in QA, and code changes were needed, we went back to development, made the changes there, committed to VSS, and then repeated the process. With automation and a small database, we could rebuild the QA environment with a new package in about 15 minutes.
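The refresh-then-redeploy cycle boils down to two commands. Everything below (server and database names, the backup path, the `deploy_changes.cmd` wrapper) is a placeholder assumption; only the two-step shape (restore production over QA, then apply the week’s scripts from version control) comes from the text.

```python
def qa_refresh_commands(backup_path: str, qa_server: str = "QASQL01",
                        database: str = "AppDb") -> list[list[str]]:
    """Return the command lines to rebuild QA: restore, then deploy."""
    restore_sql = (
        f"RESTORE DATABASE [{database}] FROM DISK = N'{backup_path}' "
        "WITH REPLACE, RECOVERY"
    )
    return [
        # Step 1: overwrite QA with a recent production backup.
        ["sqlcmd", "-S", qa_server, "-Q", restore_sql],
        # Step 2: run the week's change scripts pulled from version control
        # (a hypothetical wrapper around the ordered script runner).
        ["deploy_changes.cmd", qa_server, database],
    ]
```

A button-click front end would simply execute these two commands in sequence, which is why a small database could be rebuilt with a fresh package in about 15 minutes.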

Going Live

The first couple times we deployed to production, the lead developer and I still went into the office and made sure the deployments ran. After two weeks of watching our process run in about five minutes, we realized this was silly.

For over a year, we would continue to deploy changes every Wednesday. The lead developer and I would get on the phone every Wednesday at 8pm. We’d VPN to the office and I’d deploy database changes (5 minutes or so) and then he’d run his application scripts (2-3 minutes). We’d check a few things and then sign off.

Life got much better. We built confidence in our ability to deploy, which meant the business trusted us to get changes out to customers every Wednesday. Perhaps most importantly, we started working more normal hours rather than crazy startup schedules. We could even make quick patch deployments on a Thursday or Friday if needed, and because we could roll out changes smoothly, we were under less pressure to pile as much as possible into the next deployment. If something didn’t get done in time, it simply rolled to the next week.

Modern Tools

I work for Redgate Software now, and we’ve spent a lot of resources (and brainpower) trying to help you deploy code more easily. Deploying database changes is hard because we need to maintain the state of our system.

These days, I might still use a home-grown, scripted approach if I had one application and one database. However, I’d seriously consider other tools, because they mean my developers don’t spend time on simple scripting; they spend it solving application problems.


Filed under: Blog Tagged: deployment, DevOps, syndicated, T-SQL Tuesday

The Voice of the DBA

Steve Jones is the editor of SQLServerCentral.com and visits a wide variety of data related topics in his daily editorial. Steve has spent years working as a DBA and general purpose Windows administrator, primarily working with SQL Server since it was ported from Sybase in 1990. You can follow Steve on Twitter at twitter.com/way0utwest

Comments

Leave a comment on the original post at voiceofthedba.com.
