No State Based Deployments

  • Comments posted to this topic are about the item No State Based Deployments

  • The simplicity of deploying software is that there is no modification; you create, replace or remove a file. There are slightly more complex scenarios, registry settings for example, but it is still a simple, uncomplicated create, replace or remove of an item.

    It sounds obvious (mainly because it is): rarely does software need to consider data in the same way as an RDBMS.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Even if you are using 'no-state' structural database changes, managed in my case by SQL Compare, there are often a variety of additional scripts to run - setting up or modifying control data, doing a bulk update to data because field semantics have changed, and so on. Such releases are then a hybrid.

    In my case, where several target systems run the same software but hold different control data values, the control data update scripts sometimes differ between target systems, adding to the complexity of releases.
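
    To make that concrete, a control-data step in such a hybrid release might look like the sketch below. The table, setting name and SQLCMD variable are invented for illustration; the variable is what lets one script carry a different value per target system.

        -- Idempotent control-data script (dbo.ControlData, the setting name
        -- and $(BatchSize) are invented for illustration). Safe to re-run;
        -- the SQLCMD variable gets a different value per target system.
        IF EXISTS (SELECT 1 FROM dbo.ControlData WHERE SettingName = N'InvoiceBatchSize')
            UPDATE dbo.ControlData
            SET    SettingValue = N'$(BatchSize)'
            WHERE  SettingName  = N'InvoiceBatchSize';
        ELSE
            INSERT dbo.ControlData (SettingName, SettingValue)
            VALUES (N'InvoiceBatchSize', N'$(BatchSize)');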

  • We are trying to work out how to move from a "working directly on our single production platform" setup to a "working on a dev/test platform and then deploy to production" setup. I am struggling with how, using any methodology, we can handle all the changes needed within the code base that reflect the differences between dev and production. For example, T-SQL code that references sprocs or data on linked servers, or platform-specific Service Broker services, needs to be altered during deployment. And this needs to be done reliably, repeatably and "automagically", otherwise all the benefits of deploying a known, tested code base will be lost to us. The whole idea of "what's going to production is what we just signed off on dev/staging" (call it what you like) is crucial.

    A simple comparison of "what is on prod" vs. "what is on dev" just won't work. But what would?

  • We go with both approaches in different circumstances: state-based typically working between dev and stage, migration-based to production for anything very complex.

    Typically, though, if we are going that route, we will dry-run the migration scripts on a copy of the live database and then use SQL Compare to verify we have caught everything - I'd call it the best of both.
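
    The dry run itself is nothing exotic - restore a copy of live under another name, run the scripts, then compare. Roughly (database name, logical file names and paths below are placeholders):

        -- Restore a copy of the live database for the dry run (names and
        -- paths are placeholders; logical names come from RESTORE FILELISTONLY).
        RESTORE DATABASE SalesDb_DryRun
        FROM DISK = N'\\backupshare\SalesDb_full.bak'
        WITH MOVE N'SalesDb'     TO N'D:\Data\SalesDb_DryRun.mdf',
             MOVE N'SalesDb_log' TO N'L:\Log\SalesDb_DryRun.ldf',
             REPLACE, STATS = 10;
        -- ...run the migration scripts against SalesDb_DryRun, then point
        -- SQL Compare at it to verify nothing was missed.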

  • mhtanner (3/2/2015)


    ...

    Such releases are then a hybrid.

    I think this is also why, in practice, every release ends up a hybrid.

  • During the last 5 years in an SAP support role, I've often thought that a SQL Server version of the SAP transport management process would be extremely useful. It tracks program changes, table changes, and in some cases data changes. There are regression checks, the migration path is configurable, and the changes that get moved to production are not impacted by any testing done in acceptance. The concepts are worth looking at.

  • I found that SQL Compare and SQL Data Compare were quite good, but the complexities of an environment making heavy use of replication made them difficult to use.

    I think if you are going to use a toolset, you have to recognise that you must engineer your solutions to be capable of using those tools. If you are going to use a cross-head screwdriver, then you are going to have to use cross-head screws.

    My initial thoughts were that the compromises required to make the tools usable would hamstring a DBA, but experience has shown that the result is a cleaner solution. It's somewhat similar to the concept of treating servers as cattle rather than pets.

    I've also used tools such as DBDeploy. When they work they are great. The problem is that when a deployment breaks halfway through (it could be a network glitch) you are left in no man's land, with scripts that are suitable for neither a forward fix nor a rollback.
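
    One partial mitigation is to make each script atomic, so a mid-script failure rolls everything back rather than stranding you halfway. A sketch of the pattern (object names invented for illustration):

        -- SET XACT_ABORT ON makes any runtime error (including a dropped
        -- connection) roll back the open transaction instead of leaving
        -- the database half-migrated. Names are invented for illustration.
        SET XACT_ABORT ON;
        BEGIN TRANSACTION;

        ALTER TABLE dbo.Orders ADD ShippedDate datetime2 NULL;
        EXEC sys.sp_rename 'dbo.Orders.IsSent', 'IsShipped', 'COLUMN';

        COMMIT TRANSACTION;
        -- Caveat: not every operation is transactional (full-text changes,
        -- for example), so this helps but doesn't remove the problem entirely.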

  • We have a hundred-plus customers, with our software and databases installed on-site.

    All but a few of our customers are on the same major and point release.

    We have a build process that can build any version of our database, and upgrade from one version to the next, usually (but not always) without intervention. We have three types of builds - installation, upgrade, and point release (software only - no database changes).

    The "state" approach wouldn't work for us - when we completely replace a feature, we typically do it with new tables, and move the data over.
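
    A sketch of what one of those feature-replacement migrations looks like (table and column names invented for illustration):

        -- Create the replacement structure, copy the data across, and keep
        -- the old table until the release is verified (names are invented).
        CREATE TABLE dbo.CustomerPreference_v2
        (
            CustomerId int           NOT NULL,
            PrefKey    nvarchar(50)  NOT NULL,
            PrefValue  nvarchar(200) NULL,
            CONSTRAINT PK_CustomerPreference_v2 PRIMARY KEY (CustomerId, PrefKey)
        );

        INSERT dbo.CustomerPreference_v2 (CustomerId, PrefKey, PrefValue)
        SELECT CustomerId, SettingName, SettingValue
        FROM   dbo.CustomerPreference;   -- the old structure

        -- dbo.CustomerPreference is dropped in a later release, once the
        -- replacement feature has proven itself in the field.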

    It can be complicated, but it gets tested continually as developers add new features and I add new data structures and migration scripts. The check-in triggers a build, and some basic smoke tests get run as part of that automated process.

    And as the QA group does their testing, I use backups of our more complicated customer databases to do a final run-through with the completed code to make sure there are no additional gotchas.

    We used to use a state-based approach to compare one or two tables across versions to generate a migration script, but it quickly broke down even with that limited usage, as identifiers between the two versions started to drift.

    Having scripts to do the migration means that it's easier to test and more easily automated.

    I think we're fortunate in that we started with this approach early on, when we only had a few customers and a few developers. Without processes like these, we'd be mired in long-running, error-prone upgrades. The more successful we got, the more it would cost us.

  • Bob Cullen-434885 (3/2/2015)


    We are trying to work out how to move from a "working directly on our single production platform" setup to a "working on a dev/test platform and then deploy to production" setup. I am struggling with how, using any methodology, we can handle all the changes needed within the code base that reflect the differences between dev and production. For example, T-SQL code that references sprocs or data on linked servers, or platform-specific Service Broker services, needs to be altered during deployment. And this needs to be done reliably, repeatably and "automagically", otherwise all the benefits of deploying a known, tested code base will be lost to us. The whole idea of "what's going to production is what we just signed off on dev/staging" (call it what you like) is crucial.

    A simple comparison of "what is on prod" vs. "what is on dev" just won't work. But what would?

    Bob, we went through something like this (though without a lot of Service Broker needs) and started doing local development (install SQL Developer Edition locally) with SSDT SQL projects. I've blogged about our process here. That may give you some thoughts. SSDT has its quirks (read Jamie Thomson's blog on his SSDT/VSDB/Datadude experiences), but it works well for us to ensure repeatable releases. We do have to manage data changes in a repeatable way ourselves, but the schema tends to work out pretty well.
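
    For the linked-server references specifically, one approach SSDT supports is a SQLCMD variable hidden behind a synonym: each environment's publish profile supplies the value, so the procedure code itself never changes. Something like this (all names and the variable are illustrative):

        -- A synonym hides the environment-specific linked server behind a
        -- stable local name. $(RemoteServer) is a SQLCMD variable whose
        -- value comes from each environment's publish profile.
        CREATE SYNONYM dbo.RemoteOrders
            FOR [$(RemoteServer)].[SalesDb].[dbo].[Orders];

        -- Procedures reference dbo.RemoteOrders, so the code signed off on
        -- dev/staging deploys to production unchanged.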

    I remember one release gone bad where the devs swore they had created all-new columns when my compare showed a bunch of differences. They meant that they'd renamed existing columns, but that wasn't caught until after the release, when we had dropped and created a bunch of new columns. SSDT has a "refactor" ability so you can rename within the same database without too much trouble. If your dev branch and DB branch are aligned, you should find it fairly easy to keep everything in sync when you release. I would recommend tagging your prod releases somehow, though. That will help if you ever need to roll back your schema changes to what was there before.
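
    That's the danger in a nutshell: a compare tool sees a rename as drop-and-add, which destroys the column's contents, whereas an explicit rename keeps them. For illustration (table and column names invented):

        -- What the devs intended - a rename that preserves the data. A
        -- blind schema compare instead generates DROP COLUMN + ADD COLUMN.
        EXEC sys.sp_rename 'dbo.Customer.Surname', 'LastName', 'COLUMN';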

    Obviously this isn't the best/easiest approach for everyone (see David in BC's post) but has helped us quite a bit in our releases where we don't have to worry as much about on-site releases.

  • Migration-based deployments track each change to a database in a separate script and then execute each script, in order, on your production database. This requires discipline and is what most people find difficult. Scripts get quite numerous, ordering is hard, and deployments become scary and largely unsuccessful without a lot of care. The scripts require tweaks and fixes to work efficiently, and you need smart DBAs and developers.

    What we do are migrations, but they are incremental migrations. Each database developer manages their own deployment script and attaches it to the TFS ticket.
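
    One way to keep those per-ticket scripts safe to re-run is a guard against a change-log table; a sketch (dbo.SchemaChangeLog and the ticket id are invented for illustration):

        -- Re-runnable per-ticket script: record each applied change in a
        -- change-log table (the TFS ticket number doubles as the script id).
        IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog WHERE ScriptId = N'TFS-4821')
        BEGIN
            ALTER TABLE dbo.Invoice ADD CurrencyCode char(3) NULL;

            INSERT dbo.SchemaChangeLog (ScriptId, AppliedAt)
            VALUES (N'TFS-4821', SYSUTCDATETIME());
        END;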

    Doing a schema compare between development and production and then generating a difference script - that's what's scary to me.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell (3/2/2015)


    ...

    Doing a schema compare between development and production and then generating a difference script - that's what's scary to me.

    Many of us bear those sorts of scars, Eric 🙂

    Mostly, when it comes to production deployment, I think schema compare should be used to verify everything is correctly aligned before letting it go wild. I'm disinclined to accept it unalloyed as a deployment tool in those cases. Moving to stage or between dev rigs is fine; prod, less so.

  • Surely there must be a pre-production environment to test the deployment of database changes???

    I know that for some small shops this is a luxury that can rarely be afforded, but to counterbalance that, it is usually acceptable to have scheduled downtime for updates.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Peter Schott (3/2/2015)


    Bob, we went through something like this ...

    Peter - Thanks for that. I'll have a good read through. This is not turning out to be as straightforward as we thought it ought to be :ermm:

  • Bob JH Cullen (3/3/2015)


    Peter Schott (3/2/2015)


    Bob, we went through something like this ...

    Peter - Thanks for that. I'll have a good read through. This is not turning out to be as straightforward as we thought it ought to be :ermm:

    It never is. 🙂 There are other options out there, too. DbUp integrates more into the coding area and is usually written to include rollbacks as well. DBDeploy lets you stage a bunch of scripts and handles pushing them; I think there's a way to specify rollbacks there as well, but I never tried to use it too heavily. I don't know how well either of those would work if you ever needed to cherry-pick DB changes. That gets tricky very fast.

    Similar to SSDT, Red Gate has SQL Source Control. If you're in the Red Gate environment, that can be useful. If I understand how it works, you will need to make sure your local/dev database is set to the right schema so that SQL Source Control captures the changes correctly. Switching branches could be problematic there unless you're editing the files themselves.

    I'm writing from my own experience, so I'm definitely biased towards SSDT/Datadude, but I realize it's not for everyone and it can be tricky to get started. Once you get set up, though, it's well worth the effort.
