Always Canary

  • Comments posted to this topic are about the item Always Canary

  • I've been looking at a product called "Docker", which allows you to spin up a pre-configured machine. One of the guys in the DevOps team used it to demonstrate the ability to spin up an entire software stack in seconds.

    The implications of the technology are profound. If you have a large number of tests that must be run under conflicting scenarios, you could spin up many instances of your application and test in parallel rather than having to schedule them separately and in a linear fashion. Being able to do this dramatically expands the scale and scope of the tests you can run.

    It also introduces the concept of immutable machines. You don't patch machines; you create a machine of a known configuration, deploy your application to it, and test it to destruction. You then have a machine of a known working configuration on which to base your Docker instances. The technology can make it practically impossible to have discrepancies between QA and production environments, because you fundamentally deploy exactly the same thing to both.

    As ever, data poses challenges that simply don't exist in applications. Some of these (such as populating the database) are surmountable with a bit of head scratching. Others, such as generating realistic and realistically distributed data, are tricky in the extreme.

  • David.Poole (12/3/2014)


    I've been looking at a product called "Docker", which allows you to spin up a pre-configured machine...

    Microsoft has announced that they are going to support Docker and use it for application delivery/hosting. Knowing MS, as we all do, I fully expect them to use it for far more purposes than that. Hopefully not overly so.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • I am sure that many people have experience of using feature flags, albeit possibly under a different name, going back a long time. I used them decades ago as a junior programmer (i.e. I was just doing as I was told, and it was already a long-standing practice).

    In development over the last decade there has been a popular technique of dynamically loading modules based on configuration, thus allowing changes to systems at runtime. Updated features and even entirely new features have been introduced to systems without the need to change the rest of them.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • In our organization we have several VMs with databases restored from production. They include a Local Development Server, Continuous Integration (CI), QA, UA, Mock, Test, Sandbox, and DeployableBackup. All of these VMs are updated through TFS Build processes. When we make a schema change to the database, the change is tested, as a minimum, on our own development server, where we compile the change and publish to our local server. QA and UA are built using TFS Build and are tested by QA and users respectively. Our build process then takes the schema objects and scripts and builds them on production. DeployableBackup is a backup from production with several tables truncated for use on development machines.

    Developers can restore from any given platform for troubleshooting purposes.
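    For illustration, a minimal sketch of what such a restore might look like in T-SQL; the database name, logical file names, and paths below are made-up examples, not the poster's actual configuration:

    ```sql
    -- Restore a copy of a platform backup onto a development machine for troubleshooting.
    -- All names and paths here are hypothetical.
    RESTORE DATABASE AppDb
    FROM DISK = N'\\buildshare\backups\AppDb_QA.bak'
    WITH MOVE N'AppDb'     TO N'C:\SQLData\AppDb.mdf',
         MOVE N'AppDb_log' TO N'C:\SQLData\AppDb_log.ldf',
         REPLACE;  -- overwrite the developer's existing copy
    ```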

    Large organizations with enterprise licenses from Microsoft have TFS available. This may be an effective solution to the issue of live database schema changes, as the changes may be easily rolled back without data loss.

  • My applications handle these issues as follows:

    1. tblSetting contains columns set_name, set_value (and of course set_id*) to store settings that turn features on/off in the application, allowing the application to work differently for different customers simply by updating the settings in their database.

    2. I also store the version number of the application in tblSetting. At startup, the application compares its version to the one stored in tblSetting, and if the database is an older version, the program knows it needs to run certain functions containing DDL/DML to bring the database up to date. In most instances this can happen in a live environment, with users actively using the system, and without the need for a second database. Occasionally, major updates require asking users to exit the system for a few minutes. A sketch of both pieces follows this list.
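    A minimal T-SQL sketch of the two pieces above, assuming set_id is the primary key (the asterisk) and that the version lives in a hypothetical 'SchemaVersion' row; the flag name and upgrade step are also hypothetical, and comparing version strings directly is a simplification that only holds while each part is a single digit:

    ```sql
    -- Settings table as described: name/value pairs that change behavior per customer.
    CREATE TABLE tblSetting (
        set_id    INT IDENTITY(1,1) PRIMARY KEY,
        set_name  VARCHAR(100) NOT NULL UNIQUE,
        set_value VARCHAR(255) NOT NULL
    );

    INSERT INTO tblSetting (set_name, set_value)
    VALUES ('EnableAdvancedReports', '1'),   -- hypothetical feature flag
           ('SchemaVersion', '2.3');         -- hypothetical version row

    -- At startup the application compares its own version to the stored one
    -- and runs upgrade DDL/DML if the database is older.
    DECLARE @dbVersion VARCHAR(255);

    SELECT @dbVersion = set_value
    FROM tblSetting
    WHERE set_name = 'SchemaVersion';

    IF @dbVersion < '2.4'  -- naive string compare; fine only for single-digit parts
    BEGIN
        -- Example upgrade step: widen a column in place.
        ALTER TABLE tblSetting ALTER COLUMN set_value VARCHAR(500) NOT NULL;

        UPDATE tblSetting
        SET set_value = '2.4'
        WHERE set_name = 'SchemaVersion';
    END;
    ```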

    I naturally assumed this was how everyone was handling these scenarios. Seems like the best approach when your product is not centrally located/cloud based, and it has worked very well for me.

  • SQL Server will typically require a schema lock when altering tables, which can be problematic if the deployment involves altering multiple tables and the changes need to be applied in tandem while users are active. Wrapping the whole thing in a transaction will ensure all or nothing, but with active users currently accessing the database there is a high probability that I'll be blocking them, they'll be blocking me, or both (!!!) for an extended period of time.
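    As a sketch of that all-or-nothing transaction (the table and column names below are hypothetical; SET LOCK_TIMEOUT is one way to cap how long the script itself waits on blocked schema locks):

    ```sql
    SET XACT_ABORT ON;      -- any error, including a lock timeout, rolls back everything
    SET LOCK_TIMEOUT 5000;  -- wait at most 5 seconds per lock instead of blocking indefinitely

    BEGIN TRANSACTION;

        -- Hypothetical paired changes that must land together.
        ALTER TABLE dbo.OrderHeader ADD PromoCode VARCHAR(20) NULL;
        ALTER TABLE dbo.OrderDetail ADD PromoDiscount DECIMAL(9,2) NULL;

    COMMIT TRANSACTION;
    ```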

    Honestly, there are occasions where I'll just send out a notification earlier in the day informing all users belonging to an email group that there *might* be a temporary outage later in the evening, and then I'll begin the deployment off-hours by setting the database to RESTRICTED_USER mode WITH ROLLBACK for all active user connections. All things considered, that actually ensures the least downtime. If someone's late-night query aborts, they'll usually just shrug it off and start it again the next morning. However, if a planned deployment doesn't happen (or gets botched halfway), then I have to explain that to management.
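    A sketch of that off-hours sequence (the database name is hypothetical; WITH ROLLBACK IMMEDIATE is the variant that rolls back active connections right away):

    ```sql
    -- Kick non-privileged users out, rolling back their open transactions.
    ALTER DATABASE SalesDb SET RESTRICTED_USER WITH ROLLBACK IMMEDIATE;

    -- ... run the deployment scripts here ...

    -- Reopen the database to everyone once the deployment succeeds.
    ALTER DATABASE SalesDb SET MULTI_USER;
    ```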

    BTW, what does the title "Always Canary" mean? Like the bird, or the islands?

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • David.Poole (12/3/2014)


    I've been looking at a product called "Docker", which allows you to spin up a pre-configured machine. One of the guys in the DevOps team used it to demonstrate the ability to spin up an entire software stack in seconds.

    Docker is very, very cool.

  • Gary Varga (12/3/2014)


    I am sure that many people have experience of using feature flags, albeit possibly under a different name, going back a long time. I used them decades ago as a junior programmer (i.e. I was just doing as I was told, and it was already a long-standing practice).

    In development over the last decade there has been a popular technique of dynamically loading modules based on configuration, thus allowing changes to systems at runtime. Updated features and even entirely new features have been introduced to systems without the need to change the rest of them.

    I've rarely seen this. It's a feature that's used, but not often. Far, far too many features never get flagged.

  • Running DDL against a production database is like changing a tire while the car is still moving; the data and the schema are that tightly integrated. In my old DBA and development days, we all wanted a way to upgrade, test, and roll back if necessary without taking the system offline. If we could come up with a fix, we'd be rich.

    The more you are prepared, the less you need it.

  • Why is the article titled "Always Canary"?

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • I believe Steve is referencing the old 'canary in a coal mine': miners would test air quality using a bird, and if it died or became distressed, it was time to get the "heck" out of there!

  • John Hanrahan (12/3/2014)


    I believe Steve is referencing the old 'canary in a coal mine': miners would test air quality using a bird, and if it died or became distressed, it was time to get the "heck" out of there!

    yes

  • Steve Jones - SSC Editor (12/3/2014)


    John Hanrahan (12/3/2014)


    I believe Steve is referencing the old 'canary in a coal mine': miners would test air quality using a bird, and if it died or became distressed, it was time to get the "heck" out of there!

    yes

    So the passive node or replica is turned on and gets updated first, acting as the metaphorical "canary". I'm guessing it then gets used as primary, perhaps in read-only mode, while the usual primary gets updated.

    Minimal Disruption for Azure SQL Database During Application Rolling Upgrades

    http://msdn.microsoft.com/en-us/library/azure/dn790385.aspx

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • In the case of DB updates/changes, if the passive system gets 'distressed' then you can fix it. If you put it into production and it fails, then you 'die' and need to find another job... 😀
