You Always Have a Software Pipeline

  • Comments posted to this topic are about the item You Always Have a Software Pipeline

  • Over the past decade my role has changed from DBA to Data Engineer.  Put bluntly, I don't think I could manually execute a deployment  of my stack alone, certainly not in the timescales that many organisations take for granted with a robust pipeline.

    With the pipeline in place I can deploy with confidence more frequently than hourly if need be.

In the old days there was a release document.  This documented a plethora of things:

• Every artefact to be deployed, with its version number
    • The steps to deploy them
    • The schedule for deploying them
    • Who was responsible for each part of the deployment
    • Checks to be run to confirm successful deployment
    • The time at which a decision had to be taken as to whether to continue with the deployment or rollback
• All of the above, except the last item, repeated for the rollback approach

This was why releases were a big event requiring many staff at 02:00 rather than taking place during normal 08:00-18:00 office hours.  Thinking back, it was amazing that we ever got anything successfully deployed, though we did, and surprisingly frequently.

We started to work out how to release in smaller chunks and to release without certain features being active.  That helped, though deployments became 05:00 or 06:00 events rather than normal office hours.

As Steve did, we wrote our own equivalent of Flyway for deployments and rollbacks and devised mechanical tests for our code/data deployments.  Reliability and speed of deployment increased.
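The core idea behind a Flyway-style tool can be sketched briefly: apply versioned migration scripts in order, and record which versions have already run so re-runs are safe. This is a minimal illustration, not Flyway's actual implementation; the `schema_history` table name and the in-memory SQLite database are assumptions for the example.

```python
import sqlite3

def apply_migrations(conn, migrations):
    """Apply versioned migrations in order, skipping any already applied.
    `migrations` is a list of (version, sql) tuples -- a stand-in for
    Flyway-style V1__, V2__ script files."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_history (version INTEGER PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_history")}
    for version, sql in sorted(migrations):
        if version in applied:
            continue  # already deployed; re-running the tool is a no-op
        conn.executescript(sql)
        conn.execute("INSERT INTO schema_history (version) VALUES (?)", (version,))
        conn.commit()

# Two example migrations; running the tool twice applies each exactly once.
conn = sqlite3.connect(":memory:")
migrations = [
    (1, "CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT);"),
    (2, "ALTER TABLE customer ADD COLUMN email TEXT;"),
]
apply_migrations(conn, migrations)
apply_migrations(conn, migrations)  # second run finds nothing new to do
```

The history table is what makes the mechanism idempotent, which is also what makes rollback scripts practical: they are just migrations run in the opposite direction.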

There were other benefits too.  There is no arguing with a mechanical system.  You comply with its rules, end of story.  Everyone's code/data is subject to an identical standard.  All engineers are treated fairly.

    Over time there has been a steady increase in the range of things that a pipeline can (and must) check as part of the deployment process.

    • Basic code linting
    • Code quality gateways
    • Unit/integration testing
    • Security vulnerability testing - this is becoming increasingly important
• Deployment testing to an environment set up specifically to test deployment
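The checks above amount to an ordered series of gates where any failure blocks the release. A minimal sketch of that control flow, with the stage names taken from the list (the callables are placeholder stand-ins for the real tools):

```python
def run_pipeline(stages):
    """Run named check stages in order; stop at the first failure.
    Each stage is a (name, callable-returning-bool) pair."""
    for name, check in stages:
        if not check():
            return f"FAILED at {name}"  # block the release at this gate
    return "DEPLOY OK"

# Placeholder checks standing in for linters, test runners, scanners, etc.
stages = [
    ("lint", lambda: True),
    ("quality-gate", lambda: True),
    ("unit-tests", lambda: True),
    ("security-scan", lambda: False),  # e.g. a flagged vulnerability
    ("deploy-test-env", lambda: True),
]
result = run_pipeline(stages)
```

Because each gate is just another entry in the list, adding a new organisational check is a one-line change rather than a process rollout.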

    If the pipeline needs to do something new then, once it is plugged in, every deployment from then on will benefit from the new facility.

The pipeline you build today delivers the increase in quality you will require of your deployments tomorrow.

    Think about what is involved in getting a new check into the organisational muscle memory of deployment.

  • This is a good reminder that any routine to move code or data from one place to another is a pipeline. However, I am tired of the manual pipeline approach used widely where I work. I'm involved in our third attempt to adopt a more modern DevOps approach, with repeatable and reliable pipelines. However, I fear that people's resistance to change will cause this attempt to fail like the previous two. I've asked why people are so resistant to adopting a DevOps approach but have never received an answer, so I'm left to speculate on my own. My guess is that for some there's a fear that a DevOps approach will eliminate their jobs. In other cases, I think it's a desire for all processes to remain fixed, at least until they leave or retire years in the future.

    Kindest Regards, Rod
    Connect with me on LinkedIn.
