Continuous Delivery In Real Life

  • Comments posted to this topic are about the item Continuous Delivery In Real Life

  • I'm glad you mentioned the culture change and emphasised the need to work on the processes. The tools and technology are there already.

    When you say continuous delivery do you mean all the way to production? I have friends and colleagues who use CD to environments beyond the CI environment but not all the way to production.

    It takes discipline to be able to do this. For example, a robust strategy around source control: only check in working code, check in frequently, get your branch/merge strategy honed, etc. One recommendation I was given was to treat the source control trunk as sacrosanct: it should be the code that is in production.

    Source control and TDD/BDD are crucial to getting it all working.

  • At the Compass Group we use Continuous Integration to ensure the stability of our code base for both application and database projects. Although I was not comfortable with it at first, I have found the process very effective, as the developer gets an email and a bug in TFS if the project code he or she checks in fails to compile on the CI server. This ensures that everyone working on the project can get clean code when pulling down the latest version of the project for development.

  • It took us an average of 3 to 5 years to develop a high-end video game with online, large-scale multiplayer (100+ players simultaneously) capabilities in a persistent (changing) world.

    This was the standard for many companies. Some even took longer than that. Some took as long as 7 to 10 years before they had something they could ship.

    Today, a lot of companies, especially in Asia and Europe, are able to do this in a much shorter time frame on multiple platforms. This is because of how far technology has matured in the past 10 years. Developers are able to use tools that basically allow them to push out solid builds of a product within a day and digital slices of games within weeks.

    But that does not mean it should happen either. The rush to get a product out the door normally comes from a demand that isn't attached to the good engineers and designers making the product. In my experience, it comes from those investing in, publishing and ultimately selling the product.

    Because of that, bugs, unfinished content, broken systems and much more are quite frequent. Agility and speed are certainly attainable, but that doesn't mean you should chase them in cases where speed leads to a decrease in quality and, most importantly, in the maturity of the product. You're basically trying to rush your product from childhood to adulthood in order to get it out of the house, into college and into the real world making money for you.

  • I don't have any experience in this area but the concept sounds pretty scary. Changed code means changed behavior. And even if it passes the testing (can you really test in a day??), totally unanticipated issues come up when you hit the real world.

    In early development it might be useful, but in production???

    ...

    -- FORTRAN manual for Xerox Computers --

  • We actually have a pretty good process, with daily builds in our development environment and scheduled builds in our testing and staging environments prior to release to production about once every four to five weeks. Where we stumble is that our testing just isn't comprehensive enough yet. Problems still slip through to production.

  • Where I've been working for the past few years, we develop ETL and other supporting applications for a data warehouse. The source data originates from about 200 clients, the ingest files are not all standardized, and there are client-specific metrics, programming, and custom reporting. Therefore, an XP development lifecycle and Continuous Delivery are not only routine but required. We just can't bundle all our various micro projects into one deliverable. At any given moment they have to be integrated, because a small change for one client can potentially break the entire ETL process for all clients if it's not planned, coded, and tested properly.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • jay-h (7/16/2015)


    ...even if it passes the testing (can you really test in a day??)

    With certain caveats yes you can.

    The idea is that you write the tests that express the business requirement and the technicalities of satisfying that business requirement. You then write the code to pass the tests.

    For example, if the business requirement is to enter a date of birth for a person aged from 18 up to 120 years old, then you need tests for valid dates, null values, leap year maths, etc.

    As long as those tests run as part of the build, you can test everything in a remarkably short space of time. You can run several thousand tests in minutes, so if anything your app is more thoroughly tested.

    There are tools for user interface testing, data testing and infrastructure testing. Once you start down the automated test route you find that you explore new opportunities to test that you wouldn't have previously attempted.
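
    To make the date-of-birth example above concrete, here's a rough sketch of what those tests might look like. It's purely illustrative: the function name validate_dob, its signature and the pytest-style test names are assumptions, not anyone's actual test suite.

    ```python
    # Hypothetical sketch of tests for the rule "date of birth for a person aged 18 up to 120".
    # validate_dob and its signature are invented for illustration.
    from datetime import date

    def validate_dob(dob, today):
        """Return True if dob represents a person aged 18 to 120 on `today`."""
        if dob is None:
            return False
        # Whole years of age: subtract one if the birthday hasn't happened yet this year.
        age = today.year - dob.year - ((today.month, today.day) < (dob.month, dob.day))
        return 18 <= age <= 120

    def test_rejects_null():
        assert not validate_dob(None, date(2015, 7, 16))

    def test_accepts_exactly_18():
        assert validate_dob(date(1997, 7, 16), date(2015, 7, 16))

    def test_rejects_just_under_18():
        assert not validate_dob(date(1997, 7, 17), date(2015, 7, 16))

    def test_accepts_exactly_120():
        assert validate_dob(date(1895, 7, 16), date(2015, 7, 16))

    def test_rejects_over_120():
        assert not validate_dob(date(1894, 7, 15), date(2015, 7, 16))

    def test_leap_year_birthday():
        # Born 29 Feb 2000: still 17 on 28 Feb 2018, 18 from 1 Mar 2018 under this rule.
        assert not validate_dob(date(2000, 2, 29), date(2018, 2, 28))
        assert validate_dob(date(2000, 2, 29), date(2018, 3, 1))
    ```

    A handful of tests like these run in a fraction of a second, which is why a build that runs thousands of them can still finish in minutes.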

  • It took us an average of 3 to 5 years to develop a high-end video game with online, large-scale multiplayer (100+ players simultaneously) capabilities in a persistent (changing) world.

    This was the standard for many companies. Some even took longer than that. Some took as long as 7 to 10 years before they had something they could ship.

    Today, a lot of companies, especially in Asia and Europe, are able to do this in a much shorter time frame on multiple platforms. This is because of how far technology has matured in the past 10 years. Developers are able to use tools that basically allow them to push out solid builds of a product within a day and digital slices of games within weeks.

    Other factors contribute greatly to this too. 20 years ago, Microsoft wrote most of its own code. Now, it acquires tech and integrates it. That saves tons of development time.

    In the game industry 20 years ago, shops often created their own engines. Now, games are based on engines created by other companies and licensed from them.

    Both of these make turnaround on a new project much more rapid. Plus, a lot of the tools now used make the game development process much easier than ever.

    Plus, the movement to web-based apps makes the cross-platform aspect much easier to achieve.

    But that does not mean it should happen either. The rush to get a product out the door normally comes from a demand that isn't attached to the good engineers and designers making the product. In my experience, it comes from those investing in, publishing and ultimately selling the product.

    Because of that, bugs, unfinished content, broken systems and much more are quite frequent. Agility and speed are certainly attainable, but that doesn't mean you should chase them in cases where speed leads to a decrease in quality and, most importantly, in the maturity of the product. You're basically trying to rush your product from childhood to adulthood in order to get it out of the house, into college and into the real world making money for you.

    Indeed. And, one of the biggest factors for pushing out the door ASAP is ROI. Companies want it out the door and selling to offset the cost of upkeep and enhancement, and to be able to report that the revenue stream is flowing.

    I agree with you whole-heartedly. A product should be both mature and well-vetted before release. But unfortunately, today's business world has pushed the tech industry to ship products before they're really ready in an effort to start the ROI train rolling, and then "mitigate risk" by handling functional and security issues that present themselves down the line.

    But, a lot of things have become of lesser quality than they could be due to a "take what you can get" sense of consumerism...and not just in the tech world unfortunately.

    If consumers would more readily reject what isn't exactly what they need or what doesn't work right, manufacturers would have no option but to make sure quality and maturity are there before releasing their wares.

  • @David.Poole

    Yes, all the way to production. This doesn't mean that a developer can release. A few people with a CD process still have gates and checks in the way. Continuous doesn't mean everything gets released or anyone can release. There are still ways for admins/DBAs to double check and schedule releases. It just means that you potentially could release any particular build of software.

    Automated also doesn't mean hands off. There can be approvals, but the actual release is automated after it's been approved.
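
    As a toy sketch of that distinction (not SSC's actual pipeline; the names below are invented): every green build is a release candidate, but nothing is deployed until someone approves it, and the deployment step itself is scripted.

    ```python
    # Toy illustration: "continuous" means every green build *could* ship;
    # "automated" means the rollout is scripted once a human approves it.
    # Build, releasable and deploy are made-up names, not a real tool's API.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Build:
        number: int
        tests_passed: bool
        approved_by: Optional[str] = None   # e.g. a DBA or release manager sign-off

    def releasable(build: Build) -> bool:
        # Any build that passed CI is a candidate for release.
        return build.tests_passed

    def deploy(build: Build) -> str:
        if not releasable(build):
            raise ValueError(f"build {build.number} never went green in CI")
        if build.approved_by is None:
            raise PermissionError(f"build {build.number} has not been approved")
        # The actual rollout (copy artifacts, run migrations, etc.) would be scripted here.
        return f"deployed build {build.number} (approved by {build.approved_by})"

    print(deploy(Build(412, tests_passed=True, approved_by="dba_team")))
    ```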

  • xsevensinzx (7/16/2015)


    ...

    But, that does not mean it should happen either. The rush to get a product out the door is normally due to a demand not attached to the good engineers and designers making the product. This is a need from those investing, publishing and ultimately selling the product from my experience.

    Due to that, bugs, unfinished content, broken systems and much more are quite frequent. Agility and speed of a product is certainly attainable, but doesn't mean you should in cases where speed can lead to a decrease in quality and most importantly, maturity of the product. That's because you're basically trying to rush your product from childhood to adulthood in order to get them out the house, into college and into the real world making money for you.

    Very true. Just because you can do something doesn't mean you should. I always tell people that CI is useless without tests. It also doesn't work unless you start adding tests that catch the bugs that appear in later systems.

    CI/CD also doesn't mean no QA. Your CI builds that pass automated tests should then be available for QA people to run further tests, whether complex Selenium (or other) tests, or manual stuff. You need to be sure that you are producing something that does work.

    It's a good idea, but like anything, some will abuse it to try and make a few $$.
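
    To give a flavour of the kind of Selenium check mentioned above, here's a bare-bones sketch in Python. It's purely illustrative: the URL, element IDs and expected page title are placeholders, not a real application.

    ```python
    # Hypothetical Selenium smoke test run against a build that already passed CI.
    # The URL, element IDs and expected page title are placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_login_smoke():
        driver = webdriver.Chrome()   # assumes chromedriver is on the PATH
        try:
            driver.get("https://app.example.test/login")          # placeholder URL
            driver.find_element(By.ID, "username").send_keys("qa_user")
            driver.find_element(By.ID, "password").send_keys("not-a-real-password")
            driver.find_element(By.ID, "login-button").click()
            # Crude check that the post-login page actually loaded.
            assert "Dashboard" in driver.title
        finally:
            driver.quit()
    ```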

  • jay-h (7/16/2015)


    I don't have any experience in this area but the concept sounds pretty scary. Changed code means changed behavior. And even if it passes the testing (can you really test in a day??), totally unanticipated issues come up when you hit the real world.

    In early development it might be useful, but in production???

    CD doesn't mean every change is released. If we did this with SSC (and I so, so, so wish we did), we could have code written today for the QotD system. It might pass CI tests. We might also have code that was written yesterday for the Scripts area. Today we could choose to release the Scripts code, but not the QotD stuff. Maybe we hold that in QA for a few days and it goes out next week.

    This means a few things. First, that I can make changes quickly, and potentially release them. Maybe I stack up changes and don't release because I don't want to disrupt users, or maybe I release often, but with very, very small changes.

    This also means if I break something, I can fix it with a new release quickly. How fast can you send a patch to users? I don't mean have the DBA go change production or the developer xcopy DLLs around. I mean make a patch, run it through your testing process, and get it to users. CD makes that fast.

  • Iwas Bornready (7/16/2015)


    We actually have a pretty good process, with daily builds in our development environment and scheduled builds in our testing and staging environments prior to release to production about once every four to five weeks. Where we stumble is that our testing just isn't comprehensive enough yet. Problems still slip through to production.

    I tackle this by writing a new test for the bug that got to production. Then when you develop the patch, it has a test. And you don't make that mistake again.

    Tests shouldn't be for everything, but for the things you do poorly. The bugs you (or your specific team) create.
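
    As a hypothetical illustration of that habit, the regression test for an escaped bug can be tiny. The bug number and the pricing function below are invented; the point is that the failing case gets pinned down before the patch ships.

    ```python
    # Hypothetical regression test written after a bug reached production.
    # discount_for_order and bug #1234 are invented for illustration.
    def discount_for_order(total):
        """Apply a 10% discount to orders of 100.00 or more (the corrected rule)."""
        return round(total * 0.9, 2) if total >= 100 else total

    def test_bug_1234_discount_applies_at_exactly_100():
        # Escaped bug: orders of exactly 100.00 got no discount because the
        # original code compared with > instead of >=.
        assert discount_for_order(100.00) == 90.00

    def test_bug_1234_smaller_orders_unchanged():
        assert discount_for_order(99.99) == 99.99
    ```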

  • Eric M Russell (7/16/2015)


    Where I've been working for the past few years, we develop ETL and other supporting applications for a data warehouse. The source data originates from about 200 clients, the ingest files are not all standardized, and there are client-specific metrics, programming, and custom reporting. Therefore, an XP development lifecycle and Continuous Delivery are not only routine but required. We just can't bundle all our various micro projects into one deliverable. At any given moment they have to be integrated, because a small change for one client can potentially break the entire ETL process for all clients if it's not planned, coded, and tested properly.

    Testing method? tSQLt, Selenium, MS Unit Test framework, something else?

  • David.Poole (7/16/2015)


    jay-h (7/16/2015)


    ...even if it passes the testing (can you really test in a day??)

    With certain caveats yes you can.

    The idea is that you write the tests that express the business requirement and the technicalities of satisfying that business requirement. You then write the code to pass the tests.

    For example, if the business requirement is to enter a date of birth for a person aged from 18 up to 120 years old, then you need tests for valid dates, null values, leap year maths, etc.

    As long as those tests run as part of the build, you can test everything in a remarkably short space of time. You can run several thousand tests in minutes, so if anything your app is more thoroughly tested.

    There are tools for user interface testing, data testing and infrastructure testing. Once you start down the automated test route you find that you explore new opportunities to test that you wouldn't have previously attempted.

    Plus, it's not necessarily the case that code written today needs to be tested today. You might write code Wednesday, test Thursday, release Friday. However, code written Tuesday was tested on Wednesday and released Thursday.

    Play with the cycle, do what works. There are no rules about every change being released, nor every cycle being the same. In fact, if you aren't learning from your process what works and adjusting the cycle/testing/cadence, you're doing it wrong.

