SQLServerCentral Editorial

When Work Isn't Done

Software development is a challenge for all of us, with competing demands and the need to ensure our code solves a problem correctly, efficiently, and completely. Juggling the workload by yourself is one thing, but add in a team of developers and the complexity quickly grows.

The real world is chaotic, and despite the best efforts of project managers and scrum masters, the software development life cycle doesn't always proceed smoothly. I wonder how many of you run into the following situation, and how you deal with it.

Developer 1 gets a piece of work; let's call it A. They complete this and send it to the QA team. Somewhere during this process, Developer 2 gets a different piece of work (B) and writes code. They send this to QA before A is completely tested.

Now, Developer 1 finds a mistake. Something doesn't work, or they realize their solution is incomplete. The QA environment contains A + B, but A doesn't work and needs revision, while B passes testing and needs to be deployed. If your codebase and QA environment have both A and B in them, how do you strip out A or B and ensure B is deployed to production but A isn't?

If this is C# or Java, you might have one solution, even if both changes are in the same class. If this is database code, you might have a different set of issues to deal with.
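For application code in git, one way out is to revert A on the branch that feeds QA. What follows is only a minimal sketch: the branch name and commit id are hypothetical, and it assumes A landed as a single merge commit that can be reverted cleanly.

```python
import subprocess

def git(*args: str) -> str:
    """Run a git command, raising if it fails, and return its output."""
    result = subprocess.run(
        ["git", *args], check=True, capture_output=True, text=True
    )
    return result.stdout.strip()

# Hypothetical: the merge commit that brought feature A into the QA branch.
A_MERGE_COMMIT = "abc1234"

git("checkout", "qa")
# Revert the whole merge of A; "-m 1" keeps the mainline parent's side.
git("revert", "--no-edit", "-m", "1", A_MERGE_COMMIT)
# The qa branch now carries B (and everything else) without A's changes.
```

Database code is rarely this tidy: if A and B touched the same stored procedure or schema object, a revert of one can silently undo part of the other, which is exactly the different set of issues mentioned above.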

Really, the question is: can you reorder work in your deployment process? I find many customers don't consider this when they are evaluating their software pipeline. They somehow assume that if code gets to QA, it's good, which is nicely optimistic but not realistic. At some point, we'll deploy code to QA that doesn't work. The more developers we have, the more likely this is; and the more demands on our time, the more likely we need to reorder work and release one thing but not another.

As a DB developer and DBA at a company 20 years ago, I built a process that forced us to reset the QA environment and redeploy B only (using a branch of code that stripped out A) for re-testing. This ensured that we tested exactly what was going to be deployed. However, I find a lot of organizations can't do this or don't want to. They would rather hope that a human can either extract all of B or strip out all of A, and release partially tested code without issues.

I find that to be a poor idea. In this era of regular staff changes, staff of varying quality, and the high complexity of software, this is asking for mistakes. With cheap hardware, virtualization, and the ability to provision copies of environments, we ought to do better.
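Returning to the reset I described above: here is a minimal sketch of that kind of rebuild, assuming git. The tag and commit ids are hypothetical, and a real pipeline would also restore the QA database to a known state before redeploying.

```python
import subprocess

def git(*args: str) -> None:
    """Run a git command, stopping the script if it fails."""
    subprocess.run(["git", *args], check=True)

# Hypothetical refs: the tag running in production and feature B's commits.
PROD_TAG = "prod-2024-05-01"
B_COMMITS = ["def5678", "0123abc"]

# Rebuild the QA branch from exactly what production runs today...
git("checkout", "-B", "qa-reset", PROD_TAG)
# ...then bring over only B's work, leaving A out entirely.
for commit in B_COMMITS:
    git("cherry-pick", commit)
# Deploy qa-reset to a freshly provisioned QA environment and re-test.
```

The point of starting from the production tag rather than the existing QA branch is that nothing from A can linger: QA tests the exact combination that will ship.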

How do you handle this today? Depend on humans to not make mistakes? Hope for the best? Or follow a repeatable, reliable process that accounts for inexperience and human error?
