By complete coincidence I was "on the outside" dealing with an issue relating to TSB on Wednesday and was very suspicious about the data. On Friday Phil's editorial turned up. By Sunday I had established that the problem was caused by a third party - not me, not the bank. The doubt in everyone's minds made the situation harder to deal with. Lesson for me: odd and unpleasant coincidences do happen.
I think good may come of this mess. I have already saved Phil Factor's editorial and may use it in the future.
The best practitioners of Agile I have known were absolute demons for repeated automated testing. Theirs was a development project, and _everything_ was tested at several different levels. They were also rigorous about being able to roll back, and they had very few regression failures. That, though, was development of new function.
Something Phil does not emphasise in his article (you can only cover so much in limited space) is the "data migration" aspect of this. Not only does "function" have to work, but the data has to be right too. When you have existing data, then migration can be a substantial project in its own right, and it needs to be tested too. You need to be able to _prove_ it has worked properly.
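That kind of proof can be partly automated. The sketch below, using Python's built-in sqlite3 module, shows one minimal approach: compare row counts and then the actual row data between source and target copies of a table. The table name, columns, and `verify_migration` helper are all illustrative assumptions, not anything from TSB's systems or Phil's article.

```python
import sqlite3

def verify_migration(src_conn, dst_conn, table, key_col):
    """Compare row counts and row-by-row data between source and
    target copies of `table`. All names here are illustrative."""
    checks = {}

    # 1. Row counts must match exactly.
    src_count = src_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    dst_count = dst_conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]
    checks["row_count"] = (src_count == dst_count)

    # 2. Compare full rows keyed on the primary key, not just counts --
    #    counts can match while the data itself is wrong. This assumes
    #    key_col is the first column in the table definition.
    src_rows = {r[0]: r for r in src_conn.execute(
        f"SELECT * FROM {table} ORDER BY {key_col}")}
    dst_rows = {r[0]: r for r in dst_conn.execute(
        f"SELECT * FROM {table} ORDER BY {key_col}")}
    checks["row_data"] = (src_rows == dst_rows)

    return checks

# Toy demonstration with two in-memory databases standing in for
# the pre- and post-migration systems.
src = sqlite3.connect(":memory:")
dst = sqlite3.connect(":memory:")
for conn in (src, dst):
    conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
src.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 250.5)])
dst.executemany("INSERT INTO accounts VALUES (?, ?)", [(1, 100.0), (2, 250.5)])

print(verify_migration(src, dst, "accounts", "id"))
# → {'row_count': True, 'row_data': True}
```

A real migration would of course need far more than this (column-level reconciliation, aggregate checksums, referential-integrity checks across millions of rows), but even a check this simple is the difference between hoping the migration worked and being able to demonstrate it.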
There are times when not being able to roll something back may be acceptable. I've done it myself, and I've heard it referred to as "a success-oriented strategy" (even at the time, the description was intended to be ironic). If you cannot roll back, then you have to accept the consequences of failure. In TSB's case that really shouldn't have been accepted.
As Phil says, "it's horses for courses" — or it should be, anyway.