I think it was a worthwhile attempt to set out procedures for a task like this, but honestly I found it a bit difficult to follow. Every case is different: some models evolve, others need to run within strict frameworks. I work by the evolution model myself; it takes more time, but fewer staff are involved. Here are the main points from my point of view.
1. All data sources should be recorded, or failing that, the original raw data should be kept.
2. All code that alters this data should be kept. In my case that is either Perl code or T-SQL code in the form of stored procedures (SPROCs).
If the above is done sensibly, in both file directories and databases, why worry?
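On the database side, point 1 can be as simple as a small registry table that records where each raw data set came from and where the archived copy lives. A minimal T-SQL sketch (the table and column names here are my own invention, not from the original post):

```sql
-- Hypothetical registry of raw data sources (all names illustrative).
CREATE TABLE dbo.DataSourceLog (
    SourceId    INT IDENTITY(1,1) PRIMARY KEY,
    SourceName  NVARCHAR(200) NOT NULL,           -- e.g. supplier extract, survey file
    FilePath    NVARCHAR(400) NULL,               -- where the raw file is archived
    ReceivedOn  DATETIME NOT NULL DEFAULT GETDATE(),
    LoadedBy    SYSNAME  NOT NULL DEFAULT SUSER_SNAME(),
    Notes       NVARCHAR(1000) NULL
);
```

One row per incoming data set, written at load time, is usually enough to answer "where did this number come from?" later on.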
I have a few other tips.
1. Keep an error log that everybody has access to.
2. Name each version of the system being implemented.
3. Keep everyone informed through email updates, one for each major new release.
4. Keep testing/using the system with real-life data requests.
5. Separate the development of the user interface (in my case an Excel VBA application), the request file handling system (Perl based), and the analytical database (MS SQL Server).
6. Keep a record of all results that have been returned to users.
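Point 6 can again be a single audit table, written to by whichever SPROC returns the results. A minimal sketch (all names hypothetical, assuming a named-release scheme as in tip 2):

```sql
-- Hypothetical audit table for results returned to users (all names illustrative).
CREATE TABLE dbo.ResultLog (
    ResultId      INT IDENTITY(1,1) PRIMARY KEY,
    RequestRef    NVARCHAR(100) NOT NULL,          -- ties back to the original data request
    ReturnedTo    NVARCHAR(200) NOT NULL,          -- user or team the results went to
    ReturnedOn    DATETIME NOT NULL DEFAULT GETDATE(),
    ResultFile    NVARCHAR(400) NULL,              -- archived copy of the output file
    SystemVersion NVARCHAR(50)  NULL               -- named release of the system used
);
```

Recording the system version alongside each returned result means an old answer can always be traced back to the code that produced it.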