First of all, thank you for finally writing this DW article.
I believe version 0 should include the slow, preprocessed way of flattening the data, plus one or two aggregated ways of looking at the report data (analysis reports). Although the stakeholder doesn't explicitly ask for the additional views, they help you understand the data in ways you will find interesting, and you essentially become even more engaged. While creating the aggregates you will invariably come up with faster, more efficient, and more elegant ways of flattening and preprocessing.
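To make that concrete, here is a minimal sketch of what I mean, assuming a toy schema I made up purely for illustration (dbo.Sales, dbo.Customer, and the column names are placeholders, not anything from the article):

-- Version 0: the slow but obvious flatten into a preprocessed table.
-- All table and column names here are hypothetical placeholders.
IF OBJECT_ID('tempdb..#SalesFlat') IS NOT NULL DROP TABLE #SalesFlat;

SELECT  s.SaleID,
        s.SaleDate,
        c.CustomerName,
        c.Region,
        s.Quantity * s.UnitPrice AS LineTotal
INTO    #SalesFlat
FROM    dbo.Sales    AS s
JOIN    dbo.Customer AS c ON c.CustomerID = s.CustomerID;

-- One aggregated way of looking at the report data (an analysis report
-- the stakeholder never asked for, but which teaches you the data).
SELECT  Region,
        YEAR(SaleDate) AS SaleYear,
        SUM(LineTotal) AS TotalSales,
        COUNT(*)       AS SaleCount
FROM    #SalesFlat
GROUP BY Region, YEAR(SaleDate)
ORDER BY Region, SaleYear;

The aggregate at the bottom is exactly the kind of extra view that tells you whether the flatten even makes sense.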
As a programmer, the scripts I write initially use many procedural steps and many temp tables. While debugging, the temp tables let me see how the data is being massaged. Later this gets condensed, and some of the temp tables become CTEs. Lately I have also moved the filters into CASE expressions that produce 1/0 flag columns. The advantage of both methods is that I can rerun the scripts repeatedly, daily, etc. The CASE flag columns let me quickly see the results as a pie of sorts, the filtered and the unfiltered side by side.
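A rough sketch of that CASE-flag idea, using the same made-up tables as above (one former temp table condensed into a CTE, filters kept as 1/0 columns instead of WHERE clauses):

WITH SalesFlat AS (
    -- What used to be a debugging temp table, condensed into a CTE.
    SELECT  s.SaleID,
            c.Region,
            s.Quantity * s.UnitPrice AS LineTotal,
            -- Filters as 1/0 flags instead of WHERE clauses, so no rows are thrown away.
            CASE WHEN YEAR(s.SaleDate) = YEAR(GETDATE()) THEN 1 ELSE 0 END AS IsCurrentYear,
            CASE WHEN s.Quantity * s.UnitPrice >= 1000   THEN 1 ELSE 0 END AS IsLargeSale
    FROM    dbo.Sales    AS s
    JOIN    dbo.Customer AS c ON c.CustomerID = s.CustomerID
)
SELECT  Region,
        COUNT(*)           AS AllRows,          -- the unfiltered pie
        SUM(IsCurrentYear) AS CurrentYearRows,  -- one filtered slice
        SUM(IsLargeSale)   AS LargeSaleRows     -- another filtered slice
FROM    SalesFlat
GROUP BY Region;

Because nothing is filtered out, rerunning it daily always shows each filtered slice against the full population.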
In version 1 the above scripts are morphed into a stored procedure, the parameters are refined, and the proc is compared against an SSIS/ETL package. The version 1 underlying base tables are also expanded to include aggregate/calculated columns, in an effort to solve or produce the additional reports that you have now gathered are needed. Then wash, rinse, and repeat.
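And a minimal sketch of that version 1 step, again with placeholder names of my own (rpt.SalesFlat standing in for the persisted flattened table, the date parameters standing in for whatever params you end up refining):

-- Version 1: the script wrapped as a parameterized stored procedure.
CREATE OR ALTER PROCEDURE rpt.usp_LoadSalesFlat
    @FromDate date,
    @ToDate   date
AS
BEGIN
    SET NOCOUNT ON;

    -- Rebuild the flattened reporting table for the requested window only,
    -- so the proc can be rerun daily without touching older data.
    DELETE FROM rpt.SalesFlat
    WHERE  SaleDate >= @FromDate AND SaleDate < @ToDate;

    INSERT INTO rpt.SalesFlat (SaleID, SaleDate, CustomerName, Region, LineTotal)
    SELECT  s.SaleID,
            s.SaleDate,
            c.CustomerName,
            c.Region,
            s.Quantity * s.UnitPrice
    FROM    dbo.Sales    AS s
    JOIN    dbo.Customer AS c ON c.CustomerID = s.CustomerID
    WHERE   s.SaleDate >= @FromDate
    AND     s.SaleDate <  @ToDate;
END;

This is the piece you can then time and compare against an SSIS/ETL package doing the same load.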