I think you may have missed the humour in Tom's reply to you, but never mind.
There's nothing incompatible between that general BOL advice and what's been said so far. The key phrases are things like "for many applications" and "where exact numeric behaviour is required". In many financial applications double-precision arithmetic is preferred; the client I was referring to (a hedge fund, which uses MATLAB) is looking for trends and shapes over time in extremely large data sets.
The alternative internal format for our needs would be DECIMAL(38,20), which requires 17 bytes compared with 8 for float. More importantly, processing hundreds of billions of records that way is at least an order of magnitude slower than using float. Naturally, we would not use floating-point arithmetic if it gave us wrong answers 😛
The excellent point Tom made is that floating-point numbers are an exact representation for integers over a very large range, a point that is not well understood by most DBAs.
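Tom's point is easy to demonstrate in a few lines. This is a Python sketch (my choice of language for illustration, not anything from the thread), but the behaviour is a property of IEEE 754 binary64 itself, the same double format MATLAB and SQL Server's float(53) use: with a 53-bit significand, every integer of magnitude up to 2**53 is represented exactly, and only beyond that do gaps appear.

```python
# IEEE 754 binary64 has a 53-bit significand, so every integer with
# absolute value up to 2**53 (about 9.007e15) is represented exactly.
limit = 2 ** 53

# Within the range, a double round-trips the integer exactly.
assert float(limit - 1) == limit - 1
assert int(float(limit - 1)) == limit - 1

# At the boundary, the gap between adjacent doubles grows to 2,
# so 2**53 + 1 can no longer be distinguished from 2**53.
assert float(limit) == float(limit + 1)

print(f"Integers exact up to {limit:,}")
```

The practical upshot for money: amounts held as integer cents fit exactly in a double up to roughly 90 trillion dollars, which is why "float is always inexact" is too coarse a rule of thumb.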
So, is it better to use floating point or a limited-precision 'exact' numeric in a given monetary-value application? It depends, of course 🙂