I got it wrong, though I have always maintained that binary representations are more accurate than decimal ones (which they are, of course).
Sounds to me like the real problem here is that SQL Server treats DECIMALs too much as strings rather than as numbers. Treating them as strings even when you are doing arithmetic on them (the only reason you "need" the absurdly high "precision" and run into the resulting truncation issue in the first place) indicates a failure to grasp the difference between the semantics of a data type and its representation.
After all, there is nothing to stop a number representation based on the decimal system (instead of binary) from also using a floating decimal point. Nor is there any good reason (assuming you want to preserve the on-disk representation for historical reasons) why SQL Server couldn't convert the disk-based strings into a format that treats the intended numbers as numbers whenever it needs to manipulate them.
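To sketch the point: Python's decimal module is exactly such a beast, a floating-point number whose radix is 10 rather than 2. (Python here is purely illustrative and says nothing about SQL Server's internals; the precision value below is an arbitrary choice for the demo.)

```python
from decimal import Decimal, getcontext

# Binary floating point cannot represent 0.1 or 0.2 exactly:
print(0.1 + 0.2)        # 0.30000000000000004

# A base-10 floating-point type: a coefficient plus a decimal exponent.
getcontext().prec = 38  # working precision in significant digits

a = Decimal("0.1")
b = Decimal("0.2")
print(a + b)            # 0.3 -- exact, because the radix is 10

# Because the decimal point floats, no fixed scale has to be declared up
# front: values of wildly different magnitude combine cleanly, without the
# scale truncation that fixed-point DECIMAL arithmetic forces on you.
tiny = Decimal("1E-30")
huge = Decimal("1E+30")
print(tiny * huge)      # 1
```

So "decimal" and "fixed point" are independent properties; a decimal type can keep exact decimal semantics while still letting the exponent float.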
So it's a (documented) bug in my book.