Nice question. It illustrates the pitfalls of the rules for combining decimal types in arithmetic (a consequence of the silliness of the decimal type's definition) with a simple piece of arithmetic that throws away five orders of magnitude of accuracy for no good reason at all.
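To see where the accuracy goes, here is a small sketch of the kind of combination rule involved. It models the documented SQL Server rule for decimal multiplication (result precision p1+p2+1, result scale s1+s2, capped at 38 digits with the scale truncated but kept at a minimum of 6); the function name and the exact figures are mine, for illustration only:

```python
def result_type_mul(p1, s1, p2, s2, cap=38, min_scale=6):
    """Result precision/scale for decimal(p1,s1) * decimal(p2,s2),
    following the SQL Server-style combination rule (assumed here):
    nominal precision p1+p2+1 and scale s1+s2, then capped at `cap`
    total digits, sacrificing scale (down to `min_scale`) to keep
    room for the integral part."""
    p, s = p1 + p2 + 1, s1 + s2
    if p > cap:
        # Keep (p - s) integral digits; whatever scale is left after
        # the cap survives, but never less than min_scale.
        s = max(min(s, cap - (p - s)), min_scale)
        p = cap
    return p, s

# Multiplying two decimal(38,10) values nominally needs decimal(77,20);
# the cap forces decimal(38,6), silently discarding fractional digits
# even when the actual values are tiny and would have fit comfortably.
print(result_type_mul(38, 10, 38, 10))
```

The point is that the scale reduction is driven entirely by the declared types, not by the magnitudes of the operands, which is how a harmless-looking multiplication can lose digits it never needed to lose.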
The sooner we get hardware and software that support the modern floating point standard, instead of decades-outdated nonsense (for both decimal and floating point) that forces us to choose one wrong type or another because the right type isn't available, the better. But I don't see any sign of it happening. Partly that's because almost everybody is wedded to one side or the other of the "let's do decimal/floating point because floating point/decimal is hopelessly wrong" wars, and isn't even aware that the latest IEEE standard provides the best of both worlds and the inherent faults of neither. And partly it's because the hardware won't become common until it's clear that the software will use it, and the software companies probably won't plan to use it until after both (a) the hardware has become common and (b) the ignoramuses on standards committees wake up and add a datatype that represents it to the SQL standard and lots of other language standards.
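For a taste of what "the best of both worlds" means in software today: Python's `decimal` module implements decimal floating point in the spirit of the IEEE standard's decimal formats (decimal64/decimal128). Values are exact in decimal, like a fixed-point decimal type, but the precision floats with the value instead of being welded to a declared scale. The choice of 34 digits below (matching decimal128) is mine:

```python
from decimal import Decimal, getcontext

# 34 significant digits, the precision of IEEE decimal128
# (an assumed configuration, just for illustration).
getcontext().prec = 34

a = Decimal("0.1")   # exact, unlike binary floating point where 0.1 isn't representable
b = a * a            # the result keeps full precision; no fixed-scale truncation
print(b)
```

Because the precision belongs to the arithmetic context rather than to each operand's declared type, there is no type-combination rule to silently throw digits away.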