A lot of this stems from the fact that a binary floating-point representation cannot exactly represent certain decimal values. In particular, any decimal fraction ending in the digit "1" (.1, .01, .001, and so on) is a problem. This is because the representation can be thought of as a sum of fractions with powers of two as the divisors, e.g.:
0.125 is represented as 0/2 + 0/4 + 1/8 + 0/16 + 0/32 ...
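You can see this directly in most languages. A quick sketch in Python (chosen just for illustration; the same applies to T-SQL FLOAT): `Decimal(x)` shows the exact value the binary double actually stores.

```python
from decimal import Decimal

# 0.125 = 1/8 fits the binary expansion exactly...
print(Decimal(0.125))  # 0.125
# ...but 0.1 does not; the stored double is slightly larger than 1/10.
print(Decimal(0.1))    # 0.1000000000000000055511151231257827021181583404541015625
```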
With this expansion, "x.1", "x.01", "x.001", etc., are all impossible to express exactly, so the expansion approximates the number to within the number of bits allocated to the fractional part.
For .1, the expansion (numerators only) is 0 0 0 1 1 0 0 1 1 0 0 1 1 ..., repeating the block "0 0 1 1" infinitely but never reaching .1 exactly. Depending on which bit you stop the expansion at, you get a different value than you would by stopping one bit earlier or later. In this example, the first four distinct successive values are .0625, .09375, .09765625, and .099609375.
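The successive approximations above can be generated greedily: at each bit position, include the power of two if it fits under the target without overshooting. A small sketch (again in Python, purely illustrative):

```python
# Build the binary expansion of 0.1 one bit at a time, recording each
# new (distinct) truncated approximation.
target = 0.1
value = 0.0
seen = []
for bit in range(1, 13):          # first 12 fractional bits
    weight = 2.0 ** -bit          # 1/2, 1/4, 1/8, ...
    if value + weight <= target:  # keep the bit only if it fits
        value += weight
        seen.append(value)
print(seen)  # [0.0625, 0.09375, 0.09765625, 0.099609375, 0.099853515625]
```

Note the kept bits land at positions 4, 5, 8, 9, 12, ... — exactly the repeating "0 0 1 1" pattern described above.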
For .01, .001, etc., the expansion is a different sequence, but still never exactly reaches the decimal value.
So - depending on the number of bits in your data type, the number of bits available for the data type(s) used in your computations, and whether the result is rounded or truncated, you can get MANY different values for a given number with a fraction ending in "1".
This isn't a T-SQL (or any kind of SQL) problem. It is basic binary arithmetic when dealing with fractional decimal values.
The real point of this is that if you are dealing with fractional decimal values, you need to pick the correct data type AND avoid accidental conversions to other data types. If you are writing financial software and need to be accurate to the penny (not just "close", but exact), many systems use a large integer data type (counting in pennies), or even a VARCHAR type, and do explicit conversions when arithmetic is required.
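The integer-pennies approach can be sketched in a few lines (Python here only for demonstration; the same idea applies with BIGINT in SQL). Summing a $0.10 charge ten times drifts in floating point but stays exact in whole cents:

```python
# Floating point: ten additions of 0.1 do not sum to exactly 1.0.
float_total = sum([0.1] * 10)
print(float_total == 1.0)  # False (the sum is 0.9999999999999999)

# Integer pennies: exact, with conversion to dollars only for display.
cents_total = sum([10] * 10)  # ten charges of 10 cents each
print(cents_total)            # 100
print(cents_total / 100)      # 1.0 dollars, converted only at the edge
```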