• I'll try this again. Intel processors (and most other modern hardware) use the IEEE 754 standard floating point representations. Single precision (REAL) uses four bytes and provides 24 bits for the significand; double precision (FLOAT) uses eight bytes and provides 53 bits. The rounding issue discussed in this post is not caused by the ROUND() function; it happens when the character string "165.755" is first converted to floating point. That means you should see the same behavior in any programming environment, on any hardware, that follows the same IEEE standard.
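
    To see that ROUND() is innocent, here is a minimal T-SQL sketch (assuming SQL Server, since REAL, FLOAT, and MONEY are the types under discussion; exact display formatting may vary by version). The discrepancy is already present immediately after the CAST from the character string, before ROUND() ever runs:

        -- ROUND() behaves correctly on the value it is actually given;
        -- that value is no longer exactly 165.755 after the conversion.
        SELECT ROUND(CAST('165.755' AS FLOAT), 2);          -- 165.75  (the double is a bit BELOW 165.755)
        SELECT ROUND(CAST('165.755' AS DECIMAL(9,3)), 2);   -- 165.760 (the decimal is exact)

        -- STR() exposes more digits than the default display:
        SELECT STR(CAST('165.755' AS FLOAT), 25, 16);       -- roughly 165.7549999999999955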

    Given the character string "165.755", the system computes the floating point value. Internally, the CPU (I believe) carries the calculation out to 80-bit extended precision; the result is then shortened, with rounding, to 24 or 53 significant bits, depending on the target precision. In the case of 165.755, the 25th significant bit is a 1, so the single-precision value is rounded up and ends up a little larger than 165.755. The 54th bit is a 0, so the double-precision value is rounded down and ends up a little smaller than 165.755. All of this happens when the characters "165.755" are first converted to a floating point representation and has nothing to do with the ROUND() function.
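
    If you want to check the direction of the rounding without paper and pencil, casting the values to BINARY exposes the raw IEEE 754 bit patterns, and comparing the two widths shows which side of 165.755 each one landed on. This is again a sketch assuming SQL Server; the hex values in the comments are what I work out by hand, so verify them yourself:

        -- Raw bit patterns of the stored values (sign, exponent, significand).
        SELECT
            CAST(CAST('165.755' AS REAL)  AS BINARY(4)) AS real_bits,   -- expect 0x4325C148
            CAST(CAST('165.755' AS FLOAT) AS BINARY(8)) AS float_bits;  -- expect 0x4064B828F5C28F5C

        -- Single precision rounded up, double precision rounded down,
        -- so the REAL value should compare greater than the FLOAT value:
        SELECT CASE
                   WHEN CAST('165.755' AS REAL) > CAST('165.755' AS FLOAT)
                   THEN 'single > double, as predicted'
                   ELSE 'unexpected'
               END;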

    This is why floating point types are called approximate types, and why they are not used where rounding in the last decimal place is critical. If you need exact decimal values, use DECIMAL, NUMERIC, or MONEY. The behavior may look random, but it is completely consistent and predictable. You just need to get out paper and pencil and work out the binary representation, to 80 bits or so, of every floating point value you plan to use.
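
    For completeness, the exact types behave the way you would expect, because DECIMAL stores the decimal digits exactly and ROUND() then operates on the true value (again T-SQL, assuming SQL Server):

        -- DECIMAL(9,3) holds 165.755 exactly, so rounding is predictable.
        DECLARE @d DECIMAL(9,3) = 165.755;
        SELECT ROUND(@d, 2);       -- 165.760, no surprises
        SELECT @d - 165.755;       -- 0.000: the value was stored exactly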