# Real to Float

• Naveen PK (8/12/2015)

Jacob Wilkins (8/12/2015)

These questions are always fun, since I have to do some playing with binary representations to make sure I fully understand what's going on. Fortunately in this case it's pretty straightforward.

The trick here is that while REAL will never display more than 7 digits, its 24-bit mantissa allows it to store representations of values that go far beyond 7 decimal digits. Combined with the fact that many decimal fractions have no exact representation in binary, you can get results like this.

In this case, 43.3906060284202 represented out to 24 binary digits is 101011.011000111111111011, which converted back to decimal is 43.39060592651367187500. Rounding to 15 digits, we get 43.3906059265137, the answer. It's not really a magical addition of digits; the initial value stored for the REAL already represented this number. When converted to float, we now get to see it rounded to 15 decimal digits instead of 7.
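You can reproduce this rounding outside SQL Server with any IEEE-754 single-precision round-trip. A minimal sketch in Python, assuming the `struct` module's `'f'` format (32-bit IEEE-754, the same encoding REAL uses):

```python
import struct

# Round-trip the literal through 32-bit IEEE-754, the format REAL uses.
v = struct.unpack('<f', struct.pack('<f', 43.3906060284202))[0]
# The 24-bit mantissa stores exactly 43.390605926513671875, which shows
# as 43.39061 at 7 digits (REAL) and 43.3906059265137 at 15 (FLOAT).
assert v == 43.390605926513671875
```

The assertion passing confirms there is no "magical addition of digits": the nearest 24-bit value was chosen at assignment time, and every later display just rounds that one stored value.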

It's important to note this doesn't really have to do with the fact that you're converting to the approximate float type, or the visible truncation of the decimal number to 7 digits by assigning the value to REAL. It has to do with the inexact binary representation of the decimal fraction, coupled with the availability of only 24 bits to store it.

With respect to the target data type, if @real is converted to DECIMAL(15,13), you get the same result, for the same reason. The binary representation to 24 digits of precision doesn't exactly represent the original decimal fraction.

With respect to the truncation of the decimal version of the number, you can confirm that by changing the value to a simple decimal fraction that has no exact representation in binary, 0.1.

`DECLARE @real REAL=0.1`

`SELECT CAST(@real as FLOAT)`

`SELECT CAST(@real as DECIMAL(38,37))`
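The same check can be made outside SQL. A sketch in Python, again assuming `struct`'s 32-bit `'f'` format for REAL, with the `decimal` module used to display the stored value exactly:

```python
import struct
from decimal import Decimal

# Store 0.1 in 32-bit IEEE-754 (REAL) and read back the exact value.
r = struct.unpack('<f', struct.pack('<f', 0.1))[0]
print(Decimal(r))        # 0.100000001490116119384765625
assert r != 0.1          # the 24-bit value differs from the 53-bit double 0.1
```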

If the initial variable declaration is for a float, having 53 bits for the mantissa means that it takes a bit more work to find numbers whose binary representations are truncated in a way that results in a different number being displayed, but it's still not too hard.

`DECLARE @float FLOAT=9999.1`

`SELECT @float`

`SELECT CAST(@float as DECIMAL(38,34))`
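The 9999.1 case can likewise be confirmed outside SQL. A sketch in Python: because the fraction ends in .1, even 53 bits cannot represent it exactly, and `Decimal` exposes what the stored double really is:

```python
from decimal import Decimal

# 9999.1 has no exact binary representation even at double precision;
# the exact value of the stored bits differs from the decimal literal.
print(Decimal(9999.1))
assert Decimal(9999.1) != Decimal('9999.1')
```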

Cheers!

Thanks Jacob, this is really useful. Amazing explanation.

Thank you for the question!

• Sean Lange (8/12/2015)

The issue here is that the value was truncated when it was a REAL. Then the value changed a little bit when being inserted in a FLOAT.

When the character string is used to assign a value to a REAL, it is converted to REAL with loss of precision if the REAL type cannot represent the number that the string denotes, and that is what happens here. When the REAL is then converted to a FLOAT, the exact value of the REAL is preserved - it is not "changed a little bit", it stays the same. You can check what value the REAL holds by adding "SELECT STR(@REAL,16,13);" to the question script after the variable has been declared and assigned, and if you do that you will see that its value is exactly the value that the FLOAT it is assigned to gets.

The explanation with the answer has got that wrong too ("then implicit conversion adds the decimal values to become 43.3906059265137. Exact logic of how the decimal portion got added is unknown" is pure nonsense), but otherwise it's a good question because it addresses a point that 90% of SQL programmers don't understand (because they've heard too many myths about floating point).

I wish Microsoft would add support for floats whose exponent represents a multiplier that is an integer power of 10 (decimal floating point has been in the FP standard for a long time now) to SQL Server, so that we could stop having the representation problem. Perhaps then all the stupid myths would go away, because the real problem that caused the myths to be invented would be eliminated (but probably not - the other SQL myths don't seem to be losing their following).

edit: I see that DAVID W has already shown that there's no evil magic adding bits, by producing the 24-bit string that is the mantissa of both the REAL value and the FLOAT value (the FLOAT has the same mantissa value; the mantissa bit pattern is padded to 53 bits with 0s when the REAL is assigned to the FLOAT, and padding with 0s doesn't change the mantissa's value). It's easier (although not as complete a proof) just to look at the values using STR appropriately.
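The zero-padding itself can be inspected directly. A hedged sketch in Python: it extracts the stored fraction bits of the single- and double-precision encodings (23 and 52 stored bits respectively) and checks that widening a normal single to a double just appends 29 zero bits:

```python
import struct

x = struct.unpack('<f', struct.pack('<f', 43.3906060284202))[0]  # the REAL value
bits32 = struct.unpack('<I', struct.pack('<f', x))[0]
bits64 = struct.unpack('<Q', struct.pack('<d', x))[0]
frac32 = bits32 & ((1 << 23) - 1)   # 23 stored fraction bits (single)
frac64 = bits64 & ((1 << 52) - 1)   # 52 stored fraction bits (double)
# Widening pads the fraction with 29 zero bits, so the value is unchanged.
assert frac64 == frac32 << 29
```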

Tom

• Sean Lange (8/12/2015)

wedneyyuri (8/12/2015)

Good question.

So, what's the correct way to do the conversion?

`DECLARE @real REAL= 43.39061;`

`SELECT @real`

`SELECT CONVERT(FLOAT, @real)`

`SELECT CAST(@REAL AS FLOAT)`

If you care about maintaining the value the right way is to use an exact numeric instead of an approximate one. 🙂

No, that is an oversimplification and an overgeneralization. It depends on the context.

In a financial system, for instance, you would be right. Money is a very non-approximate value.

But in a scientific application, you would be very wrong. In science, it is all about the number of significant digits, which depends on the accuracy of the measurements. And for those applications, float and/or real are perfect. In that context, the answer to the question asked by wedneyyuri would be: there is no correct way, because the conversion would be invalid. In a scientific application, the number of significant digits can only go down, never up, so any conversion from a floating point value with less digits to one with more digits would be an error in the application design.

Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
Visit my SQL Server blog: https://sqlserverfast.com/blog/
SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

• This is why I never use approximate values.

Being in a financial field, I've never really had the need to use those.

---------------
Mel. 😎

• Hugo Kornelis (8/13/2015)

Good one. Thanks for sharing.

Thanks,
Naveen.
Every thought is a cause and every condition an effect

• A lot of this stems from the fact that a binary "floating point" representation cannot exactly represent certain decimal values. In particular, any fractional part of a number ending in "1" is a problem. This is because the representation can be thought of as a polynomial expansion with powers of two as the divisors, e.g.:

0.125 is represented as 0/2 + 0/4 + 1/8 + 0/16 + 0/32 ...

With this expansion, "x.1", "x.01", "x.001", etc., are all impossible to express exactly, so the expansion approximates the number to the length of the bits allocated to the fractional part.

For .1, the expansion (numerators only) is 0 0 0 1 1 0 0 1 1 0 0 1 1 ... infinitely repeating the "0 0 1 1", but never reaching .1 exactly. Depending on at which bit you stop the expansion, you may get a different value than stopping at the preceding or following bit. In this example, the first 4 different successive values are: .0625, .09375, .097656, .099609, ...
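Those successive truncations can be generated mechanically. A sketch in Python using exact rational arithmetic (`fractions.Fraction`) to build the greedy binary expansion of 0.1 bit by bit:

```python
from fractions import Fraction

target = Fraction(1, 10)          # the decimal value 0.1, held exactly
acc = Fraction(0)
truncations = []
for i in range(1, 10):            # candidate bits 1/2, 1/4, ..., 1/512
    if acc + Fraction(1, 2 ** i) <= target:
        acc += Fraction(1, 2 ** i)
        truncations.append(float(acc))
# Each kept bit moves closer to 0.1 but never reaches it.
print(truncations)  # [0.0625, 0.09375, 0.09765625, 0.099609375]
```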

For .01, .001, etc., the expansion is a different sequence, but still never exactly reaches the decimal value.

So - depending on the number of bits in your data type, the number of bits available for the data type(s) used in your computations, and whether the result is rounded or truncated, you can get MANY different values for a given number with a fraction ending in "1".

This isn't a T-SQL (or any kind of SQL) problem. It is basic binary arithmetic when dealing with fractional decimal values.

The real point of this is that if you are dealing with fractional decimal values, you need to pick the correct data type AND avoid accidental conversions to other data types. If you are writing financial software and need to be accurate to the penny (not just "close", but exact), many systems use a large integer data type (counting in pennies), or even a VARCHAR type, and do explicit conversions when arithmetic is required.
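As an illustration of the integer-cents approach (a hypothetical sketch, not taken from any particular system): summing ten dimes as binary floats drifts off a dollar, while integer cents stay exact:

```python
total_float = 0.0   # dollars as a binary double
total_cents = 0     # the same amounts as integer cents
for _ in range(10):
    total_float += 0.10          # ten dimes as binary floats
    total_cents += 10            # ten dimes as integer cents
print(total_float)               # 0.9999999999999999
assert total_float != 1.0        # ten binary dimes miss a dollar
assert total_cents == 100        # integer cents are exact
```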

• Another good question, thanks.
