Fun with real and float

  • Dave62

    SSCertifiable

    Points: 6224

    Comments posted to this topic are about the item Fun with real and float

  • Hugo Kornelis

    SSC Guru

    Points: 64645

    There's probably a much deeper explanation than this, which I'm hoping will come to light in the discussion.

    Not much depth needed. See http://msdn.microsoft.com/en-us/library/ms173773.aspx and check the number of bytes used to represent real and float.


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Mighty

    SSCrazy Eights

    Points: 8369

    The precision of a REAL is always 7.

FLOAT(24) is a synonym for REAL. Don't understand why this is not explicitly mentioned in the MSDN article.

    Probably Hugo can explain us why the maximum precision is 7 for 4 byte storage. 😀

  • Lokesh Vij

    SSChampion

    Points: 10836

    Nice question! Thanks 🙂

    ~ Lokesh Vij


Guidelines for quicker answers on T-SQL questions
    Guidelines for answers on Performance questions

    Link to my Blog Post --> www.SQLPathy.com

    Follow me @Twitter

  • Danny Ocean

    SSCertifiable

    Points: 6098

    Good question with real fun 😀

    Thanks
    Vinay Kumar
    -----------------------------------------------------------------
    Keep Learning - Keep Growing !!!
    www.GrowWithSql.com

  • kapil_kk

    SSC-Insane

    Points: 21316

    Danny Ocean (6/30/2013)


    Good question with real fun 😀

    +1 😛

    _______________________________________________________________
    To get quick answer follow this link:
    http://www.sqlservercentral.com/articles/Best+Practices/61537/

  • Hugo Kornelis

    SSC Guru

    Points: 64645

    Mighty (6/30/2013)


    Probably Hugo can explain us why the maximum precision is 7 for 4 byte storage. 😀

    Floating point numbers are stored in "exponential notation" - that is the only way to be able to store both very small and very large values and still maintain precision.

Exponential notation, aka scientific notation, is easiest explained in decimal, so let's start there. If you need to do computations that involve the mass of the sun and the atomic weight of helium, it is impractical to work with values like 1989100000000000000000000000000000 g, or 0.00000000000000000000000664648 g. So instead, we use 1.9891 x 10^33 and 6.64648 x 10^-24, also written as 1.9891E33 and 6.64648E-24. The number before the E (the mantissa) and the number after the E (the exponent) are stored separately. If you did this with decimal values and you had two positions (plus an optional +/- sign) for the exponent and four positions (plus a +/- sign) for the mantissa, then your range of representable values would be 1.000E-99 to 9.999E+99 and -1.000E-99 to -9.999E+99.

Float works the same, but is base two. Both the mantissa and the exponent are stored as binary values, and the exponential calculation is also based on binary numbers. And I assume it also normalises the mantissa to be between 0.5 and 0.999999, because that makes arithmetic operations easier. So internally, a value like, for instance, (decimal) 27 would be represented as 0.11011E+101 - the exponent (+101) is the binary representation of 5, so the actual value represented here is 0.11011 (binary) x 2^5 (decimal) - or 11011 (binary), which is equal to 27 (decimal). (And the 0.11011 is actually stored as just 11011, so a mere five bits; the bit before the "decimal" separator is always zero and won't be stored.)
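This normalisation (a mantissa between 0.5 and 1, paired with a binary exponent) is exactly what Python's `math.frexp` exposes, so the 27 example can be checked directly. A Python illustration of the same decomposition, not SQL Server internals:

```python
import math

# Decompose 27 into mantissa * 2**exponent, with the mantissa
# normalised into [0.5, 1), matching the scheme described above.
mantissa, exponent = math.frexp(27.0)

print(mantissa)  # 0.84375, which is 0.11011 in binary
print(exponent)  # 5, so 27 = 0.11011 (binary) * 2**5
```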

If you check Books Online (http://msdn.microsoft.com/en-us/library/ms173773.aspx), you'll see that the values float can represent range between 1.18E-38 and 3.40E+38 (decimal based) (and obviously the same range for negative numbers). Compare this to the powers of two, and you'll see that 1.18E-38 corresponds to 0.5 x 2^-125 and 3.40E+38 to 1 x 2^128 (why mantissa 1 here? Because that is as close to binary 0.111111... as I can get). So obviously, the exponent is represented using eight bits: one for the sign, seven for the values 0 - 128. Two values are unused - maybe to handle under- and overflow? Or to enable future support for special cases like NaN or Inf? I don't know.
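Those endpoints can be checked against the 4-byte IEEE 754 single-precision format, which is the layout `real` uses. A quick Python sketch that decodes the largest finite and smallest normalised bit patterns:

```python
import math
import struct

# Largest finite single-precision value: bit pattern 0x7f7fffff.
max_single = struct.unpack('<f', b'\xff\xff\x7f\x7f')[0]
# Smallest normalised single-precision value: bit pattern 0x00800000.
min_single = struct.unpack('<f', b'\x00\x00\x80\x00')[0]

print(max_single)             # about 3.4028235e+38, i.e. the 3.40E+38 from Books Online
print(min_single)             # about 1.1754944e-38, i.e. the 1.18E-38 from Books Online
print(math.log2(max_single))  # just under 128
print(math.log2(min_single))  # exactly -126.0, which is 0.5 * 2**-125
```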

Eight bits, or a single byte, for the exponent means three bytes left for the mantissa. That allows SQL Server to store 2^24, or 16,777,216, different values. Enough to represent all possible decimal values with seven positions, not enough for all possible values with eight positions. Hence the maximum precision of seven.
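The 2^24 limit is easy to demonstrate: round-tripping a value through single precision (Python struct's `'f'` format, the same 4-byte layout as `real`) shows exactly where whole numbers stop being representable. A minimal sketch:

```python
import struct

def to_single(x):
    """Round-trip a Python float through 4-byte single precision."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_single(16777215.0))  # 16777215.0 - still exact (just below 2**24)
print(to_single(16777216.0))  # 16777216.0 - exactly 2**24
print(to_single(16777217.0))  # 16777216.0 - 2**24 + 1 is no longer representable
```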

(By the way, I think real should be considered to have a maximum precision of six; you should always calculate with at least one more digit than the precision to limit the effect of rounding errors - at least, that's what I was taught in high school.)


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Mighty

    SSCrazy Eights

    Points: 8369

    Hugo Kornelis (7/1/2013)


    Mighty (6/30/2013)


    Probably Hugo can explain us why the maximum precision is 7 for 4 byte storage. 😀

    Floating point numbers are stored in "exponential notation" - that is the only way to be able to store both very small and very large values and still maintain precision...

    Thanks Hugo,

    guess Microsoft can use copy and paste to add it to their documentation. 😉

  • Iulian -207023

    SSCertifiable

    Points: 7507

    Yes, very nice explanation.

    Thanks,

    Iulian

  • sharath.chalamgari

    SSCertifiable

    Points: 5680

    Perfect Explanation by Hugo

  • TomThomson

    SSC Guru

    Points: 104707

    Hugo Kornelis (7/1/2013)


    (By the way, I think real should be considered to have maximum precision six; you should always calculate with at least one more digit then the precision to limit the effect of rounding errors - at least, that's what has been taught to me in high school).

Up to that point I thought your explanation was great, but here you are confusing the precision of representation with the accuracy of results. They aren't the same thing. Besides, your teachers were remiss in telling you to use 1 rather than 2 or more - if you want final results accurate to N decimal places, all starting values and intermediate results should be represented to a precision of at least N+2 decimal places unless the calculation to be carried out is utterly trivial. It's fine to work with N+1 places when the calculation is something really very simple, like a single multiplication, but if you are trying to compute the eigenvectors or the inverse of a 3 x 3 matrix (both calculations which are extremely common in elementary mechanics), working with N+1 places may give you horribly inaccurate results. 32-bit floating point with a precision of 7 decimal places was intended to carry out complex calculations delivering results to a guaranteed 3 decimal places of accuracy - an N+4 design rather than N+1 - and of course less complex calculations to greater guaranteed accuracy.

    The very earliest floating point hardware implementation (Zuse, 1941) was neither electronic nor part of a stored program computer and had much smaller precision (15 bits - so nominally decimal precision 3) and hence wasn't useful for any complex calculations unless a great lack of accuracy was acceptable.

    Tom

  • webrunner

    One Orange Chip

    Points: 29813

    Yes, thanks to Hugo for the detailed explanation. I agree with those who say that the Microsoft documentation could use Hugo's addition!

    I think the following queries show the switchover point where the REAL number is returned as 100.

-- Num1 = 99.99999
    CREATE TABLE #QotD ( Num1 REAL, Num2 FLOAT );
    INSERT INTO #QotD VALUES ( 99.9999961, 99.99999999997 );
    SELECT * FROM #QotD;
    DROP TABLE #QotD;

    -- Num1 = 100
    CREATE TABLE #QotD ( Num1 REAL, Num2 FLOAT );
    INSERT INTO #QotD VALUES ( 99.9999962, 99.99999999997 );
    SELECT * FROM #QotD;
    DROP TABLE #QotD;
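The same switchover can be reproduced outside SQL Server, since `real` is 4-byte IEEE 754 single precision: the representable value just below 100 is 100 - 2^-17, and anything above the midpoint rounds up to 100. A Python sketch using a single-precision round-trip:

```python
import struct

def to_single(x):
    """Round-trip a value through 4-byte single precision (like REAL)."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_single(99.9999961))  # 99.99999237060547 - the nearest REAL below 100
print(to_single(99.9999962))  # 100.0             - past the midpoint, rounds up
```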

    Thanks again.

    - webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and says Can I join you?
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

  • Revenant

    SSC-Forever

    Points: 42468

    L' Eomot Inversé (7/1/2013)


    . . . The very earliest floating point hardware implementation (Zuse, 1941) was neither electronic nor part of a stored program computer and had much smaller precision (15 bits - so nominally decimal precision 3) and hence wasn't useful for any complex calculations unless a great lack of accuracy was acceptable.

    Z1 was built in 1936.

    http://en.wikipedia.org/wiki/Konrad_Zuse

  • TomThomson

    SSC Guru

    Points: 104707

    Revenant (7/1/2013)


    L' Eomot Inversé (7/1/2013)


    . . . The very earliest floating point hardware implementation (Zuse, 1941) was neither electronic nor part of a stored program computer and had much smaller precision (15 bits - so nominally decimal precision 3) and hence wasn't useful for any complex calculations unless a great lack of accuracy was acceptable.

    Z1 was built in 1936.

    http://en.wikipedia.org/wiki/Konrad_Zuse

I should have said "commercial hardware implementation" instead of just "hardware implementation", because I was excluding the Z1 (which frequently didn't work, or so I was told, so I don't regard it as a real implementation) and the Z2 (which was only a one-off prototype for the Z3, and a demonstration machine for the German military, not a properly engineered machine).

    Although construction of the Z1 started in 1936, it didn't finish until 1938 and even when finished the machining of its parts wasn't clean/accurate enough so the machine frequently went wrong (or at least so I was taught long ago). There was no working Z1 until Siemens worked with Zuse to build one that actually worked (rather than reconstructing one that worked sometimes) in the late 1980s (a valuable history project, I think).

The Z2 (finished in 1941) was another one-off prototype, with no future - it sprawled over too large a space and was very much a string-and-sealing-wax piece of engineering - in effect it was the testbed in which some of the Z3's design was verified.

    The first working machine, which actually went into production, was the Z3 (also 1941), and that was the machine I meant.

    Although it wasn't what we today call an electronic stored program computer, it had a significant electrical component (mechanical relays) and did store program in its electromechanical store.

    Zuse had patented pretty well all of what is known as Von Neumann architecture a few years before Williams and Kilburn started developing a reliable non-mechanical store for a stored-program computer which in turn was a year or two before Von Neumann wrote his report. In fact floating point was probably the one area (apart, of course, from politics) where Von Neumann disagreed with Zuse - he was famously anti-floating-point.

    Tom

  • Revenant

    SSC-Forever

    Points: 42468

    L' Eomot Inversé (7/1/2013)


    Revenant (7/1/2013)


    L' Eomot Inversé (7/1/2013)


    . . . The very earliest floating point hardware implementation (Zuse, 1941) was neither electronic nor part of a stored program computer and had much smaller precision (15 bits - so nominally decimal precision 3) and hence wasn't useful for any complex calculations unless a great lack of accuracy was acceptable.

    Z1 was built in 1936.

    http://en.wikipedia.org/wiki/Konrad_Zuse

    I should have said "commercial hardware implementation" instead of just "hardware implementation" . . .

    Tom, as I said already several times, when you come to Redmond, or to the NW from Portland to Vancouver, we have to get together for drinks and I am buying.

Viewing 15 posts - 1 through 15 (of 21 total)
