Decimal/Float precision

  • Hi, I hope someone can help me out here.

    I'm doing a bit of work with latitude/longitudes and calculating the difference between them. Precision is very important during the calculations.

    I was hoping that someone may be able to clarify a few things for me.

    My understanding is that the float datatype is an approximate numeric, whereas the decimal datatype is exact.

    However, if I do this calculation:

    DECLARE @myFloat FLOAT(8), @myDecimal DECIMAL(17)

    SET @myFloat = deg2rad(55.164368)

    SET @myDecimal = deg2rad(55.164368)

    PRINT @myFloat

    PRINT @myDecimal

    The actual answer is 0.962799851382

    @myFloat displays 0.9628 which appears to be rounded up.

    and

    @myDecimal displays 1 which again appears to be rounded up.

    My question is: if decimal is more precise, why do I get the rounded-up value?

    Any help would be appreciated.

    Thanks very much for your time.

    Regards,

    Kevin.

    Windows 2008 Server | SQL Server 2008

  • Decimal consists of 2 parts: precision and scale.

    You have specified a precision of 17 and a scale of 0 (the default), which means no decimal places (e.g. 12345678901234567).

    Now, if you used DECIMAL(19, 9) for example, you could have 1234567890.123456789: ten digits before the decimal point and nine after (a short sketch follows at the end of this post).

    It would have been quicker for you to look this up in BOL (Books Online). RTFM!
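
    To see the effect described above without relying on the OP's deg2rad function, here is a minimal sketch: assign the same value to two DECIMAL variables that differ only in scale (the variable names are just for illustration).

    DECLARE @noScale   DECIMAL(17),     -- precision 17, scale 0 (the default)
            @withScale DECIMAL(19, 9)   -- precision 19, scale 9

    SET @noScale   = 0.962799851382
    SET @withScale = 0.962799851382

    PRINT @noScale    -- 1 (rounded to zero decimal places)
    PRINT @withScale  -- 0.962799851 (nine decimal places kept)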

  • Thank you Shawn. Much appreciated.

    Windows 2008 Server | SQL Server 2008

    You use FLOAT to do the calculation, because most T-SQL mathematical functions work in FLOAT, but you cast the result to DECIMAL (or NUMERIC), where you can set the precision and scale. One more thing you need to know: you have to use the correct LOG function to get the expected results. In SQL Server, LOG() is the natural (base e) logarithm and LOG10() is base 10, so if you use the wrong one your numbers will be off. Run a search for LOG in SQL Server BOL (Books Online); a short sketch follows at the end of this post. Hope this helps.

    Kind regards,

    Gift Peddie

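    A rough sketch of both points above, using the built-in RADIANS() function in place of the OP's deg2rad (the values in the comments are approximate):

    DECLARE @radians FLOAT

    -- Do the maths in FLOAT, then CAST to DECIMAL to control precision and scale
    SET @radians = RADIANS(55.164368e0)          -- e0 makes the literal a FLOAT
    PRINT CAST(@radians AS DECIMAL(13, 12))      -- approximately 0.962799851382

    -- LOG() is the natural (base e) logarithm; LOG10() is base 10
    PRINT LOG(10e0)      -- ln(10), roughly 2.302585
    PRINT LOG10(10e0)    -- log10(10) = 1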
