translation to hours

  • Sergiy - Tuesday, May 9, 2017 10:02 PM

    DesNorton - Tuesday, May 9, 2017 1:04 PM

    I have had issues in the past with converting from int to float to decimal.  This was a case of selecting an INT value from a table and passing it to a proc as a parameter with the FLOAT data type, which then inserted it into another table with a DECIMAL(18,4) data type.
    This resulted in intermittent cases of values less than the original INT (e.g. 10 would become 9.9999).
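
    Something like this minimal sketch of that chain (all object and parameter names here are hypothetical):

    -- INT value -> FLOAT parameter -> DECIMAL(18,4) column
    CREATE TABLE dbo.Source (Val INT);
    CREATE TABLE dbo.Target (Val DECIMAL(18,4));
    GO
    CREATE PROCEDURE dbo.InsertTarget @Val FLOAT
    AS
        INSERT INTO dbo.Target (Val) VALUES (@Val);    -- implicit FLOAT -> DECIMAL(18,4)
    GO
    INSERT INTO dbo.Source (Val) VALUES (10);

    DECLARE @i INT = (SELECT TOP (1) Val FROM dbo.Source);
    EXEC dbo.InsertTarget @Val = @i;                   -- implicit INT -> FLOAT

    SELECT Val FROM dbo.Target;                        -- returns 10.0000 when run as-is; the 9.9999 cases were intermittent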

    For this reason, I no longer use FLOAT in any of my code.

    The issue is not with FLOAT but with implicit conversions done without a second thought.
    And did it not occur to you that the problem may lie not with insufficient precision of the FLOAT representation of an INT, but with insufficient precision of the DECIMAL representation of a FLOAT?

    The actual issue with precision appears when you do:
    declare @I int
    set @I = 10
    select convert(decimal(19,4), @I) / 3.0 * 3.0 DecCalculation,   -- decimal division is capped at a fixed result scale, so 10/3 is cut off before the multiplication
           convert(float, @I) / 3.0 * 3.0 FloatCalculation          -- the float calculation rounds back to exactly 10

    DecCalculation    FloatCalculation
        9.99999990    10

    Now tell me how the decimal data type provides better precision.

    Your example illustrates the precision of floating point arithmetic.  I accept that as proof of its accuracy for calculations.

    That said, in my case there was no calculation, but rather a simple cast from one data type to another.  The issue was intermittent, and I was never able to consistently reproduce it.  At the time, we changed the data type of the parameter from float to decimal, and that resolved our issue.  Due to that experience, I will continue to avoid using float.  However, I will refrain from advising others to avoid it.

  • Your case only indicates that the DECIMAL data type does not have sufficient precision to correctly represent FLOAT values.

    Yes, a number loses precision when converted from FLOAT to DECIMAL.

    Considering that fact, you should always round to the correct number of digits instead of simply CASTing.
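
    A minimal sketch of that advice (the value here is just illustrative):

    DECLARE @f FLOAT = 123.456789;

    SELECT CONVERT(DECIMAL(18,4), @f)           AS ImplicitScale,    -- scale left to the implicit conversion
           CONVERT(DECIMAL(18,4), ROUND(@f, 4)) AS ExplicitRounding; -- scale chosen deliberately: 123.4568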

    As for the OP's case - there is a division in the statement.

    Exactly the case where DECIMAL should never be used.

    And it actually would not be used.

    Division is a floating point operation, so if you use the DECIMAL data type for the operands, they will be implicitly converted to FLOAT behind the scenes.

    Implicit conversion means almost certain loss of precision.

    If you assign the result to a DECIMAL value, that's another loss of precision.

    If you have several consecutive calculations, you'll multiply the losses.

    So, if you care about the accuracy of your calculations, always use FLOAT when multiplications/divisions are present in a formula.
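
    A sketch of that approach with a hypothetical seconds-to-hours calculation: do the arithmetic in FLOAT, and round/convert to DECIMAL only for the final result.

    DECLARE @Seconds INT = 9000;

    SELECT CONVERT(DECIMAL(10,4), @Seconds) / 3600.0                           AS DecimalAllTheWay, -- result scale governed by DECIMAL division rules
           CONVERT(DECIMAL(10,4), ROUND(CONVERT(FLOAT, @Seconds) / 3600.0, 4)) AS FloatThenRound;   -- 2.5000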

    _____________
    Code for TallyGenerator
