• Solomon Rutzky (8/30/2016)


    It strikes me as odd to hear that the FLOAT / REAL datatype is "precise" when by definition it is "imprecise". The following quote is taken from the MSDN page for float and real:

    Approximate-number data types for use with floating point numeric data. Floating point data is approximate; therefore, not all values in the data type range can be represented exactly.

    It's not the only rubbish you can find on MSDN.

    What's the point of all those extra "precise" digits in the decimal rate if, after just 5 simple arithmetic operations, you're $410 off the correct result?

    I see you agreed that DECIMAL "precision" is fake.
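
    Just to sketch what that truncation looks like (an assumed, trivial no-op calculation, not my test): SQL Server recalculates the precision and scale of every intermediate DECIMAL result and cuts the scale down whenever the computed precision exceeds 38, while FLOAT carries about 15 significant digits all the way through.

        -- Assumed, trivial no-op calculation: divide by 3, then multiply back by 3
        DECLARE @d decimal(38,20) = 1.0;
        DECLARE @f float          = 1.0;

        SELECT (@d / 3.0) * 3.0 AS decimal_result, -- slightly less than 1: the scale of the
                                                   -- intermediate result got cut down
               (@f / 3.0) * 3.0 AS float_result;   -- comes back as 1

    Every extra operation is another chance for a DECIMAL intermediate result to lose digits.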

    Well, I would say that your test is misleading, as nobody does (or hopefully nobody does) financial calculations in that manner.

    In what manner would you do it?

    What you are calling "rate" is really the payment amount, which is a currency, which needs to be rounded to 2 decimal places.

    No, it's definitely a rate.

    You'll get the amount when you multiply the rate by the number of terms the payment is supposed to cover.
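
    To show why that distinction matters (assumed figures, purely for illustration, not taken from any real schedule): round the rate to 2 decimal places the way you'd round a currency amount, and whatever you cut off gets multiplied by the number of terms when you compute the amount.

        -- Hypothetical per-term rate and term count, for illustration only
        DECLARE @rate  float = 1234.56789;
        DECLARE @terms int   = 360;

        SELECT @rate * @terms                     AS amount_from_exact_rate,
               ROUND(@rate, 2) * @terms           AS amount_from_rounded_rate,
               (ROUND(@rate, 2) - @rate) * @terms AS error_scaled_by_terms; -- grows with every term

    Round the final amount by all means, but round the rate and the error grows with every term.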

    So if that is $410 off, then it is due to bad code, not a bad datatype.

    Can you illustrate this?

    The code appears to be quite good when the FLOAT data type is used, but fails miserably in the case of DECIMAL.

    Can you prove otherwise?

    _____________
    Code for TallyGenerator