• djackson 22568 (7/7/2016)


    I am not sure of the background education of everyone here, but my guess is that at least a few people are unaware of the issues that arise when a processor does arithmetic.

    While processors are exact when it comes to integer (binary) arithmetic, they are not exact for many floating point calculations. One of my professors really enjoyed showing us that there is a margin of error. If you understand the conversion between binary and base 10, it becomes obvious why this is true: most decimal fractions, such as 0.1, have no finite binary representation, so they are rounded the moment they are stored. It has been too long for me to give a more educated explanation of the reasoning behind it, but a Google search would provide plenty of detail.
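
    To see this in action, here is a minimal sketch in Python, whose float type is an IEEE 754 binary64 double (the same representation as SQL's 8-byte float):

        # 0.1 and 0.2 have no exact binary representation, so each is
        # rounded on input; the sum carries that rounding error along.
        print(0.1 + 0.2)          # 0.30000000000000004
        print(0.1 + 0.2 == 0.3)   # False

        # What is actually stored for 0.1 is very slightly more than 1/10:
        from decimal import Decimal
        print(Decimal(0.1))  # 0.1000000000000000055511151231257827021181583404541015625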

    The current floating point standard (IEEE 754-2008; decimal support traces back to IEEE 854-1987) provides for numbers where the exponent indicates a power of 10 as well as for numbers where it indicates a power of 2. The decimal formats eliminate the errors caused by base conversion, because they eliminate the representation deficiencies which cause those errors. If you have hardware that supports them (and software that can use that hardware), the conversion errors disappear. There will of course still be rounding when it's needed, much as there is in arithmetic using the decimal/numeric fixed point type supported by SQL. It is impossible to represent a number like 1/3 exactly in decimal or binary, whether floating point or fixed point; for that a proper rational notation is needed (either a numerator/denominator pair, or a floating point format with an extra field indicating the base used), and algebraic numbers and transcendental numbers create some additional interesting problems.
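
    As an illustrative sketch (not how any particular database engine implements it): Python's decimal module follows the same decimal arithmetic model that IEEE 754-2008 standardized, and its fractions module supplies the numerator/denominator representation mentioned above.

        from decimal import Decimal, getcontext
        from fractions import Fraction

        # Decimal floating point: 0.1 is representable exactly, so the
        # base conversion error of binary floats disappears.
        print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

        # Rounding still happens when a result needs more digits than
        # the working precision allows, much as with SQL's
        # decimal/numeric type.
        getcontext().prec = 6
        print(Decimal(1) / Decimal(3))     # 0.333333 (rounded)

        # 1/3 is exact only in a rational representation.
        third = Fraction(1, 3)
        print(third + third + third == 1)  # True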

    The important thing to note is that it is absolutely possible for a computer to perform calculations and come up with a wrong answer.
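
    A small sketch of how such wrong answers surface in practice, again using Python's binary doubles:

        # Adding 0.1 one hundred times should give exactly 10.0, but each
        # addition rounds to the nearest representable binary double, and
        # the tiny errors accumulate.
        total = 0.0
        for _ in range(100):
            total += 0.1
        print(total)          # typically 9.99999999999998 on IEEE 754 hardware
        print(total == 10.0)  # False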

    Another important thing to remember is that it is equally possible for people to give the computer the wrong instructions for the task at hand. That is probably a bigger problem than the computer's limitations.

    Tom