Loss of precision when multiplying

  • I am currently working with a database that has all DECIMAL types set to a precision and scale of 38 and 18. Though I am fully aware that there might be a loss of precision when multiplying two numbers, SQL Server, when multiplying, truncates the numbers before the operation, resulting in a bigger loss of precision.

    For example,

    SELECT 1000000000000.123456 * 10000.123456

    returns «10000123456001234.575241383936», which is exact, while

    SELECT CONVERT(DECIMAL(38, 18), 1000000000000.123456) * CONVERT(DECIMAL(38, 18), 10000.123456)

    returns the truncated value «10000123456001234.575241»

    When working in CLR, the SqlDecimal class has methods (ConvertToPrecScale and AdjustScale) that let you change the precision or scale of a decimal value. The problem is that I basically just want to remove leading and trailing zeros, while the two aforementioned methods want to know an exact position.

    I came up with the following function:


            internal static SqlDecimal PackPrecScale(SqlDecimal d)
            {
                //Work from the string representation of the value
                var s = d.ToString();
                var l = s.Length;
                var indexofperiod = s.IndexOf('.');

                //Remove trailing zeros
                if (indexofperiod != -1)
                {
                    while (s[l - 1] == '0') l--;
                    if (s[l - 1] == '.')
                    {
                        l--;
                        indexofperiod = -1;
                    }
                }

                var precision = 6;
                var scale = 0;
                if (l > 0)
                {
                    precision = l;
                    if (s[0] == '-') precision--;
                    if (indexofperiod != -1)
                    {
                        precision--;
                        scale = l - indexofperiod - 1;
                    }
                    if (precision < 6) precision = 6;
                }

                return SqlDecimal.ConvertToPrecScale(d, precision, scale);
            }

      

    That returns a decimal having the smallest possible precision and scale. To multiply a and b, I can then write:

    SqlDecimal.ConvertToPrecScale(SqlDecimal.Multiply(PackPrecScale(a), PackPrecScale(b)), 38, 18)

    This keeps maximum precision while multiplying.
    Is there a simpler or safer way to «pack» a SqlDecimal?

  • This gives the same accuracy as your first 'exact' example:

    SELECT CAST(1000000000000.123456 AS DECIMAL(29,12)) * CAST(10000.123456 AS DECIMAL(29,12))

    This code helped me determine the CASTs to use:

    DECLARE @numval SQL_VARIANT = 10000123456001234.575241383936;

    SELECT SQL_VARIANT_PROPERTY(@numval, 'BaseType');

    SELECT SQL_VARIANT_PROPERTY(@numval, 'Precision');

    SELECT SQL_VARIANT_PROPERTY(@numval, 'Scale');
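
    The three properties can also be read back in a single row. The expected values in the comments are mine (from counting the 17 integral and 12 fractional digits of the literal), and they are where the DECIMAL(29,12) in the CASTs above comes from:

    SELECT SQL_VARIANT_PROPERTY(@numval, 'BaseType')  AS [BaseType],   -- numeric
           SQL_VARIANT_PROPERTY(@numval, 'Precision') AS [Precision],  -- 29
           SQL_VARIANT_PROPERTY(@numval, 'Scale')     AS [Scale];      -- 12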

    If you haven't even tried to resolve your issue, please don't expect the hard-working volunteers here to waste their time providing links to answers which you could easily have found yourself.

  • I was expecting this to be a case of the OP using float when I first opened the topic; it was nice to see that wasn't the case.

    To cover why this happens, however, this is actually explained in the documentation: Precision, scale, and length (Transact-SQL). Specifically it notes:

    In multiplication and division operations we need precision - scale places to store the integral part of the result. The scale might be reduced using the following rules:

    1. The resulting scale is reduced to min(scale, 38 – (precision-scale)) if the integral part is less than 32, because it cannot be greater than 38 – (precision-scale). Result might be rounded in this case.
    2. The scale will not be changed if it is less than 6 and if the integral part is greater than 32. In this case, overflow error might be raised if it cannot fit into decimal(38, scale)
    3. The scale will be set to 6 if it is greater than 6 and if the integral part is greater than 32. In this case, both integral part and scale would be reduced and resulting type is decimal(38,6). Result might be rounded to 6 decimal places or overflow error will be thrown if integral part cannot fit into 32 digits.

    The third rule is the relevant part here.

    As the integral part of the result is greater than 32 and the scale is greater than 6 (multiplying a decimal(38,18) by a decimal(38,18) calls for precision 38 + 38 + 1 = 77 and scale 18 + 18 = 36, leaving 41 digits for the integral part), the scale is automatically set to 6, which is why you only get .~575241 (to 6 decimal places). A reason why you should always make sure you use relevant sizes for your datatypes.
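
    You can confirm the resulting type with the same SQL_VARIANT_PROPERTY trick from the previous reply. A quick sketch (the expected values in the comments are mine):

    DECLARE @product SQL_VARIANT =
        CONVERT(DECIMAL(38, 18), 1000000000000.123456) * CONVERT(DECIMAL(38, 18), 10000.123456);

    SELECT SQL_VARIANT_PROPERTY(@product, 'BaseType')  AS [BaseType],   -- decimal
           SQL_VARIANT_PROPERTY(@product, 'Precision') AS [Precision],  -- 38
           SQL_VARIANT_PROPERTY(@product, 'Scale')     AS [Scale];      -- 6, per rule 3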

    In this case, declaring a decimal as a decimal(38,18) when you're only using a precision of 19 and a scale of 6 means you're not using a relevant size. In fact 38 is double the precision of 19, and 18 is triple the scale of 6, so in a way the size is far more than it should be (yes, I realise that technically the storage is only about double, but oversizing both the precision and the scale kind of compounds how bad it is 😉).

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Thom A - Friday, June 29, 2018 9:59 AM

    Yes, this is absolutely correct. It is best to use the smallest Precision and Scale for each operand. So, even going with the size of the resulting expression (as Phil did above) is dangerous and could lead to unnecessary truncation.

    Ideally you would be able to do something like this:

    SELECT CONVERT(DECIMAL(19, 6), 1000000000000.123456) * CONVERT(DECIMAL(11, 6), 10000.123456)
    -- 10000123456001234.575241383936
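
    Checking that product with the SQL_VARIANT_PROPERTY trick from earlier should show why nothing is lost (a rough sketch; the expected values in the comments are mine):

    DECLARE @product SQL_VARIANT =
        CONVERT(DECIMAL(19, 6), 1000000000000.123456) * CONVERT(DECIMAL(11, 6), 10000.123456);

    SELECT SQL_VARIANT_PROPERTY(@product, 'Precision') AS [Precision],  -- 31 (19 + 11 + 1), under the 38 cap
           SQL_VARIANT_PROPERTY(@product, 'Scale')     AS [Scale];      -- 12 (6 + 6), so no reduction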

    But when storing decimal values, yeah, it kinda makes sense that people use the largest possible container if it is uncertain up front what the maximum size of the values to be stored will be.

    I don't know of a built-in way to shrink the Precision and Scale down to the minimum required sizes, but I can see some improvements to make in the O.P.'s code. It shouldn't be necessary to loop or to test for a leading minus / negative sign. Try this:

    // convert to string and remove potential negative sign
    string _InputString = SqlDecimal.Abs(_Input).ToString();

    // remove trailing 0's (only on the right side)
    _InputString = _InputString.TrimEnd(new char[] { '0' });

    // subtract 1 for the decimal point (assumes the value has a non-zero scale,
    // as the DECIMAL(38, 18) values here do, so ToString() always includes a '.')
    int _Precision = _InputString.Length - 1;

    // the decimal point is usually closer to the right side, so LastIndexOf (which
    // searches from the end) will typically find it sooner than IndexOf would
    int _Scale = (_Precision - _InputString.LastIndexOf('.'));

    return SqlDecimal.ConvertToPrecScale(_Input, _Precision, _Scale);

    Take care,
    Solomon...

    SQL# https://SQLsharp.com/ (SQLCLR library of over 340 Functions and Procedures)
    Sql Quantum Lift https://SqlQuantumLift.com/ (company)
    Sql Quantum Leap https://SqlQuantumLeap.com/ (blog)
    Info sites: Collations • Module Signing • SQLCLR
