• Explanation


    When precision is not specified with the decimal type, T-SQL will use whatever precision takes the minimum space to store the number being converted.

    This is not correct. SQL Server does not use "whatever precision" in this case. Here is the quote from BOL about precision (http://msdn.microsoft.com/en-us/library/ms187746.aspx):

    decimal and numeric (Transact-SQL)


    p (precision)

    The maximum total number of decimal digits that can be stored, both to the left and to the right of the decimal point. The precision must be a value from 1 through the maximum precision of 38. The default precision is 18.

    So when the precision is not specified, SQL Server uses the default value of 18.
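
    This is easy to check with SQL_VARIANT_PROPERTY, which reports the metadata of a value (a small illustrative snippet; the variable name is arbitrary):

    DECLARE @var DECIMAL(8, 2)
    SET @var = 275953.00

    SELECT SQL_VARIANT_PROPERTY(CONVERT(DECIMAL, @var), 'BaseType')  AS BaseType,   -- decimal
           SQL_VARIANT_PROPERTY(CONVERT(DECIMAL, @var), 'Precision') AS [Precision] -- 18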

    Another quote from BOL:

    s (scale)

    The maximum number of decimal digits that can be stored to the right of the decimal point. ... The default scale is 0; therefore, 0 <= s <= p.

    Thus the expression "CONVERT(DECIMAL, @var)" is equivalent to "CONVERT(DECIMAL(18,0), @var)".
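
    The default scale can be checked the same way, and both forms of CONVERT return the same value (again just an illustrative snippet):

    SELECT SQL_VARIANT_PROPERTY(CONVERT(DECIMAL, 275953.00), 'Scale') AS Scale,  -- 0
           CONVERT(DECIMAL, 275953.00)        AS DefaultPrecision,               -- 275953
           CONVERT(DECIMAL(18,0), 275953.00)  AS ExplicitPrecision               -- 275953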

    The batch from the example is equivalent to:

    DECLARE @TestDecimal DECIMAL(8, 2)
    SET @TestDecimal = 275953.00

    SELECT CONVERT( DECIMAL(18,0), @TestDecimal * 0.40 ),
           CONVERT( DECIMAL(18,0), @TestDecimal ) * 0.40
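
    Running this batch returns 110381 for the first expression and 110381.20 for the second.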

    Bhavesh_Patel asked:


    Why does the answer differ when we use the same function on the same variable?

    In the first expression, the result of the multiplication is converted to DECIMAL:

    CONVERT(DECIMAL, @TestDecimal * 0.40) = CONVERT(DECIMAL(18,0), 275953.00 * 0.40) = CONVERT(DECIMAL(18,0), 110381.2000) = 110381

    Because the target type has scale 0, the fractional part .2000 is lost in the conversion.
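
    These intermediate values are easy to reproduce (a small sketch; the DECLARE is repeated so the snippet runs on its own):

    DECLARE @TestDecimal DECIMAL(8, 2)
    SET @TestDecimal = 275953.00

    SELECT @TestDecimal * 0.40                           AS Product,    -- 110381.2000
           CONVERT( DECIMAL(18,0), @TestDecimal * 0.40 ) AS Converted   -- 110381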

    In the second expression, the variable itself is converted to DECIMAL before the multiplication:

    CONVERT(DECIMAL, @TestDecimal) * 0.40 = CONVERT(DECIMAL(18,0), @TestDecimal) * 0.40 = 275953 * 0.40 = 110381.20

    Here the conversion loses nothing, because 275953.00 has no fractional part, and the result of the multiplication keeps its two decimal places.
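
    The same kind of check for the second expression (again a small self-contained snippet):

    DECLARE @TestDecimal DECIMAL(8, 2)
    SET @TestDecimal = 275953.00

    SELECT CONVERT( DECIMAL(18,0), @TestDecimal )        AS ConvertedValue,  -- 275953
           CONVERT( DECIMAL(18,0), @TestDecimal ) * 0.40 AS ThenMultiplied   -- 110381.20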