Casting

  • Comments posted to this topic are about the item Casting

  • Nice little question, which could easily trip one up!! Thank you.

    ...

  • Great question, thanks.

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • I must be missing the explanation here?

    This page https://msdn.microsoft.com/en-us/library/ms191530.aspx says that implicit conversion from char, varchar, nchar, nvarchar is supported to decimal, numeric, float, real, bigint, int, smallint, etc.

    But when I test it, the conversion only works if the string has exactly one digit before the decimal point, which implies to me that the Microsoft documentation is a pack of lies.

    Can somebody please explain why it doesn't work when the number rounds up beyond a single digit before the decimal place?

    So '9.9' works correctly, and '9.9499999999999' works but is rounded to -9.900000, while '9.95' fails with a numeric overflow.

    For anybody interested in other database systems: DB2 has no problem returning the "correct" answer in all scenarios that I tested. Oracle, while correct for this case, in fact rounds to 8 digits, which is also weird behaviour and unacceptable if doing accounting, but at least it doesn't fail.
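    A minimal sketch of the behaviour described above, assuming SQL Server: the literal -1.0 is inferred as numeric(2,1), so that type becomes the target of the implicit conversion of the string.

    ```sql
    -- -1.0 is inferred as numeric(2,1), so the string operand is implicitly
    -- converted to numeric(2,1), whose range is -9.9 to 9.9.
    SELECT '9.9'  * -1.0;  -- fits
    SELECT '9.94' * -1.0;  -- rounds to 9.9 during conversion, then negates
    SELECT '9.95' * -1.0;  -- rounds to 10.0: arithmetic overflow error

    -- An explicit cast chooses a roomier conversion target and avoids the overflow:
    SELECT CAST('9.95' AS numeric(10,4)) * -1.0;
    ```

    The rounding happens during the string-to-numeric conversion, before the multiplication, which is why '9.9499999999999' succeeds while '9.95' fails.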

  • matthew.flower (10/26/2015)


    I must be missing the explanation here?

    The explanation tells you where the problem is, but not what it is.

    select '19.4615381' * -1.0; uses constant expressions, for which SQL Server has to infer the data type. For '19.4615381', the inferred data type is varchar(10); for -1.0 it is numeric(2,1).

    To determine the data type of the result, two BOL pages are important:

    * Data Type Precedence (https://msdn.microsoft.com/en-us/library/ms190309.aspx) - specifies which data type gets converted when incompatible data types collide.

    * Precision, Scale, and Length (https://msdn.microsoft.com/en-us/library/ms190476.aspx) - specifies the resulting data type based on the operation and the input data types.

    In this case, we combine varchar(10) with numeric(2,1), so the rules of data type precedence state that the varchar(10) has to be converted to numeric(2,1). But that data type only allows values in the range from -9.9 to 9.9, so the value 19.4615381 causes an error.

    It gets even more interesting if we make a small change: select '19.4615381' * -10.0; will not throw a run-time error; it returns -195.00. Here the inferred data type for -10.0 is numeric(3,1), which allows -99.9 to 99.9; 19.4615381 falls in that range but loses precision, so it is rounded and converted to 19.5. This is then multiplied by -10.0; the rules in the second link state that in this case the result of the multiplication is numeric(7,2), so no further loss of precision will occur.

    A very simple and quick way to find out what data types SQL Server infers from constants or uses for the result of any expression is to drop the values in a table and then run sp_help, as shown here:

    SELECT '19.4615381' AS a,
           -1.0 AS b,
           '19.4615381' * -1.0 AS c
    INTO dbo.x;
    EXEC sp_help dbo.x;
    DROP TABLE dbo.x;
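    Building on the precision/scale rules above, an explicit cast lets you pick the conversion target yourself. This is a sketch; numeric(9,7) is chosen here only because it holds 19.4615381 exactly.

    ```sql
    -- The string converts to numeric(9,7), not to the inferred numeric(2,1),
    -- so 19.4615381 fits without overflow. Per the multiplication rules
    -- (precision = p1 + p2 + 1, scale = s1 + s2), the result type is
    -- numeric(12,8).
    SELECT CAST('19.4615381' AS numeric(9,7)) * -1.0;  -- -19.46153810
    ```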


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Hugo Kornelis (10/26/2015)



    Hugo, as always, your explanations are comprehensive and informative: thank you for the expanded write-up about implicit type conversions. The (to me) uncertain outcomes of implicit typing in SQL Server have ironically made me a better DBA over time: use explicit typing/casting. 😉

    The SELECT INTO method to determine the implicit types created by SQL Server is very useful! Two minor notes to anyone else running this: (1) the SELECT...INTO will throw the same error as the original question, of course, but the table is still created, and (2) you'll need quotes around the schema-qualified name for sp_help [EXEC sp_help 'dbo.x';]

    Rich

  • rmechaber (10/26/2015)


    The SELECT INTO method to determine the implicit types created by SQL Server is very useful! Two minor notes to anyone else running this: (1) the SELECT...INTO will throw the same error as the original question, of course, but the table is still created, and (2) you'll need quotes around the schema-qualified name for sp_help [EXEC sp_help 'dbo.x';]

    Thanks for that useful addition, Rich!

    I meant to add that the error does not prevent the table from being created. And I totally missed the quotes. I always run any code before posting. In this case, I had - but I had been sloppy and not included the schema. So before posting, I added the schema, not realizing that without the schema the quotes are not needed (thanks to the wonders of implicit type conversion), but as soon as a dot appears in the name they are. My bad, and thanks for putting me straight!


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • try select cast('19.4615381' * -1.0 * 1.00000 as numeric(18,6))

    --------------------------------------
    ;-)“Everything has beauty, but not everyone sees it.” ― Confucius

  • chgn01 (10/26/2015)


    try select cast('19.4615381' * -1.0 * 1.00000 as numeric(18,6))

    Well that's nice. As noted above, implicit conversions can really foul you up.

    Don Simpson



    I'm not sure about Heisenberg.

  • Really good question and something to watch out for.

    Don, you are right; we should try to use only explicit conversions, for better performance.

    --------------------------------------
    ;-)“Everything has beauty, but not everyone sees it.” ― Confucius
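    As a sketch of the explicit-conversion advice above: TRY_CAST (available from SQL Server 2012 onward) makes a failed conversion visible without aborting the batch.

    ```sql
    -- TRY_CAST returns NULL instead of raising an error when the conversion
    -- fails, which makes overflow-prone string input safer to handle.
    SELECT TRY_CAST('19.4615381' AS numeric(2,1));          -- NULL (would overflow)
    SELECT TRY_CAST('19.4615381' AS numeric(18,6)) * -1.0;  -- succeeds, rounded to scale 6
    ```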

  • Nice question - thanks!

    - webrunner

    -------------------
    A SQL query walks into a bar and sees two tables. He walks up to them and asks, "Can I join you?"
    Ref.: http://tkyte.blogspot.com/2009/02/sql-joke.html

  • Hugo Kornelis (10/26/2015)



    Thanks. Very nice explanation.
