August 25, 2022 at 8:05 pm
I thought "cloud" was a polite way of saying "smoke and mirrors" where your wallet gets smoked and they use mirror to deflect your billing questions. 😀
--Jeff Moden
Change is inevitable... Change for the better is not.
August 26, 2022 at 2:13 pm
Just reiterating a question I posted on the 2019 Administration forum about SSMS backward compatibility; does anyone have any input?
😎
The issue is that I'm trying to simplify an environment where SSMS 2008R2, 2012 and 2019 are currently used for accessing the corresponding versions of SQL Server.
September 4, 2022 at 4:38 am
Just reiterating a question I posted on the 2019 Administration forum about SSMS backward compatibility; does anyone have any input? 😎
The issue is that I'm trying to simplify an environment where SSMS 2008R2, 2012 and 2019 are currently used for accessing the corresponding versions of SQL Server.
Sorry, Eirikur... I've got nothing for you on this issue. I don't have anything prior to 2016.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 4, 2022 at 4:41 am
I'm having an issue on 2016 and 2017... If you have 2019 or any version prior to 2016, would you run the following code, please, and let me know the version you ran it on and post the results? Thanks for the help, folks.
SELECT A = 2147483647/1000
,B = 2147483648/1000
;
--Jeff Moden
Change is inevitable... Change for the better is not.
September 4, 2022 at 6:55 am
I'm having an issue on 2016 and 2017... If you have 2019 or any version prior to 2016, would you run the following code, please, and let me know the version you ran it on and post the results? Thanks for the help, folks.
SELECT A = 2147483647/1000
,B = 2147483648/1000
;
SSMS 18.12.1
SQL 2019 (64-bit)
-- SERVERPROPERTY('ProductVersion') = 15.0.2000.5
-- SERVERPROPERTY('ProductLevel') = RTM
-- COMPAT 100, 110, 120, 130, 140, 150
A           B
----------- ---------------
2147483     2147483.648000
SSMS 2014
SQL 2014 (64-bit)
-- SERVERPROPERTY('ProductVersion') = 12.0.6164.21
-- SERVERPROPERTY('ProductLevel') = SP3
-- COMPAT 100, 110, 120
A           B
----------- ---------------
2147483     2147483.648000
September 4, 2022 at 10:48 am
I'm having an issue on 2016 and 2017... If you have 2019 or any version prior to 2016, would you run the following code, please, and let me know the version you ran it on and post the results? Thanks for the help, folks.
SELECT A = 2147483647/1000
,B = 2147483648/1000
;
IIRC, 2016 and 2017 don't automatically convert literal numerical values greater than (2^31) - 1 to float as the other versions do.
😎
This was one of the static code analysis tests I had to do when moving from 2014 and earlier to 2016.
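A quick way to check that recollection directly is SQL_VARIANT_PROPERTY. This is only a sketch (the column aliases are made up), and the expected results in the comments reflect what's reported later in this thread: the large literal comes back as an exact numeric, not a float.
-- Sanity check of how the two literals are typed.
SELECT MaxInt      = SQL_VARIANT_PROPERTY(2147483647, 'BaseType')  -- int
      ,JustOverInt = SQL_VARIANT_PROPERTY(2147483648, 'BaseType')  -- numeric, not float
;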
September 4, 2022 at 5:13 pm
Jeff Moden wrote: I'm having an issue on 2016 and 2017... If you have 2019 or any version prior to 2016, would you run the following code, please, and let me know the version you ran it on and post the results? Thanks for the help, folks.
SELECT A = 2147483647/1000
,B = 2147483648/1000
;
SSMS 18.12.1
SQL 2019 (64-bit)
-- SERVERPROPERTY('ProductVersion') = 15.0.2000.5
-- SERVERPROPERTY('ProductLevel') = RTM
-- COMPAT 100, 110, 120, 130, 140, 150
A           B
----------- ---------------
2147483     2147483.648000
SSMS 2014
SQL 2014 (64-bit)
-- SERVERPROPERTY('ProductVersion') = 12.0.6164.21
-- SERVERPROPERTY('ProductLevel') = SP3
-- COMPAT 100, 110, 120
A           B
----------- ---------------
2147483     2147483.648000
Thank you, good Sir. That's what I've gotten on 2008, 2016, and 2017. I was seriously interested in 2014 and 2019, and you happened to hit those spots right on the mark. I also very much appreciate the compatibility level testing you did.
It's just my opinion, but that violates the rules of integer division, and I've not been able to find anywhere in the SQL documentation where it says that such a thing will happen.
https://docs.microsoft.com/en-us/sql/t-sql/data-types/constants-transact-sql
https://docs.microsoft.com/en-us/sql/t-sql/language-elements/divide-transact-sql
As a result, I opened the following "bug" with MS. I've been aware of the "problem" since 2008 (never had to use such large constants before then) but it's caused me issues in the past because it's one of those unexpected "Crud, I forgot about that nuance" moments every time.
https://feedback.azure.com/d365community/idea/153f4b95-e82b-ed11-a81b-6045bd853c94
I suspect, though, that this is one of those things where they'll simply mark it as "operates as designed" and won't even bother updating the documentation to warn people about it. With that, I've added the alternative of properly documenting the phenomenon.
--Jeff Moden
Change is inevitable... Change for the better is not.
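For anyone who just wants the integer-division behaviour back, here's a minimal workaround sketch (the column aliases are only illustrative): cast the large literal explicitly instead of letting it be typed implicitly as NUMERIC(10,0).
-- Sketch: an explicit cast keeps the division integral; the bare literal does not.
SELECT ExplicitBigint  = CAST(2147483648 AS BIGINT) / 1000  -- 2147483 (integer division)
      ,ImplicitNumeric = 2147483648 / 1000                  -- 2147483.648000
;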
September 4, 2022 at 5:29 pm
Jeff Moden wrote: I'm having an issue on 2016 and 2017... If you have 2019 or any version prior to 2016, would you run the following code, please, and let me know the version you ran it on and post the results? Thanks for the help, folks.
SELECT A = 2147483647/1000
,B = 2147483648/1000
;
IIRC, 2016 and 2017 don't automatically convert literal numerical values greater than (2^31) - 1 to float as the other versions do. 😎
This was one of the static code analysis tests I had to do when moving from 2014 and earlier to 2016.
Correct... but it didn't convert to FLOAT in other versions either (thank goodness for that!)... It converts numeric literals (constants) that have a value larger than the bounds of INT to the NUMERIC() datatype with a "0" for scale. At least it's NUMERIC() and not FLOAT... that would cause some serious inaccuracies due to the limited precision of FLOAT().
I can see it using NUMERIC() if it exceeds the bounds of BIGINT, but it shouldn't be implicit, IMHO. That should be one place where such a numeric constant would actually require an explicit conversion.
In retrospect, though, it may be that there's a user base that relies on the current implicit conversion, so I've suggested additions to the documentation as a reasonable alternative.
--Jeff Moden
Change is inevitable... Change for the better is not.
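A small check along the same lines, again just a sketch with made-up aliases: even a literal at the very top of the BIGINT range appears to be typed as an exact numeric rather than BIGINT, which lines up with the precision-19 observation further down the thread.
-- Sketch: a BIGINT-sized literal is still typed as numeric/decimal, not bigint.
SELECT BaseType  = SQL_VARIANT_PROPERTY(9223372036854775807, 'BaseType')   -- numeric, not bigint
      ,Precision = SQL_VARIANT_PROPERTY(9223372036854775807, 'Precision')  -- 19
;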
September 4, 2022 at 7:50 pm
I'm having an issue on 2016 and 2017... If you have 2019 or any version prior to 2016, would you run the following code, please, and let me know the version you ran it on and post the results? Thanks for the help, folks.
SELECT A = 2147483647/1000
,B = 2147483648/1000
;
September 4, 2022 at 8:56 pm
That's a beautiful thing, Steve. I didn't know of anyone who had actually downloaded the public eval of 2022 and installed it. Thank you very much!
I was thinking that this would make a fun little QOD and wrote it up. It's still in draft mode and the site software won't show me a preview for some reason. If you want to have a look at it, let me know whether it's something you might want to use or whether any changes are needed. If I need to submit it for you to see it, let me know that, as well.
--Jeff Moden
Change is inevitable... Change for the better is not.
September 5, 2022 at 1:30 am
Jeff Moden wrote:
Correct... but it didn't convert to FLOAT in other versions either (thank goodness for that!)... It converts numeric literals (constants) that have a value larger than the bounds of INT to the NUMERIC() datatype with a "0" for scale. At least it's NUMERIC() and not FLOAT... that would cause some serious inaccuracies due to the limited precision of FLOAT().
I can see it using NUMERIC() if it exceeds the bounds of BIGINT, but it shouldn't be implicit, IMHO. That should be one place where such a numeric constant would actually require an explicit conversion.
And it would be better converted to FLOAT straight away, for the sake of precision.
Check this out:
select SQL_VARIANT_PROPERTY(2147483648, 'basetype') basetype,
SQL_VARIANT_PROPERTY(2147483648, 'precision') [precision],
SQL_VARIANT_PROPERTY(2147483648, 'scale') scale
It's NUMERIC(10,0) - precision is actually lower than for FLOAT, which is 53.
But now let's try this:
select SQL_VARIANT_PROPERTY(2147483648/1000, 'basetype') basetype,
SQL_VARIANT_PROPERTY(2147483648/1000, 'precision') [precision],
SQL_VARIANT_PROPERTY(2147483648/1000, 'scale') scale
Now it's NUMERIC(16,6)
Can you guess where precision=16 is coming from?
_____________
Code for TallyGenerator
September 5, 2022 at 1:55 am
Jeff Moden wrote:
Correct... but it didn't convert to FLOAT in other versions either (thank goodness for that!)... It converts numeric literals (constants) that have a value larger than the bounds of INT to the NUMERIC() datatype with a "0" for scale. At least it's NUMERIC() and not FLOAT... that would cause some serious inaccuracies due to the limited precision of FLOAT().
I can see it using NUMERIC() if it exceeds the bounds of BIGINT, but it shouldn't be implicit, IMHO. That should be one place where such a numeric constant would actually require an explicit conversion.
And it would be better converted to FLOAT straight away, for the sake of precision.
Check this out:
select SQL_VARIANT_PROPERTY(2147483648, 'basetype') basetype,
SQL_VARIANT_PROPERTY(2147483648, 'precision') [precision],
SQL_VARIANT_PROPERTY(2147483648, 'scale') scale
It's NUMERIC(10,0) - precision is actually lower than for FLOAT, which is 53.
But now let's try this:
select SQL_VARIANT_PROPERTY(2147483648/1000, 'basetype') basetype,
SQL_VARIANT_PROPERTY(2147483648/1000, 'precision') [precision],
SQL_VARIANT_PROPERTY(2147483648/1000, 'scale') scale
Now it's NUMERIC(16,6)
Can you guess where precision=16 is coming from?
Yes... and it's not FLOAT. EDIT: Careful manipulation of multiple FLOATs behind the scenes could pull it off but the decimal portion would have to go through a decimal decoder. I'd need some serious proof of that, though.
--===== Largest possible number for BIGINT = 9223372036854775807
SELECT BaseType = SQL_VARIANT_PROPERTY(9223372036854775807/1000,'BASETYPE')
,Precision = SQL_VARIANT_PROPERTY(9223372036854775807/1000,'PRECISION')
,Scale = SQL_VARIANT_PROPERTY(9223372036854775807/1000,'SCALE')
;
The maximum BIGINT value that I used there has a precision of 19, which exceeds what FLOAT can handle.
Float has a max precision of only 15 digits... not 53. 53 is the max number of bits.
--Jeff Moden
Change is inevitable... Change for the better is not.
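One plausible accounting for the 16, offered as a sketch rather than anything authoritative: the documented precision/scale rules for decimal division say the result of e1/e2 is NUMERIC(p1 - s1 + s2 + max(6, s1 + p2 + 1), max(6, s1 + p2 + 1)). If 2147483648 arrives as NUMERIC(10,0) and the literal 1000 is treated in this expression as a 4-digit exact numeric (that last part is an assumption on my part), the arithmetic lands exactly on NUMERIC(16,6).
-- Working the documented decimal-division rules by hand.
-- p2 = 4 is an assumption about how the literal 1000 is treated in this expression.
DECLARE @p1 int = 10, @s1 int = 0   -- 2147483648 -> NUMERIC(10,0)
       ,@p2 int = 4,  @s2 int = 0;  -- 1000 assumed to act as a 4-digit exact numeric
DECLARE @Scale int = IIF(@s1 + @p2 + 1 > 6, @s1 + @p2 + 1, 6);
SELECT ResultPrecision = @p1 - @s1 + @s2 + @Scale  -- 16
      ,ResultScale     = @Scale                    -- 6
;
Treating the divisor as a full INT (precision 10) instead would give NUMERIC(21,11) by the same rules, which doesn't match the observed result, so the 4-digit reading seems to be what's happening here.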
September 5, 2022 at 1:14 pm
I'm having an issue on 2016 and 2017... If you have 2019 or any version prior to 2016, would you run the following code, please, and let me know the version you ran it on and post the results? Thanks for the help, folks.
SELECT A = 2147483647/1000
,B = 2147483648/1000
;
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
September 5, 2022 at 2:31 pm
It's just my opinion, but that violates the rules of integer division
Why would you expect the rules of integer division to apply when the input data are not integers?
You do not specify at all, in any way, the data types. There are no variables. There is no CAST or CONVERT. So you are relying on SQL Server to infer a data type.
And somehow you even seem to expect that the inferred data type for the constant 2147483648 would be integer ... which would make the query fail because this value exceeds the integer range. Obviously the internal logic to infer a data type from a constant is smarter than that.
For constant values up to 2147483647 the inferred data type is int. For 2147483648 and up, it is decimal, with a precision matching the number of digits (decimal(10,0) for 2147483648). And for decimal, obviously, the rules of integer division do not apply.
SELECT 2147483647 AS a, 2147483648 AS b
INTO TestTableInfer;
GO
EXEC sp_help TestTableInfer;
GO
September 5, 2022 at 5:10 pm
This is interesting:
DECLARE @X NVARCHAR(MAX) = N'DECLARE @SQLSTR NVARCHAR(MAX)
;with xdata as
(
SELECT
A = 2147483647/1000
,B = 2147483648/1000
,C = 2147483647.0/1000
,D = 2147483648.0/1000
)
SELECT * FROM xdata;
'
SELECT *
FROM sys.dm_exec_describe_first_result_set(@X,NULL,NULL);
😎
is_hidden column_ordinal name   is_nullable system_type_id system_type_name
--------- -------------- ------ ----------- -------------- -----------------
0         1              A      1           56             int
0         2              B      1           108            numeric(16,6)
0         3              C      1           108            numeric(16,6)
0         4              D      1           108            numeric(16,6)