SQLServerCentral is supported by Red Gate Software Ltd.
 
Exact and Approximate
Posted Wednesday, January 4, 2012 9:43 PM
SSC Rookie


Group: General Forum Members
Last Login: Wednesday, May 28, 2014 9:45 PM
Points: 26, Visits: 108
SQL Kiwi (1/4/2012)
There are many quirks to SQL Server

A bad thing™!

SQL Kiwi (1/4/2012)
The point of this QotD is very much to emphasise that using excessive precision or scale can have unintended consequences.

And has done so in spades!

SQL Kiwi (1/4/2012)
Very few real-world uses would require anything like the DECIMAL(38,20) types specified in the question.

I beg to differ - in financial markets, big dollar amounts times percentages times year fractions happen all the time. "Consequences" like this one would be a real pain - so thanks again for raising it - fortunately it doesn't happen on the platforms I commonly use!

BTW - even 11.9.2 behaves per my prior post!
Post #1230484
Posted Wednesday, January 4, 2012 9:53 PM


SSChampion


Group: General Forum Members
Last Login: Saturday, September 13, 2014 8:01 AM
Points: 11,194, Visits: 11,165
Frank,

My prior post was written before I saw your latest, so it wasn't aimed at the points you raised. I'm going to decline getting into a debate about whether DECIMAL(38,20) is common or not (though I do work at a hedge fund right now, and we use it in a couple of places, though most stuff uses lower numbers, and floats (double precision) are also common), because that discussion generally goes nowhere productive fast. BTW, 11.9.2 means nothing to me, just so you know.




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #1230486
Posted Thursday, January 5, 2012 12:44 AM
SSC Rookie


Group: General Forum Members
Last Login: Wednesday, May 28, 2014 9:45 PM
Points: 26, Visits: 108
SQL Kiwi (1/4/2012)
Frank,

My prior post was written before I saw your latest, so it wasn't aimed at the points you raised. I'm going to decline getting into a debate about whether DECIMAL(38,20) is common or not (though I do work at a hedge fund right now, and we use it in a couple of places, though most stuff uses lower numbers, and floats (double precision) are also common), because that discussion generally goes nowhere productive fast. BTW, 11.9.2 means nothing to me, just so you know.


G'day Paul. No problems about the uses (or not) - I would guess in many FM situations these days the computation may well be done outside the db and the results simply stored (thin) ... reducing the "risk" that this QotD will materialise. A recent engagement of mine saw the database sproc side fat, with lots of number crunching on numeric datatypes. Of course in risk, rather than accounting, float/double may well be good enough.

11.9.2 is Sybase ASE - the oldest version I can lay hands on as an indicator of the common code base that became SQL Server - and I was curious to see if it also (mis)behaved. Given it didn't, it seems the six-decimal minimum is a Redmond-specific enhancement.
Post #1230521
Posted Thursday, January 5, 2012 2:05 AM


SSChampion


Group: General Forum Members
Last Login: Saturday, September 13, 2014 8:01 AM
Points: 11,194, Visits: 11,165
Frank Hamersley (1/5/2012)
11.9.2 is Sybase ASE - the oldest version I can lay hands on as an indicator of the common code base that became SQL Server - and I was curious to see if it also (mis)behaved. Given it didn't, it seems the six-decimal minimum is a Redmond-specific enhancement.

Interesting. The documentation looks very similar between ASE 15 and SQL Server:

http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.help.ase_15.0.blocks/html/blocks/blocks14.htm
http://msdn.microsoft.com/en-us/library/ms190476.aspx

The magic number 6 appears for division in both, but not multiplication.
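The inference rules from those two documentation pages can be sketched in a few lines. This is a minimal Python sketch, not SQL Server itself; the multiplication and division formulas are as documented, but the precision-cap rule ("when the result precision exceeds 38, scale is reduced, though not below min(scale, 6)") is my reading of the SQL Server page rather than a verbatim algorithm:

```python
def cap(p, s):
    # When precision exceeds 38 it is reduced to 38, and scale is cut
    # to preserve integral digits -- but never below min(s, 6).
    # (Assumed reading of the documented capping behaviour.)
    if p > 38:
        s = min(s, max(38 - (p - s), min(s, 6)))
        p = 38
    return p, s

def multiply_type(p1, s1, p2, s2):
    # e1 * e2: precision = p1 + p2 + 1, scale = s1 + s2 (documented)
    return cap(p1 + p2 + 1, s1 + s2)

def divide_type(p1, s1, p2, s2):
    # e1 / e2: scale = max(6, s1 + p2 + 1) -- the "magic 6" (documented)
    s = max(6, s1 + p2 + 1)
    return cap(p1 - s1 + s2 + s, s)

# The types from the QotD: decimal(38,5) * decimal(2,1) collapses to
# decimal(38,6), leaving only 32 integral digits for the result.
print(multiply_type(38, 5, 2, 1))  # -> (38, 6)
print(divide_type(38, 5, 2, 1))    # -> (38, 6)
```

Note how in multiplication the 6 only enters through the cap, whereas in division it appears directly in the scale formula - matching the observation above.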




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #1230545
Posted Thursday, January 5, 2012 6:59 AM


SSCrazy Eights


Group: General Forum Members
Last Login: Today @ 4:37 AM
Points: 8,823, Visits: 9,382
SQL Kiwi (1/4/2012)
As far as I recall, the issue with multiplying or dividing by one was a bug in type inference, which has since been fixed.

Well, I'm on an XP platform so I can't test if it is fixed in Denali, but it certainly isn't fixed in SQL 2008 R2 with latest updates.
Maybe someone could try this in Denali:
use tempdb
set nocount off
go
declare @t table (A decimal(38,5) primary key, B decimal(2,1))
insert @t(A,B) values((1234567890.0*10000000000.0*10000000000*1000)+0.12345,1.0)
select A,B from @t
select A*B from @t

and see if they get this result in the Messages tab of SSMS:

(1 row(s) affected)

(1 row(s) affected)
Msg 8115, Level 16, State 2, Line 4
Arithmetic overflow error converting expression to data type numeric.

which clearly shows multiplication by one generating overflow. And the similar overflow for division can be demonstrated by changing "*" in the last line of that code to "/".

You are right that it's a type inference problem, of course. If you have types where the precision and scale (in the T-SQL sense; T-SQL makes awfully inaccurate use of both those terms, so this is far from the normal sense) are part of the type definition, and you also allow type inference to decide the type of a computed column in a table definition so that the definition doesn't have to specify that type, then you have to have a mechanism for allowing a special type inference to decide precision and scale. That becomes a problem when you allow that same special mechanism to rule when the result of a computation is being assigned to a column or variable of known type: overflow should be determined by whether the resulting value can be represented in that known type, not by whether a rather pointless kludge - one that exists to avoid explicitly defining the type of a computed column when defining the table that contains it - would implicitly produce a type that could represent the value.


Tom
Post #1230709
Posted Thursday, January 5, 2012 7:55 AM


SSChampion


Group: General Forum Members
Last Login: Saturday, September 13, 2014 8:01 AM
Points: 11,194, Visits: 11,165
L' Eomot Inversé (1/5/2012)
SQL Kiwi (1/4/2012)
As far as I recall, the issue with multiplying or dividing by one was a bug in type inference, which has since been fixed.

Well, I'm on an XP platform so I can't test if it is fixed in Denali, but it certainly isn't fixed in SQL 2008 R2 with latest updates.
Maybe someone could try this in Denali:

Hm, I was confusing what I thought you were describing with this bug: https://connect.microsoft.com/SQLServer/feedback/details/557523/automatic-type-assignment-causes-data-corruption, which is 'fixed' (for some value of 'fixed'). Your code example still throws the same error in the latest build of Denali I have, with TF 4199 on just in case it was an optimizer change that requires that flag. I need to look at it a bit more closely to understand fully what the script is telling me - do you know if there is an existing bug report on Connect for this?

You are right that it's a type inference problem, of course. If you have types where the precision and scale (in the T-SQL sense; T-SQL makes awfully inaccurate use of both those terms, so this is far from the normal sense) are part of the type definition, and you also allow type inference to decide the type of a computed column in a table definition so that the definition doesn't have to specify that type, then you have to have a mechanism for allowing a special type inference to decide precision and scale. That becomes a problem when you allow that same special mechanism to rule when the result of a computation is being assigned to a column or variable of known type: overflow should be determined by whether the resulting value can be represented in that known type, not by whether a rather pointless kludge - one that exists to avoid explicitly defining the type of a computed column when defining the table that contains it - would implicitly produce a type that could represent the value.

You're probably right. I should say, in case I have given a different impression previously, that the type system in SQL Server is a bit of a mess, and the fear of making improvements that 'break' backward compatibility is a sad and distressing thing. There are all sorts of things in T-SQL that ought to be redone completely, perhaps even throwing T-SQL away completely, but that's a whole different discussion, and something that doesn't look any more likely to happen than SQL Server adding some new decimal type with better semantics and more logical behaviours in general.




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #1230780
Posted Thursday, January 5, 2012 8:06 AM


SSCertifiable


Group: General Forum Members
Last Login: Yesterday @ 4:46 PM
Points: 5,998, Visits: 8,261
SQL Kiwi (1/5/2012)
I need to look at it a bit more closely to understand fully what the script is telling me

I didn't dissect it completely, or even execute it, but based on first glance, I think it loads a DECIMAL(38,5) with a value that uses all 33 digits available before the decimal point, then multiplies it by 1. Because SQL Server hard-codes the result to have at least 6 decimal places and at most 38 digits in total, only 32 digits remain before the decimal point, and that's one too few for the current value.
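The digit counts behind that analysis are easy to confirm with plain arithmetic (this is just Python's decimal module counting digits, not SQL Server; the precision is raised so the value from the repro script is held exactly):

```python
from decimal import Decimal, getcontext

# Enough precision to hold the 38-digit value from the repro exactly.
getcontext().prec = 50

# The same literal built in the T-SQL script above.
value = (Decimal("1234567890.0") * Decimal("10000000000.0")
         * Decimal("10000000000") * 1000 + Decimal("0.12345"))

integral_digits = len(str(int(value)))
print(integral_digits)  # 33 integral digits -- fits decimal(38,5)
print(38 - 6)           # decimal(38,6) leaves room for only 32
```

So the value fits its declared decimal(38,5) column with nothing to spare, and the inferred decimal(38,6) result type is one integral digit too small.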



Hugo Kornelis, SQL Server MVP
Visit my SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
Post #1230788
Posted Thursday, January 5, 2012 8:39 AM


SSChampion


Group: General Forum Members
Last Login: Saturday, September 13, 2014 8:01 AM
Points: 11,194, Visits: 11,165
Hugo Kornelis (1/5/2012)
SQL Kiwi (1/5/2012)
I need to look at it a bit more closely to understand fully what the script is telling me
I didn't dissect it completely, or even execute it, but based on first glance, I think it loads a DECIMAL(38,5) with a value that uses all 33 digits available before the decimal point, then multiplies it by 1. Because SQL Server hard-codes the result to have at least 6 decimal places and at most 38 digits in total, only 32 digits remain before the decimal point, and that's one too few for the current value.

Ah yes, that seems to be exactly it. So it probably won't be seen as a bug @ MSFT (just another counter-intuitive example to add to the list). No doubt people on this thread have strong views to the contrary! Thanks for the analysis.




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #1230820
Posted Thursday, January 5, 2012 11:50 AM


SSCrazy Eights


Group: General Forum Members
Last Login: Today @ 4:37 AM
Points: 8,823, Visits: 9,382
SQL Kiwi (1/5/2012)
L' Eomot Inversé (1/5/2012)
SQL Kiwi (1/4/2012)
As far as I recall, the issue with multiplying or dividing by one was a bug in type inference, which has since been fixed.

Well, I'm on an XP platform so I can't test if it is fixed in Denali, but it certainly isn't fixed in SQL 2008 R2 with latest updates.
Maybe someone could try this in Denali:

Hm, I was confusing what I thought you were describing with this bug: https://connect.microsoft.com/SQLServer/feedback/details/557523/automatic-type-assignment-causes-data-corruption, which is 'fixed' (for some value of 'fixed'). Your code example still throws the same error in the latest build of Denali I have, with TF 4199 on just in case it was an optimizer change that requires that flag. I need to look at it a bit more closely to understand fully what the script is telling me - do you know if there is an existing bug report on Connect for this?

As far as I know there is no Connect bug report, and I don't think there's much point in raising one as it's clearly "by design" - the two numbers being multiplied (or divided, or remaindered) are of types decimal(38,5) and decimal(2,1), so the result has to be decimal(38,6), into which the decimal(38,5) number will not fit. Perhaps I should have given a simpler example of the problem, so here goes:
declare @t table (A decimal(38,5) primary key)
insert @t(A) values((1234567890.0*10000000000.0*10000000000*1000)+0.12345)
select A from @t
update @t set A=A*1.0

There are all sorts of things in T-SQL that ought to be redone completely, perhaps even throwing T-SQL away completely, but that's a whole different discussion, and something that doesn't look any more likely to happen than SQL Server adding some new decimal type with better semantics and more logical behaviours in general.

I'm tempted to raise a Connect item asking for support for the 2008 revision of the IEEE floating-point standard, including exponents to base 10 and 128-bit forms. That seems the obvious way to go in the long term. I looked a while back to see if there was a Connect item for that and didn't spot one. An improved decimal type might be more likely to see a positive response in the short to medium term, but isn't as good a solution.
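As an aside, Python's decimal module follows IBM's General Decimal Arithmetic specification, with which the IEEE 754-2008 decimal formats are aligned, so it can illustrate the behaviour being asked for here. A sketch, with a 34-significant-digit context standing in for decimal128: a floating decimal carries no fixed scale, so multiplying or dividing by one can never overflow.

```python
from decimal import Decimal, Context

# A 34-digit context approximates IEEE 754-2008 decimal128.
ctx = Context(prec=34)

# 34 significant digits: exactly representable in this context.
x = Decimal("123456789012345678901234567890.1234")

# With no fixed scale, x * 1 and x / 1 are exact -- no overflow,
# unlike T-SQL's inferred decimal(38,6) result type above.
print(ctx.multiply(x, Decimal(1)) == x)  # True
print(ctx.divide(x, Decimal(1)) == x)    # True
```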


Tom
Post #1231037
Posted Thursday, January 5, 2012 12:00 PM


SSChampion


Group: General Forum Members
Last Login: Saturday, September 13, 2014 8:01 AM
Points: 11,194, Visits: 11,165
L' Eomot Inversé (1/5/2012)
As far as I know there is no Connect bug report, and I don't think there's much point in raising one as it's clearly "by design"

Agreed.

I'm tempted to raise a Connect item asking for support for the 2008 revision of the IEEE floating-point standard, including exponents to base 10 and 128-bit forms. That seems the obvious way to go in the long term. I looked a while back to see if there was a Connect item for that and didn't spot one. An improved decimal type might be more likely to see a positive response in the short to medium term, but isn't as good a solution.

IIRC that's the same implementation used by the .NET framework. It would make a good Connect suggestion anyway. If you do enter one, link back here and I'll vote for sure.




Paul White
SQL Server MVP
SQLblog.com
@SQL_Kiwi
Post #1231045