

SSC Rookie
Group: General Forum Members
Last Login: Tuesday, March 24, 2015 5:21 AM
Points: 26
Visits: 110


SQL Kiwi (1/4/2012) There are many quirks to SQL Server
A bad thing (tm)!
SQL Kiwi (1/4/2012) The point of this QotD is very much to emphasise that using excessive precision or scale can have unintended consequences.
And has done so in spades!
SQL Kiwi (1/4/2012) Very few real-world uses would require anything like the DECIMAL(38,20) types specified in the question.
I beg to differ: in financial markets, big $ amounts times percentages times year fractions happens all the time. "Consequences" like this one would be a real pain, so thanks again for raising it. Fortunately it doesn't happen in the platforms I commonly use!
BTW, even 11.9.2 behaves per my prior post!
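As an aside for readers: the scale collapse behind this QotD follows from SQL Server's documented result-type rules for decimal multiplication (precision = p1 + p2 + 1, scale = s1 + s2, capped at precision 38, with the scale floor of 6 discussed in this thread). A small Python sketch; the function name and the simplified floor handling are mine, and it only models the cases in this thread, where the computed scale is at least 6:

```python
# Hypothetical sketch of SQL Server's documented result-type rule for
# decimal multiplication. Not a real API; the names are illustrative.

MAX_PRECISION = 38
MIN_SCALE = 6  # the "6 decimal places" floor discussed in this thread

def multiply_result_type(p1, s1, p2, s2):
    """(precision, scale) inferred for decimal(p1,s1) * decimal(p2,s2)."""
    precision = p1 + p2 + 1
    scale = s1 + s2
    if precision > MAX_PRECISION:
        integral = precision - scale      # digits left of the decimal point
        precision = MAX_PRECISION
        # Scale absorbs the truncation, but (here) not below 6.
        scale = max(MIN_SCALE, precision - integral)
    return precision, scale

# DECIMAL(38,20) * DECIMAL(38,20): the 40 fractional digits collapse to 6.
print(multiply_result_type(38, 20, 38, 20))   # (38, 6)
```

Two DECIMAL(38,20) operands keep only six fractional digits in their product, which is exactly the "unintended consequence" the QotD highlights.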




SSCrazy Eights
Group: General Forum Members
Last Login: 2 days ago @ 6:42 AM
Points: 9,932
Visits: 11,347


Frank,
My prior post was written before I saw your latest, so it wasn't aimed at the points you raised. I'm going to decline getting into a debate about whether DECIMAL(38,20) is common or not (though I do work at a hedge fund right now, and we use it in a couple of places, though most things use lower precision, and floats (double precision) are also common), because that discussion generally goes nowhere productive fast. BTW, 11.9.2 means nothing to me, just so you know.
Paul White SQLPerformance.com SQLblog.com @SQL_Kiwi




SSC Rookie
Group: General Forum Members
Last Login: Tuesday, March 24, 2015 5:21 AM
Points: 26
Visits: 110


SQL Kiwi (1/4/2012)
Frank, My prior post was written before I saw your latest, so it wasn't aimed at the points you raised. I'm going to decline getting into a debate about whether DECIMAL(38,20) is common or not (though I do work at a hedge fund right now, and we use it in a couple of places, though most things use lower precision, and floats (double precision) are also common), because that discussion generally goes nowhere productive fast. BTW, 11.9.2 means nothing to me, just so you know.
G'day Paul. No problems about the uses (or not). I would guess that in many financial-markets situations these days the computation may well be done outside the database and simply stored (thin), reducing the "risk" that this QotD's behaviour will materialise. A recent engagement of mine saw the database sproc side fat with lots of number crunching on numeric datatypes. Of course, in risk rather than accounting, float/double may well be good enough.
11.9.2 is Sybase ASE, the oldest version I can lay hands on as an indicator of the common code base that became SQL Server, and I was curious to see if it also (mis)behaved. Given it didn't, it seems the 6-decimal minimum is a Redmond-specific enhancement.









SSCrazy Eights
Group: General Forum Members
Last Login: Today @ 2:14 PM
Points: 9,833
Visits: 11,907


SQL Kiwi (1/4/2012) As far as I recall, the issue with multiplying or dividing by one was a bug in type inference, which has since been fixed.
Well, I'm on an XP platform so I can't test if it is fixed in Denali, but it certainly isn't fixed in SQL 2008 R2 with the latest updates. Maybe someone could try this in Denali:
use tempdb
set nocount off
go
declare @t table (A decimal(38,5) primary key, B decimal(2,1))
insert @t(A,B) values((1234567890.0*10000000000.0*10000000000*1000)+0.12345, 1.0)
select A, B from @t
select A*B from @t
and see if they get this result in the messages tab of ssms:
(1 row(s) affected)
(1 row(s) affected)
Msg 8115, Level 16, State 2, Line 4
Arithmetic overflow error converting expression to data type numeric.
which clearly shows multiplication by one generating overflow. And the similar overflow for division can be demonstrated by changing "*" in the last line of that code to "/".
You are right that it's a type inference problem, of course. If you have types where the precision and scale (in the T-SQL sense; T-SQL makes awfully inaccurate use of both those terms, so this is far from the normal sense) are part of the type definition, and you also allow type inference to decide the type of a computed column in a table definition so that the definition doesn't have to specify that type, then you have to have a mechanism that lets a special type inference decide precision and scale. That becomes a problem when you allow that same special mechanism to rule when the result of the computation is being assigned to a column or variable of known type: overflow should be determined by whether the resulting value can be represented in that known type, not by whether some rather pointless kludge for avoiding an explicit computed-column type definition would implicitly produce a type that could represent the value.
Tom




SSCrazy Eights
Group: General Forum Members
Last Login: 2 days ago @ 6:42 AM
Points: 9,932
Visits: 11,347


L' Eomot Inversé (1/5/2012)
SQL Kiwi (1/4/2012) As far as I recall, the issue with multiplying or dividing by one was a bug in type inference, which has since been fixed.
Well, I'm on an XP platform so I can't test if it is fixed in Denali, but it certainly isn't fixed in SQL 2008 R2 with latest updates. Maybe someone could try this in Denali:
Hm, I was confusing what I thought you were describing with this bug: https://connect.microsoft.com/SQLServer/feedback/details/557523/automatic-type-assignment-causes-data-corruption, which is 'fixed' (for some value of 'fixed'). Your code example still throws the same error in the latest build of Denali I have, with TF 4199 on just in case it was an optimizer change that requires that flag. I need to look at it a bit more closely to understand fully what the script is telling me. Do you know if there is an existing bug report on Connect for this?
You are right that it's a type inference problem, of course. If you have types where the precision and scale (in the T-SQL sense; T-SQL makes awfully inaccurate use of both those terms, so this is far from the normal sense) are part of the type definition, and you also allow type inference to decide the type of a computed column in a table definition so that the definition doesn't have to specify that type, then you have to have a mechanism that lets a special type inference decide precision and scale. That becomes a problem when you allow that same special mechanism to rule when the result of the computation is being assigned to a column or variable of known type: overflow should be determined by whether the resulting value can be represented in that known type, not by whether some rather pointless kludge for avoiding an explicit computed-column type definition would implicitly produce a type that could represent the value.
You're probably right. I should say, in case I have given a different impression previously, that the type system in SQL Server is a bit of a mess, and the fear of making improvements that 'break' backward compatibility is a sad and distressing thing. There are all sorts of things in T-SQL that ought to be redone completely, perhaps even throwing T-SQL away entirely, but that's a whole different discussion, and something that doesn't look any more likely to happen than SQL Server adding some new decimal type with better semantics and more logical behaviours in general.
Paul White SQLPerformance.com SQLblog.com @SQL_Kiwi




SSCertifiable
Group: General Forum Members
Last Login: 2 days ago @ 1:48 PM
Points: 7,765
Visits: 11,376


SQL Kiwi (1/5/2012) I need to look at it a bit more closely to understand fully what the script is telling me
I didn't dissect it completely, or even execute it, but at first glance I think it loads a DECIMAL(38,5) with a value that uses all 33 digits before the decimal point, then multiplies this by 1. Because SQL Server hardcodes the result to have at least 6 decimal places and at most 38 total positions, only 32 positions before the decimal remain, and that's one too few for the current value.
Hugo Kornelis, SQL Server MVP Visit my SQL Server blog: http://sqlblog.com/blogs/hugo_kornelis
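Hugo's digit counting can be checked with plain Python integers (a hypothetical re-derivation by the editor, not part of the original posts):

```python
# The integral part of the test value Tom's script loads into column A
# (the +0.12345 fraction doesn't affect the digit count to the left).
integral_part = 1234567890 * 10**10 * 10**10 * 1000
digits = len(str(integral_part))
print(digits)            # 33 digits before the decimal point

# decimal(38,5) * decimal(2,1) is inferred as decimal(38,6):
# 38 total positions minus 6 decimal places leaves 32 for the integral part.
print(digits > 38 - 6)   # True -> Msg 8115, arithmetic overflow
```

Thirty-three integral digits against the thirty-two that decimal(38,6) leaves is exactly the Msg 8115 overflow shown above.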




SSCrazy Eights
Group: General Forum Members
Last Login: 2 days ago @ 6:42 AM
Points: 9,932
Visits: 11,347


Hugo Kornelis (1/5/2012)
SQL Kiwi (1/5/2012) I need to look at it a bit more closely to understand fully what the script is telling me
I didn't dissect it completely, or even execute it, but at first glance I think it loads a DECIMAL(38,5) with a value that uses all 33 digits before the decimal point, then multiplies this by 1. Because SQL Server hardcodes the result to have at least 6 decimal places and at most 38 total positions, only 32 positions before the decimal remain, and that's one too few for the current value.
Ah yes, that seems to be exactly it. So it probably won't be seen as a bug at MSFT (just another counter-intuitive example to add to the list). No doubt people on this thread have strong views to the contrary! Thanks for the analysis.
Paul White SQLPerformance.com SQLblog.com @SQL_Kiwi




SSCrazy Eights
Group: General Forum Members
Last Login: Today @ 2:14 PM
Points: 9,833
Visits: 11,907


SQL Kiwi (1/5/2012)
L' Eomot Inversé (1/5/2012)
SQL Kiwi (1/4/2012) As far as I recall, the issue with multiplying or dividing by one was a bug in type inference, which has since been fixed.
Well, I'm on an XP platform so I can't test if it is fixed in Denali, but it certainly isn't fixed in SQL 2008 R2 with latest updates. Maybe someone could try this in Denali:
Hm, I was confusing what I thought you were describing with this bug: https://connect.microsoft.com/SQLServer/feedback/details/557523/automatic-type-assignment-causes-data-corruption, which is 'fixed' (for some value of 'fixed'). Your code example still throws the same error in the latest build of Denali I have, with TF 4199 on just in case it was an optimizer change that requires that flag. I need to look at it a bit more closely to understand fully what the script is telling me. Do you know if there is an existing bug report on Connect for this?
As far as I know there is no Connect bug report, and I don't think there's much point in raising one, as it's clearly "by design": the two numbers being multiplied (or divided, or remaindered) are decimal(38,5) and decimal(2,1), so the result has to be decimal(38,6), into which the decimal(38,5) number will not fit. Perhaps I should have given a simpler example of the problem, so here goes:
declare @t table (A decimal(38,5) primary key)
insert @t(A) values((1234567890.0*10000000000.0*10000000000*1000)+0.12345)
select A from @t
update @t set A = A*1.0
There are all sorts of things in T-SQL that ought to be redone completely, perhaps even throwing T-SQL away completely, but that's a whole different discussion, and something that doesn't look any more likely to happen than SQL Server adding some new decimal type with better semantics and more logical behaviours in general.
I'm tempted to raise a Connect item asking for support for the 2008 revision of the IEEE floating-point standard, including base-10 exponents and 128-bit forms. That seems the obvious way to go in the long term. I looked a while back to see if there was a Connect item for that and didn't spot one. An improved decimal type might be more likely to see a positive response in the short to medium term, but isn't as good a solution.
Tom
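For comparison, Python's decimal module implements the General Decimal Arithmetic specification that fed into the 2008 IEEE standard's decimal formats: precision is a count of significant digits and the scale floats, so the multiply-by-one trap above does not arise. A minimal editor's sketch:

```python
from decimal import Decimal, getcontext

# 38 significant digits of head-room, comparable to DECIMAL(38,s),
# but with a floating rather than fixed scale.
getcontext().prec = 38

# The same value Tom's script loads: 33 integral digits plus .12345.
a = (Decimal("1234567890.0") * Decimal(10)**10 * Decimal(10)**10
     * Decimal(1000)) + Decimal("0.12345")

# With a floating scale, multiplying by one is exact: no overflow,
# no silently reduced scale.
print(a * Decimal("1.0") == a)   # True
```

The design difference is the point: the context bounds significant digits, not the split between integral and fractional positions, so the result type of a multiplication never has to sacrifice integral digits to a hardcoded minimum scale.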




SSCrazy Eights
Group: General Forum Members
Last Login: 2 days ago @ 6:42 AM
Points: 9,932
Visits: 11,347


L' Eomot Inversé (1/5/2012) As far as I know there is no Connect bug report, and I don't think there's much point in raising one as it's clearly "by design"
Agreed.
I'm tempted to raise a Connect item asking for support for the 2008 revision of the IEEE floating-point standard, including base-10 exponents and 128-bit forms. That seems the obvious way to go in the long term. I looked a while back to see if there was a Connect item for that and didn't spot one. An improved decimal type might be more likely to see a positive response in the short to medium term, but isn't as good a solution.
IIRC that's the same implementation used by the .NET Framework. It would make a good Connect suggestion anyway. If you do enter one, link back here and I'll vote for it, for sure.
Paul White SQLPerformance.com SQLblog.com @SQL_Kiwi



