
How to expand this variable for the query to work
Posted Monday, January 7, 2008 8:53 AM


SSCoach


Group: General Forum Members
Last Login: Friday, June 27, 2014 12:43 PM
Points: 15,444, Visits: 9,596
Two points:

1. Splitting a list in a While loop is more expensive than splitting one using a Numbers table. (See http://www.simple-talk.com/sql/t-sql-programming/the-helper-table-workbench/ for a lot of good data on this subject.)

2. The "Like" solution fails the moment you start using multi-digit IDs. For example, try it out with "12,13,175", and you'll get IDs 1, 2, 3, 7, 5, 12, 13, 175. Becase "%1%" is like "12", and like "13" and like "175".

The function I use for parsing is:

ALTER function [dbo].[StringParser]
    (@String_in varchar(max),
     @Delimiter_in char(1))
returns table
as
return (
    -- Every number that lands on a delimiter in the delimiter-prefixed
    -- string marks the start of a value; CHARINDEX into the
    -- delimiter-suffixed string finds where that value ends.
    SELECT top 100 percent
        SUBSTRING(@String_in + @Delimiter_in, number,
            CHARINDEX(@Delimiter_in, @String_in + @Delimiter_in, number) - number) as Parsed
    FROM dbo.numbers
    WHERE number <= LEN(@String_in)
        AND SUBSTRING(@Delimiter_in + @String_in, number, 1) = @Delimiter_in
    ORDER BY number
)

(I have this in my Common database, along with a Numbers table that goes from 0 to 9999.)
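
If you don't have a Numbers table yet, here's a minimal sketch of one way to build one (the 0-to-9999 range matches what I use; size it to your longest expected string):

-- One way to build a small Numbers table (values 0 through 9999).
-- The cross join of spt_values just supplies enough rows to draw from.
CREATE TABLE dbo.Numbers (Number int NOT NULL PRIMARY KEY);

INSERT INTO dbo.Numbers (Number)
SELECT TOP 10000
       ROW_NUMBER() OVER (ORDER BY a.name) - 1 AS Number
FROM master.dbo.spt_values a
CROSS JOIN master.dbo.spt_values b
ORDER BY Number;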

I picked a medium-sized table, dbo.Names, with 149,000 rows of data, and ran:

select *
from dbo.names
where '12,13,175' like '%' + cast(nameid as varchar(10)) + '%'

select *
from dbo.names
inner join common.dbo.stringparser('12,13,175', ',')
on names.nameid = stringparser.parsed

The first one returned values I didn't really want (IDs 1, 2, 3) as well as the IDs I want, and had an execution cost of .838. The second one returned the exact values I want, and an execution cost of .799.

The cost is only slightly different, but the first one included a Clustered Index Scan, while the second had a Clustered Index Seek. On a larger table, the difference in cost would matter more.

So the apparent cost advantage of the "Like" solution is an illusion, caused by comparing it against a While loop that takes the string apart instead of a more efficient solution, and it can result in wrong data.

I'd take a very close look at this before including it in "best practices".


- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread

"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
Post #439639
Posted Monday, January 7, 2008 9:15 AM


SSCoach


Group: General Forum Members
Last Login: Yesterday @ 1:01 PM
Points: 15,558, Visits: 27,932
Very good points. I was just going to start looking up similar solutions in Itzik's books when I saw your post. I figured there was a way to bring a table of numbers into this somehow.

----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #439653
Posted Monday, January 7, 2008 9:17 AM


Hall of Fame


Group: General Forum Members
Last Login: Yesterday @ 5:59 PM
Points: 3,462, Visits: 1,789
Actually, there is a flaw in your logic. I agree that if I am comparing '1' to '2,3,12' I'm going to get a false hit. However, I'm comparing ',1,' to ',2,3,12,'. That's why you add the extra comma at the beginning and the end of the list, and a comma at the beginning and end of your value. That eliminates the false hits.

Also, while using a Numbers table may help in this case, the other method works on any delimited list, not just a list of numbers.
For example: ',John,Joe,Jim,' LIKE '%,' + First_Name + ',%'
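
Put against a table, the full pattern looks something like this (dbo.People is just an illustrative name):

-- Illustrative table/column; the wrapping commas on both the list and
-- the value are what prevent the partial-match false hits.
SELECT First_Name
FROM dbo.People
WHERE ',John,Joe,Jim,' LIKE '%,' + First_Name + ',%'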

I will certainly be running some tests using the Numbers table. I'll be interested to see how it compares when you have 100 or more items in the list and are returning several thousand rows.


Kenneth Fisher
I strive to live in a world where a chicken can cross the road without being questioned about its motives.
--------------------------------------------------------------------------------
For better, quicker answers on T-SQL questions, click on the following...
http://www.sqlservercentral.com/articles/Best+Practices/61537/
For better answers on performance questions, click on the following...
http://www.sqlservercentral.com/articles/SQLServerCentral/66909/

Link to my Blog Post --> www.SQLStudies.com
Post #439655
Posted Monday, January 7, 2008 9:20 AM
SSC-Addicted


Group: General Forum Members
Last Login: Friday, April 18, 2014 1:58 PM
Points: 438, Visits: 902
I think you're assuming that a simpler execution plan is faster. I just ran this on a load table which has 126K records. seq is a unique key.

select seq from [qhd]
where ',22,99,1031,2050,99999,84000,9900,12333,2,' like '%,'+ cast(seq as varchar(8))+ ',%'

select seq from [qhd]
where seq in (select val from dbo.fListToValues('22,99,1031,2050,99999,84000,9900,12333,2'))

Yes, the execution plan for the second query is more complex, but if you look at it closely, the data is retrieved using an index seek on [qhd]. The first plan uses an index scan, so all 126K rows have to be considered.

here are the timings:
1> set statistics time on
2> go
1> select seq from [qhd]
2> where ',22,99,1031,2050,99999,84000,9900,12333,2,' like '%,'+ cast(seq as varchar(8))+ ',%'
5> select seq from [qhd]
6> where seq in (select val from dbo.fListToValues('22,99,1031,2050,99999,84000,9900,12333,2'))
8> go
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 1 ms.
seq
-----------
2
22
99
1031
2050
9900
12333
84000
99999

(9 rows affected)

SQL Server Execution Times:
CPU time = 1422 ms, elapsed time = 1422 ms.
seq
-----------
2
22
99
1031
2050
9900
12333
84000
99999

(9 rows affected)

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 2 ms.

1422 ms for the LIKE vs. 2 ms for the UDF. For small tables, the overhead of the function may make option #2 slower, but that difference will be negligible since both queries will run nearly instantaneously. However, once you progress to decent-sized tables (10,000 rows or more), option #2 is significantly faster. Also, the UDF can be re-used for any integer key and enhanced to support string keys, and it makes the SQL clearer and easier to understand. So, in my book, option #2 is far superior.
Post #439658
Posted Monday, January 7, 2008 9:34 AM


SSCoach


Group: General Forum Members
Last Login: Friday, June 27, 2014 12:43 PM
Points: 15,444, Visits: 9,596
Kenneth Fisher (1/7/2008)
Actually, there is a flaw in your logic. I agree that if I am comparing '1' to '2,3,12' I'm going to get a false hit. However, I'm comparing ',1,' to ',2,3,12,'. That's why you add the extra comma at the beginning and the end of the list, and a comma at the beginning and end of your value. That eliminates the false hits.

Also, while using a Numbers table may help in this case, the other method works on any delimited list, not just a list of numbers.
For example: ',John,Joe,Jim,' LIKE '%,' + First_Name + ',%'

I will certainly be running some tests using the Numbers table. I'll be interested to see how it compares when you have 100 or more items in the list and are returning several thousand rows.

My StringParser function will accept any string with any single-character delimiter. It works on numbers, names, street addresses, whatever. I've run it with up to 3,000 values and it works quite well. (For long lists, it needs more numbers; my production servers have a Numbers table in a Common database with values from 0 to 100 million.)

Adding commas at the beginning and end of the Like comparison will eliminate the false results, but it doesn't help with the cost or the speed. That's why I mentioned that the "Like" solution requires an index scan, while my join solution uses an index seek. Another post in this thread reported the same results I got.

The "Like" solution is only better if your string parsing UDF is built poorly, using a cursor or a while loop. If it uses a simple numbers table, the "Like" solution is slower and more expensive.

Again, take a look at the data from Robyn Page and Phil Factor that I referenced in my first post. It will make this much clearer. (They're better writers and have more data than I do.)


- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread

"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
Post #439672
Posted Monday, January 7, 2008 9:47 AM
SSC-Addicted


Group: General Forum Members
Last Login: Friday, April 18, 2014 1:58 PM
Points: 438, Visits: 902
GSquared: Thanks for helping me out.

I think some people are missing the point. Assuming [key] is indexed, cast([key] as anything) means the index on [key] can never be used for a seek, resulting in a table/index scan.

It's just like converting a datetime to a string: dateTimeKey's index is useless for selectivity if the column is converted like this:
select ... from [table] T where convert(char(8), T.dateTimeKey, 12) = '080107'

So searching with '%'+ cast([key] as varchar) + '%' is bad. It works fine for small tables because even bad SQL runs acceptably on small tables.
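
The usual fix is to leave the column alone and search on a range instead, something along these lines (mirroring the placeholder example above; the bounds are the same day expressed as date literals):

-- Sargable version: the column is not wrapped in a function,
-- so the index on dateTimeKey can be used for a seek.
select ... from [table] T
where T.dateTimeKey >= '20080107'
and T.dateTimeKey < '20080108'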
Post #439680
Posted Monday, January 7, 2008 9:55 AM


SSCoach


Group: General Forum Members
Last Login: Yesterday @ 1:01 PM
Points: 15,558, Visits: 27,932
You just have to keep the tests honest. I wasn't counting icons in the execution plan and going "Oooh, it's faster." In the simple tests, both processes had a table scan. The one that didn't also have to perform a join against data that was missing statistics was faster.

But those were simple tests. I'm setting up something a bit more comprehensive and including a table of numbers in the test.


----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #439687
Posted Monday, January 7, 2008 10:01 AM
SSC-Addicted


Group: General Forum Members
Last Login: Friday, April 18, 2014 1:58 PM
Points: 438, Visits: 902
Grant Fritchey (1/7/2008)
You just have to keep the tests honest. I wasn't counting icons in the execution plan and going "Oooh, it's faster." In the simple tests, both processes had a table scan. The one that didn't also have to perform a join against data that was missing statistics was faster.

But those were simple tests. I'm setting up something a bit more comprehensive and including a table of numbers in the test.


For the IN( UDF ) alternative, the table being queried (hosts/qhd) did not have a table scan as long as the identifying column (hostid/seq) was indexed. The table the UDF created had a table scan, but it only has a handful of rows.
Post #439689
Posted Monday, January 7, 2008 10:10 AM


SSCommitted


Group: General Forum Members
Last Login: Tuesday, April 8, 2014 6:13 AM
Points: 1,694, Visits: 19,550
There are several techniques and timings described here:

http://www.sommarskog.se/arrays-in-sql-2005.html


____________________________________________________

How to get the best help on a forum

http://www.sqlservercentral.com/articles/Best+Practices/61537

Never approach a goat from the front, a horse from the rear, or a fool from any direction.
Post #439693
Posted Monday, January 7, 2008 10:42 AM


SSCoach


Group: General Forum Members
Last Login: Yesterday @ 1:01 PM
Points: 15,558, Visits: 27,932
I did some tests. For good or for ill, I found that the XML and Table of Numbers approaches worked better than anything else when dealing with large tables and complex queries. The function just didn't work that well. Yes, the LIKE clause always led to table scans; while it was sometimes faster than the function (depending on how well the function worked with the clustered indexes in the tables for the queries I was testing), overall it just didn't cut the mustard once I went to really complex queries.
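
For reference, the XML side of that looked roughly like this sketch (the list value and names are illustrative):

-- Rough sketch of an XML-based split (SQL Server 2005 and up).
-- The list becomes an XML fragment, then nodes() shreds it into rows.
-- (Assumes the list values don't contain characters needing XML escaping.)
DECLARE @List varchar(max), @X xml

SET @List = '12,13,175'
SET @X = CAST('<i>' + REPLACE(@List, ',', '</i><i>') + '</i>' AS xml)

SELECT Item.value('.', 'int') AS Parsed
FROM @X.nodes('/i') AS T(Item)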

----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #439703