How to expand this variable so the query works

  • declare @myList as varchar(20)
    set @myList = '1,2,3,4,5,6,7,8,9'
    select * from monitor_hosts where hostid in (@myList)

    As written, this returns no rows: IN (@myList) compares hostid to the single string '1,2,3,4,5,6,7,8,9', not to a list of values.

  • If monitor_hosts is really small and you just want a quick and dirty solution:

    declare @myList as varchar(22)
    set @myList = ',' + '1,2,3,4,5,6,7,8,9' + ','  -- enclose list in commas
    select * from monitor_hosts where @myList like '%,' + cast(hostid as varchar(8)) + ',%'

    You could also build the SQL statement dynamically, but you have to be wary of SQL injection issues.
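    For illustration, a minimal sketch of that dynamic approach (standard sp_executesql; the concatenation is exactly where the injection risk lives):

    declare @sql nvarchar(200)
    -- @myList is spliced directly into the statement text; if it comes from
    -- user input, anything in it runs, hence the injection warning
    set @sql = N'select * from monitor_hosts where hostid in (' + @myList + N')'
    exec sp_executesql @sql

    I think the best solution, though, would be a table-valued function to build the IN list: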

    create function dbo.fListToValues( @listText as varchar(1000) )
    returns @values table
    (
        [val] int null
    )
    as
    begin
        declare @list varchar(1001), @pos int
        set @list = @listText + ','
        while (@list <> '')
        begin
            set @pos = charindex(',', @list)
            insert @values
            values ( cast( substring( @list, 1, @pos - 1 ) as int) )
            set @list = substring( @list, @pos + 1, 999)
        end
        return
    end

    Using this function, your example becomes:

    declare @myList as varchar(22)
    set @myList = '1,2,3,4,5,6,7,8,9'
    select * from monitor_hosts where hostid in (select val from dbo.fListToValues(@myList))

  • Thanks for your answer.

    I will use the function

    Paulo

  • Actually, Antonio's first solution is going to be the better/faster one. Unless your list is enormous, say 150+ items or more, you are probably going to be better off with the "like" solution.

    Kenneth


  • How can this:

    select * from monitor_hosts where @myList like '%,' + cast(hostid as varchar(8)) + ',%'

    be faster than this:

    select * from monitor_hosts where hostid in (select val from dbo.fListToValues(@myList))

    The first statement will always consider every row in [monitor_hosts] since it has no selectivity. But if there's an index on [hostid], the second statement will seek using that index. Without an index, it too will consider every row, so I can't see how option 1 would ever be faster. (Any difference would be due to the UDF, and its overhead should be negligible.)

  • If you're using the function as a table, you can simply join to it. We do it all over the place.

    Unfortunately, table-valued functions don't have statistics, so if the list is really big, it can perform very badly.

    If the list is really huge, I'd suggest changing it from a comma-delimited list to XML.
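    For illustration, a minimal sketch of that XML approach (the <i> element name and the way the list is wrapped are assumptions; only monitor_hosts and hostid come from the thread):

    declare @xmlList xml
    -- the list arrives as XML elements instead of a comma-delimited string
    set @xmlList = '<i>1</i><i>2</i><i>3</i>'
    select *
    from monitor_hosts
    where hostid in (
        select x.i.value('.', 'int')
        from @xmlList.nodes('/i') as x(i)
    )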

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Antonio,

    I don't know what to tell you. I've seen and done tests on this a couple of dozen times and the LIKE version has ALWAYS been faster.

    Kenneth


  • antonio.collins (1/6/2008)


    How can this:

    select * from monitor_hosts where @myList like '%,' + cast(hostid as varchar(8)) + ',%'

    be faster than this:

    select * from monitor_hosts where hostid in (select val from dbo.fListToValues(@myList))

    The first statement will always consider every row in [monitor_hosts] since it has no selectivity. But if there's an index on [hostid], the second statement will seek using that index. Without an index, it too will consider every row, so I can't see how option 1 would ever be faster. (Any difference would be due to the UDF, and its overhead should be negligible.)

    Because fListToValues doesn't have statistics would be my guess. The optimizer treats it as if it has a single row when building the execution plan.

    I just did a test with the two approaches in a very simple query like the one represented here. The function was 66% of the execution cost and the LIKE statement was 34%. The function approach included a table scan against the function and a nested loop joining to a Clustered Index Scan of the data. The LIKE approach only did a clustered index scan.

    This is worth testing against much more complicated queries.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I just tested it on some larger procs. It really is better. Hmmm, I'll have to document this one in our best practices doc.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • OK, this is just an educated guess and I have absolutely no proof for it, but maybe the DB engine will use the index on a field that is not substantially changed. That is, since you are only adding constants on either end of the indexed value, the index should still apply.

    I just ran a fairly simple query using the LIKE and got 6% compute scalar (I assume that's adding the commas to the id values) and 94% clustered index scan.

    Another possibility is the fact that the function has to write to tempdb, so some IO is involved that isn't involved with the LIKE option.


  • Two points:

    1. Splitting a list in a While loop is more expensive than splitting one using a Numbers table. (See http://www.simple-talk.com/sql/t-sql-programming/the-helper-table-workbench/ for a lot of good data on this subject.)

    2. The "Like" solution fails the moment you start using multi-digit IDs. For example, try it out with "12,13,175", and you'll get IDs 1, 2, 3, 7, 5, 12, 13, 175. Becase "%1%" is like "12", and like "13" and like "175".

    The function I use for parsing is:

    ALTER function [dbo].[StringParser]
        (@String_in varchar(max),
         @Delimiter_in char(1))
    returns table
    as
    return(
        SELECT top 100 percent
            SUBSTRING(@String_in + @Delimiter_in, number,
                CHARINDEX(@Delimiter_in, @String_in + @Delimiter_in, number) - number) as Parsed
        FROM dbo.numbers
        WHERE number <= LEN(@String_in)
            AND SUBSTRING(@Delimiter_in + @String_in, number, 1) = @Delimiter_in
        ORDER BY number
    )

    (I have this in my Common database, along with a Numbers table that goes from 0 to 9999.)
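    For reference, a minimal sketch of one common way to build such a Numbers table (a one-off setup script; the sys.columns cross join is just one convenient row source, nothing from the thread depends on it):

    -- build a Numbers table with values 0 to 9999
    create table dbo.Numbers (Number int not null primary key);
    insert into dbo.Numbers (Number)
    select top 10000 row_number() over (order by c1.object_id) - 1 as Number
    from sys.columns c1 cross join sys.columns c2
    order by Number;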

    I picked a medium-sized table, dbo.Names, with 149,000 rows of data, and ran:

    select *
    from dbo.names
    where '12,13,175' like '%' + cast(nameid as varchar(10)) + '%'

    select *
    from dbo.names
    inner join common.dbo.stringparser('12,13,175', ',')
        on names.nameid = stringparser.parsed

    The first one returned values I didn't really want (IDs 1, 2, 3) as well as the IDs I did want, and had an execution cost of .838. The second one returned exactly the values I want, at an execution cost of .799.

    The cost is only slightly different, but the first one used a Clustered Index Scan, while the second used a Clustered Index Seek. On a larger table, the difference in cost would matter more.

    So the apparent cost advantage of the LIKE approach is an illusion, caused by using a While loop to take apart the string instead of a more efficient method, and it can also return wrong data.

    I'd take a very close look at this before including it in "best practices".

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Very good points. I was just going to start looking up similar solutions in Itzik's books when I saw your post. I figured there was a way to bring a table of numbers into this somehow.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Actually, there is a flaw in your logic. I agree that if I am comparing '1' to '2,3,12' I'm going to get a false hit. However, I'm comparing ',1,' to ',2,3,12,'. That's why you add the extra comma at the beginning and the end of the list, and then add a comma at the beginning and end of your value. That eliminates the false hits.

    Also, while using a numbers table may help in this case, the other method works on any delimited list, not just a list of numbers.

    For example: ',John,Joe,Jim,' LIKE '%,' + First_Name + ',%' (a fuller sketch follows below).

    I will certainly be running some tests using the numbers table. I'll be interested to see how it compares when you have 100 or more items in the list and are returning several thousand rows.
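    A minimal sketch of that string version, for illustration (the employees table is hypothetical; only the name list and First_Name come from the example above):

    declare @nameList varchar(100)
    set @nameList = ',' + 'John,Joe,Jim' + ','  -- enclose list in commas
    -- employees is a hypothetical table with a First_Name column
    select *
    from employees
    where @nameList like '%,' + First_Name + ',%'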


  • I think you're assuming that a simpler execution plan is faster. I just ran this on a load table which has 126K records; seq is a unique key.

    select seq from [qhd]
    where ',22,99,1031,2050,99999,84000,9900,12333,2,' like '%,' + cast(seq as varchar(8)) + ',%'

    select seq from [qhd]
    where seq in (select val from dbo.fListToValues('22,99,1031,2050,99999,84000,9900,12333,2'))

    Yes, the execution plan for the second query is more complex, but if you look at it closely, the data is retrieved using an index seek of [qhd]. The first plan uses an index scan, so at least 99,999 rows are considered.

    Here are the timings:

    1> set statistics time on
    2> go
    1> select seq from [qhd]
    2> where ',22,99,1031,2050,99999,84000,9900,12333,2,' like '%,'+ cast(seq as varchar(8))+ ',%'
    5> select seq from [qhd]
    6> where seq in (select val from dbo.fListToValues('22,99,1031,2050,99999,84000,9900,12333,2'))
    8> go
    SQL Server parse and compile time:
       CPU time = 0 ms, elapsed time = 1 ms.

    seq
    -----------
    2
    22
    99
    1031
    2050
    9900
    12333
    84000
    99999

    (9 rows affected)

    SQL Server Execution Times:
       CPU time = 1422 ms, elapsed time = 1422 ms.

    seq
    -----------
    2
    22
    99
    1031
    2050
    9900
    12333
    84000
    99999

    (9 rows affected)

    SQL Server Execution Times:
       CPU time = 0 ms, elapsed time = 2 ms.

    1422 ms for the LIKE vs 2 ms for the UDF. For small tables, the overhead of the function may cause option #2 to be slower, but that difference will be negligible since both queries will run near-instantaneously. However, once you progress to decent-sized tables (10,000 entries or more), option #2 is significantly faster. Also, the UDF can be re-used for any integer key, and it can be enhanced to support string keys (a sketch of such a variant follows below). The UDF also makes the SQL clearer and easier to understand. So, in my book, option #2 is far superior.
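    For illustration, a minimal sketch of such a string-key variant (the name fListToStrings and the varchar(100) value size are hypothetical; it is the same loop as fListToValues, just without the cast to int):

    create function dbo.fListToStrings( @listText as varchar(1000) )
    returns @values table
    (
        [val] varchar(100) null
    )
    as
    begin
        declare @list varchar(1001), @pos int
        set @list = @listText + ','
        while (@list <> '')
        begin
            set @pos = charindex(',', @list)
            -- same walk as fListToValues, but each token is kept as a string
            insert @values
            values ( substring( @list, 1, @pos - 1 ) )
            set @list = substring( @list, @pos + 1, 999)
        end
        return
    end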

  • Kenneth Fisher (1/7/2008)


    Actually, there is a flaw in your logic. I agree that if I am comparing '1' to '2,3,12' I'm going to get a false hit. However, I'm comparing ',1,' to ',2,3,12,'. That's why you add the extra comma at the beginning and the end of the list, and then add a comma at the beginning and end of your value. That eliminates the false hits.

    Also, while using a numbers table may help in this case, the other method works on any delimited list, not just a list of numbers.

    For example: ',John,Joe,Jim,' LIKE '%,' + First_Name + ',%'

    I will certainly be running some tests using the numbers table. I'll be interested to see how it compares when you have 100 or more items in the list and are returning several thousand rows.

    My StringParser function will accept any string with any single-character delimiter. It works on numbers, names, street addresses, whatever. I've run it with up to 3,000 values and it works quite well. (For long lists, it needs more numbers. My production servers have a Numbers table in a Common database with values from 0 to 100 million.)

    Adding commas at the beginning and end of the Like pattern will eliminate the false results, but it doesn't help with the cost and the speed. That's why I mentioned that the "Like" solution requires an index scan, while my join solution uses an index seek. Another post in this thread had the same results I did.

    The "Like" solution is only better if your string-parsing UDF is built poorly, using a cursor or a While loop. If it uses a simple numbers table, the "Like" solution is slower and more expensive.

    Again, take a look at the data from Robyn Page and Phil Factor that I referenced in my first post. It will make this much clearer. (They're better writers and have more data than I do.)

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
