Parsing Parameters in a Stored Procedure

  • Amazing article, and thanks for the procedural vs. set-based analysis with examples.

    Much appreciated. 😎

    Best regards,

    Andre Guerreiro Neto

    Database Analyst
    http://www.softplan.com.br
    MCITPx1/MCTSx2/MCSE/MCSA

  • Hi,

    It seems that you are correct when we know the possible values a user can input.

    But what if the values change, or new values are added, in the future?

  • rajn.knit07 (8/11/2010)


    Hi,

    It seems that you are correct when we know the possible values a user can input.

    But what if the values change, or new values are added, in the future?

    In the samples, to keep things simple, we create and populate the validating table on the fly.

    Ideally, the tables you would use in a real case are those of the app itself. Any change to them immediately applies to subsequent user entries.
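
    As a rough sketch of that idea (the table, column, and parameter names here are hypothetical, just for illustration):

    -- Hypothetical sketch: validate/select against an existing application lookup table instead
    -- of one built on the fly. Wrapping both sides in commas makes each code match only as a
    -- whole element of the list.
    DECLARE @sStates VARCHAR(8000)
    SET @sStates = 'WA,AZ,CA'

    SELECT s.StateCode
      FROM dbo.AppState s   -- the app's existing lookup table (hypothetical name)
     WHERE CHARINDEX(',' + s.StateCode + ',', ',' + @sStates + ',') > 0
    ;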

  • Nice solution, but what if the values are not known? You still have to do it with a WHILE loop.

    In fact, in all the code I have written I have only dealt with unknown values, not known values.

  • dml_std (8/12/2010)


    Nice solution, but what if the values are not known? You still have to do it with a WHILE loop.

    In fact, in all the code I have written I have only dealt with unknown values, not known values.

    You're right. I'm in the opposite case: my parameter lists tend to be a subset of known values from a table, not free text. But you'll find non-procedural methods in several of the earlier links (using a Tally table, for example).

  • I did an article on using a long parameter list instead of trying to write a home-made parser in T-SQL or contaminating the schema with XML and CLR code. I am still working on Part II with some examples of how the home-made parsers produce different results than the T-SQL compiler parameter parser.

    I would love to read the article if you could provide a link.
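
    In the meantime, the "long parameter list" idea generally looks something like the sketch below (a rough illustration only, not Joe's actual article; the procedure, table, and column names are invented):

    -- Instead of a single CSV string, the procedure exposes many optional parameters and lets
    -- the T-SQL compiler do the "parsing". Unused parameters simply stay NULL.
    CREATE PROCEDURE dbo.GetCustomersByState
            @State01 CHAR(2) = NULL,
            @State02 CHAR(2) = NULL,
            @State03 CHAR(2) = NULL,
            @State04 CHAR(2) = NULL,
            @State05 CHAR(2) = NULL
    AS
     SELECT c.CustomerID, c.State
       FROM dbo.Customer c
      WHERE c.State IN (@State01, @State02, @State03, @State04, @State05)
    ;
    GO
    -- Callers pass only the values they need:
    EXEC dbo.GetCustomersByState @State01 = 'WA', @State02 = 'AZ'
    ;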

  • john.moreno (8/11/2010)


    Jeff Moden (8/11/2010)


    My apologies... I got out of work quite late tonight and only had time for some verbal replies. I'll try to get to the coded replies, which will include some speed tests for a lot of the methods in this thread, including the one from the article. As Pappy says, "One test is worth a thousand expert opinions".

    Thanks for waiting.

    I don't know if I'm whistling in the wind, beating a dead horse or what -- but I'll say it again: Erland Sommarskog has a comprehensive coverage of this topic at: http://www.sommarskog.se/arrays-in-sql-perftest.html. Including performance, test data, and literally a DOZEN different ways of doing it.

    On the other hand, it never hurts to test something yourself...

    Yes... you're beating a dead horse and for the very reason you state... it never hurts to test something yourself. 😉

    Also, show me where in that article I can use the same test data as he did. The only thing I could find was the following obscure reference...

    The data is taken from a Slovenian version of the Unix file /usr/dict/words, used for the spell utility, that I culled off the net somewhere

    That's why I'm beating the other dead horse... at least I can find my own dead horse. 😉

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • CELKO (8/12/2010)


    jhood (8/10/2010)


    I see what you are saying, Jeff, but would you want to do validation if the application did not allow the end user to send invalid data? Perhaps using a drop-down list...

    LOL! Why of course nobody would ever send garbage to a database! All users are perfect and the DB programmer should never have to worry about data integrity in his schema. 😉 Want to get rid of all constraints while we are at it? :-P

    Heh... now that's some funny stuff there and my sentiments exactly. Well done, Joe.

    I did an article on using a long parameter list instead of trying to write a home-made parser in T-SQL or contaminating the schema with XML and CLR code. I am still working on Part II with some examples of how the home-made parsers produce different results than the T-SQL compiler parameter parser.

    Someone already said it... I'd love to read that article. Would you post the URL on this thread so we can take a peek? Thanks, Joe.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • dbuendiab (8/11/2010)


    Jeff Moden (8/10/2010)


    dbuendiab (8/10/2010)


    Performance, while sometimes an issue, is not the point here:

    You know, it's really odd that you say that. It's really odd that ANYONE would say that. :blink: Check the forums on this site. They're full of posts by people asking how to improve the performance of their code. Considering how many people say that performance is "not the point here", I'm thinking some of them are having performance problems now. 😉

    Jeff:

    At the outset I have to tell you it's a bit intimidating (and exciting) for me to exchange these posts with you, when so far I had only admired and drawn on your writings.

    I am a novice writer, but I have enough experience as a SQL developer... and I have never resigned myself to writing inefficient code.

    One of the key points of my article was to highlight that SQL has unsuspected ways to escape procedural programming, which can be hard to see when you are starting out.

    But to the point: performance is important to me, but I tend to consider it worth working on only when needed.

    It's a pleasure to try to help, especially when it comes to attitude about performance. I have to say that if you only consider performance "worth working on when needed", then you're making a huge error in judgement IMHO, because performance is always needed. I'm not talking about trying to eke another 10 milliseconds out of a million-row update. I'm talking about knowing and consistently applying everyday things that will prevent simple queries and stored procedures from suddenly taking 8 to 24 hours to run just because a table grew a little bit or the data requirements changed a little. A good example of such a thing is when someone justifies writing a cursor because they couldn't think of a set-based way and the table will supposedly never grow any larger than some supposedly small number of rows.

    If you don't know whether something will perform and scale, and you still promote the code to production, then you've not only committed a computational felony in my eyes, but you've also lost a huge learning opportunity.

    I'm also talking about attitudes towards data validation. As Celko said, you just can't trust anything to give you data correctly. And you can't blame the users for giving incorrect data or saying that they'll learn pretty quickly to do things right. It IS your job as a data professional to protect both the data and the user.

    Like I said before, this forum is filled with questions about how to improve performance. By definition, those people considered it "worth working on only when needed." If someone were to say such a thing in an interview with me, the interview would come to a screeching halt right then and there, and I don't mean in a good way. :Whistling:

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Ok... here we go with the first phase of testing. Sorry it took so long. It takes a bit of time to put such code together, and it has to be done absolutely correctly.

    This test compares the method in the article with a "Tally CTE Splitter". I include the code to build the splitter, the code to build a 100K-row, 10-element CSV table, the code to build the "State" table, and the test code. The bottom line for all that testing is that when validation AND numbering of elements are added to the splitter method, it still runs about as fast as the method from the article with no such enhancements. Also, check out the profiler runs at the end. If you're one of those who wonder about such things, the number of reads on the method from the article is comparatively brutal.

    If you don't know me well enough yet, let me tell you that the details of everything that has been done are in the comments in the code. And, YES, by all means... feel free to run the code.

    Here's the code for the "Tally Splitter". This is what I use in production when the local DBA won't allow me to build a Tally Table. The comments are much longer than the actual code, which is quite simple....

    CREATE FUNCTION dbo.DelimitedSplit8K
    /***************************************************************************************************
     Purpose:
     Split a given string at a given delimiter and return a list of the split elements (items).

     Returns:
     iTVF containing the following:
     ItemNumber = Element position of Item as a BIGINT (not converted to INT to eliminate a CAST)
     Item       = Element value as a VARCHAR(8000)

     CROSS APPLY Usage Example:
    ---------------------------------------------------------------------------------------------------
    --===== Conditionally drop the test tables to make reruns easier for testing.
         -- (this is NOT a part of the solution)
         IF OBJECT_ID('tempdb..#JBMTest') IS NOT NULL
            DROP TABLE #JBMTest
    ;
    --===== Create and populate a test table on the fly (this is NOT a part of the solution).
     SELECT *
       INTO #JBMTest
       FROM (
             SELECT 1,'1,10,100,1000,10000,100000,1000000' UNION ALL
             SELECT 2,'2000000,200000,20000,2000,200,20,2' UNION ALL
             SELECT 3, 'This,is,a,test' UNION ALL
             SELECT 4, 'and so is this' UNION ALL
             SELECT 5, 'This, too (no pun intended)'
            ) d (SomeID,SomeValue)
    ;
    GO
    --===== Split the CSV column for the whole table using CROSS APPLY (this is the solution)
     SELECT test.SomeID, split.ItemNumber, split.Item
       FROM #JBMTest test
      CROSS APPLY
            (
             SELECT ItemNumber, Item
               FROM dbo.DelimitedSplit8k(test.SomeValue,',')
            ) split
    ;
    ---------------------------------------------------------------------------------------------------
     Notes:
     1. Optimized for VARCHAR(7999) or less. No testing or error reporting for truncation at 7999
        characters is done.
     2. Optimized for single character delimiter. Multi-character delimiters should be resolved
        externally from this function.
     3. Optimized for use with CROSS APPLY.
     4. Does not "trim" elements just in case leading or trailing blanks are intended.
     5. If you don't know how a Tally table can be used to replace loops, please see the following...
        http://www.sqlservercentral.com/articles/T-SQL/62867/
     6. Changing this function to use VARCHAR(MAX) will cause it to run twice as slow. It's just the
        nature of VARCHAR(MAX) whether it fits in-row or not.
     7. Multi-machine testing for the method of using UNPIVOT instead of 10 SELECT/UNION ALLs shows
        that the UNPIVOT method is quite machine dependent and can slow things down quite a bit.
     8. Performance testing shows using "TOP" for the limiting criteria of "N" is actually
        slower and slightly more CPU intensive than the traditional WHERE N < LEN(@pString) + 2.
     9. Performance testing shows using ORDER BY (SELECT x) where "x" is anything is actually
        slower and slightly more CPU intensive than the traditional ORDER BY (SELECT N).

     Credits:
     This code is the product of many people's efforts including but not limited to the following:
     cteTally concept originally by Itzik Ben-Gan and "decimalized" by Lynn Pettis (and others) for a
     bit of extra speed and finally redacted by Jeff Moden for a different slant on readability and
     compactness. Hats off to Paul White for his simple explanations of CROSS APPLY. Finally,
     special thanks to Erland Sommarskog for his tireless efforts to help people understand
     what you can actually do with T-SQL. I also thank whoever wrote the first article I ever saw
     on "numbers tables" which is located at the following URL ...
     http://sqlserver2000.databases.aspfaq.com/why-should-i-consider-using-an-auxiliary-numbers-table.html

     Revision History:
     Rev 00 - 20 Jan 2010 - Concept: Lynn Pettis and others.
                            Redaction/Implementation: Jeff Moden
            - Base 10 redaction and reduction for CTE. (Total rewrite)
     Rev 01 - 13 Mar 2010 - Jeff Moden
            - Removed one additional concatenation and one subtraction from the SUBSTRING in the
              SELECT List for that tiny bit of extra speed.
     Rev 02 - 14 Apr 2010 - Jeff Moden
            - No code changes. Added CROSS APPLY usage example to the header, some additional credits,
              and extra documentation.
     Rev 03 - 18 Apr 2010 - Jeff Moden
            - No code changes. Added notes 7, 8, and 9 about certain "optimizations" that don't
              actually work for this type of function.
     Rev 04 - 29 Jun 2010 - Jeff Moden
            - Added WITH SCHEMABINDING thanks to a note by Paul White. This prevents an unnecessary
              "Table Spool" when the function is used in an UPDATE statement even though the function
              makes no external references.
    ***************************************************************************************************/
    --===== Define I/O parameters
            (
            @pString    VARCHAR(7999),
            @pDelimiter CHAR(1)
            )
    RETURNS TABLE
       WITH SCHEMABINDING
    AS
    RETURN
    --===== "Inline" CTE Driven "Tally Table" produces values up to
         -- 10,000... enough to cover VARCHAR(8000)
    WITH
    E1(N) AS ( --=== Create Ten 1's
               SELECT 1 UNION ALL SELECT 1 UNION ALL
               SELECT 1 UNION ALL SELECT 1 UNION ALL
               SELECT 1 UNION ALL SELECT 1 UNION ALL
               SELECT 1 UNION ALL SELECT 1 UNION ALL
               SELECT 1 UNION ALL SELECT 1 --10
             ),
    E2(N) AS (SELECT 1 FROM E1 a, E1 b),   --100
    E4(N) AS (SELECT 1 FROM E2 a, E2 b),   --10,000
    cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT N)) FROM E4)
    --===== Do the split
     SELECT ROW_NUMBER() OVER (ORDER BY N) AS ItemNumber,
            SUBSTRING(@pString, N, CHARINDEX(@pDelimiter, @pString + @pDelimiter, N) - N) AS Item
       FROM cteTally
      WHERE N <= LEN(@pString)
        AND SUBSTRING(@pDelimiter + @pString, N, 1) = @pDelimiter
    ;
    GO

    Here's the code to build the "State" table and the CSV test table. Don't let the number of rows scare you... it only takes 26 seconds to run on my 8-year-old single 1.8GHz CPU desktop box.

    --=====================================================================================================================
    --      Setup the required State Lookup table (#PostalArea) and a test table that contains CSV values.
    --      As a side bar, don't ever store CSVs in a real table because it violates normalization rules.
    --=====================================================================================================================
    --===== Do this testing in a nice safe place that everyone has access to
        USE TempDB
    ;
    --===== Conditionally drop temp tables to make it easier to do reruns
         IF OBJECT_ID('TempDB..#PostalArea','U') IS NOT NULL
            DROP TABLE #PostalArea --Lookup table contains States and Territories
    ;
         IF OBJECT_ID('TempDB..#CsvTest','U') IS NOT NULL
            DROP TABLE #CsvTest --Contains the CSVs to be split
    ;
    --===== Create the Postal Area table and populate it.
     CREATE TABLE #PostalArea
            (
            Abbreviation CHAR(2) PRIMARY KEY CLUSTERED,
            LongName     VARCHAR(30)
            )
     INSERT INTO #PostalArea
            (LongName, Abbreviation)
     SELECT 'ALABAMA','AL' UNION ALL
     SELECT 'ALASKA','AK' UNION ALL
     SELECT 'AMERICAN SAMOA','AS' UNION ALL
     SELECT 'ARIZONA ','AZ' UNION ALL
     SELECT 'ARKANSAS','AR' UNION ALL
     SELECT 'CALIFORNIA ','CA' UNION ALL
     SELECT 'COLORADO ','CO' UNION ALL
     SELECT 'CONNECTICUT','CT' UNION ALL
     SELECT 'DELAWARE','DE' UNION ALL
     SELECT 'DISTRICT OF COLUMBIA','DC' UNION ALL
     SELECT 'FEDERATED STATES OF MICRONESIA','FM' UNION ALL
     SELECT 'FLORIDA','FL' UNION ALL
     SELECT 'GEORGIA','GA' UNION ALL
     SELECT 'GUAM ','GU' UNION ALL
     SELECT 'HAWAII','HI' UNION ALL
     SELECT 'IDAHO','ID' UNION ALL
     SELECT 'ILLINOIS','IL' UNION ALL
     SELECT 'INDIANA','IN' UNION ALL
     SELECT 'IOWA','IA' UNION ALL
     SELECT 'KANSAS','KS' UNION ALL
     SELECT 'KENTUCKY','KY' UNION ALL
     SELECT 'LOUISIANA','LA' UNION ALL
     SELECT 'MAINE','ME' UNION ALL
     SELECT 'MARSHALL ISLANDS','MH' UNION ALL
     SELECT 'MARYLAND','MD' UNION ALL
     SELECT 'MASSACHUSETTS','MA' UNION ALL
     SELECT 'MICHIGAN','MI' UNION ALL
     SELECT 'MINNESOTA','MN' UNION ALL
     SELECT 'MISSISSIPPI','MS' UNION ALL
     SELECT 'MISSOURI','MO' UNION ALL
     SELECT 'MONTANA','MT' UNION ALL
     SELECT 'NEBRASKA','NE' UNION ALL
     SELECT 'NEVADA','NV' UNION ALL
     SELECT 'NEW HAMPSHIRE','NH' UNION ALL
     SELECT 'NEW JERSEY','NJ' UNION ALL
     SELECT 'NEW MEXICO','NM' UNION ALL
     SELECT 'NEW YORK','NY' UNION ALL
     SELECT 'NORTH CAROLINA','NC' UNION ALL
     SELECT 'NORTH DAKOTA','ND' UNION ALL
     SELECT 'NORTHERN MARIANA ISLANDS','MP' UNION ALL
     SELECT 'OHIO','OH' UNION ALL
     SELECT 'OKLAHOMA','OK' UNION ALL
     SELECT 'OREGON','OR' UNION ALL
     SELECT 'PALAU','PW' UNION ALL
     SELECT 'PENNSYLVANIA','PA' UNION ALL
     SELECT 'PUERTO RICO','PR' UNION ALL
     SELECT 'RHODE ISLAND','RI' UNION ALL
     SELECT 'SOUTH CAROLINA','SC' UNION ALL
     SELECT 'SOUTH DAKOTA','SD' UNION ALL
     SELECT 'TENNESSEE','TN' UNION ALL
     SELECT 'TEXAS','TX' UNION ALL
     SELECT 'UTAH','UT' UNION ALL
     SELECT 'VERMONT','VT' UNION ALL
     SELECT 'VIRGIN ISLANDS','VI' UNION ALL
     SELECT 'VIRGINIA ','VA' UNION ALL
     SELECT 'WASHINGTON','WA' UNION ALL
     SELECT 'WEST VIRGINIA','WV' UNION ALL
     SELECT 'WISCONSIN','WI' UNION ALL
     SELECT 'WYOMING','WY'
    ;
    --===== This creates and populates a test table on the fly containing a
         -- sequential column and a randomly generated CSV Parameter column.
     SELECT TOP (100000) --Controls the number of rows in the test table
            ISNULL(ROW_NUMBER() OVER (ORDER BY(SELECT NULL)),0) AS RowNum,
            (
             SELECT CAST(STUFF( --=== STUFF gets rid of the leading comma
                    ( --=== This builds a CSV row with a leading comma
                     SELECT TOP (10) --Controls the number of CSV elements in each row
                            ','+Abbreviation
                       FROM #PostalArea pa
                      WHERE ac1.Object_ID <= 2147483647 --Without this, all rows would be the same
                      ORDER BY NEWID()
                        FOR XML PATH('')
                    )
                    ,1,1,'') AS VARCHAR(8000))
            ) AS CsvParameter
       INTO #CsvTest
       FROM Master.sys.All_Columns ac1       --Classic cross join pseudo-cursor
      CROSS JOIN Master.sys.All_Columns ac2  --can produce row sets of up to 16 million
    ;
    --===== Wound a couple of rows with some bad data
     UPDATE #CsvTest
        SET CsvParameter = REPLACE(CsvParameter,'MH','XX')
      WHERE CHARINDEX('MH',CsvParameter) > 0
    ;
    --===== Show the first five rows from each table as a sanity check
     SELECT TOP (5) *
       FROM #PostalArea
    ;
     SELECT TOP (5) *
       FROM #CsvTest
    ;

    Here's the code being tested. Again, read the comments.

    --=====================================================================================================================
    --      And now, with profiler turned on, let's do some testing!
    --=====================================================================================================================
    GO
    --===== Conditionally drop temp tables to make it easier to do reruns
         IF OBJECT_ID('TempDB..#CsvSplit1','U') IS NOT NULL
            DROP TABLE #CsvSplit1 --Contains final split data from article method
    ;
         IF OBJECT_ID('TempDB..#CsvSplit2','U') IS NOT NULL
            DROP TABLE #CsvSplit2 --Contains final split data from CROSS APPLY splitter
    ;
    GO
    --===== Method from the article modified to do a whole table as if there were many hits on a stored procedure.
     SELECT csv.RowNum, area.Abbreviation
       INTO #CsvSplit1
       FROM #PostalArea area
       JOIN #CsvTest csv
         ON CHARINDEX(area.Abbreviation, csv.CsvParameter) > 0
    ;
    GO
    --===== The "split" method modified to do a whole table as if there were many hits on a stored procedure.
         -- Notice that this method not only produces the correct answer, but also numbers the elements for
         -- each "input row" AND validates ALL the inputs IN THE SAME TIME AS THE METHOD FROM THE ARTICLE.
    WITH
    cteValidate AS
    (
     SELECT csv.RowNum, split.ItemNumber, split.Item AS Abbreviation
       FROM #CsvTest csv
      CROSS APPLY
            (
             SELECT ItemNumber, Item
               FROM dbo.DelimitedSplit8k(csv.CsvParameter,',')
            ) split
    )
     SELECT checked.RowNum,
            checked.ItemNumber,
            checked.Abbreviation,
            3-ISNULL(LEN(area.Abbreviation),3) AS IsValid --1 if the LEFT JOIN found a match (LEN = 2), 0 if it didn't
       INTO #CsvSplit2
       FROM cteValidate checked
       LEFT OUTER JOIN #PostalArea area
         ON checked.Abbreviation = area.Abbreviation
    ;
    GO

    And here's the profiler trace for a couple of runs. Keep in mind that the method from the article isn't doing any validation or enumeration whereas the splitter method is doing both of those.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Ah... I almost forgot... if you don't really want to do any validation against the code... in other words, you just want to do the split, check out the following code...

     SELECT csv.RowNum, split.ItemNumber, split.Item AS Abbreviation
       INTO #CsvSplit3
       FROM #CsvTest csv
      CROSS APPLY
            (
             SELECT ItemNumber, Item
               FROM dbo.DelimitedSplit8k(csv.CsvParameter,',')
            ) split

    That only takes 9 seconds or so to split out the million elements. When working with splits of 8K or less, the splitter method rivals CLRs. A properly written CLR splitter will still beat the Tally Splitter, but not by enough to make it worthwhile for me to have separately compiled code on my box.

    I'll go back through the thread and find one of those XML splitter solutions to test next.
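
    For reference, the XML splitters that turn up in these threads usually look roughly like the sketch below (a generic illustration against the same #CsvTest table, not necessarily the exact version that will be tested):

    -- Generic XML-based splitter sketch: turn the CSV into an XML fragment, then shred it
    -- with .nodes(). Assumes the CSV contains nothing that needs XML escaping (true for the
    -- 2-character codes used here).
     SELECT csv.RowNum,
            x.Item.value('.', 'VARCHAR(8000)') AS Item
       FROM #CsvTest csv
      CROSS APPLY (SELECT CAST('<i>' + REPLACE(csv.CsvParameter, ',', '</i><i>') + '</i>' AS XML) AS ItemList) conv
      CROSS APPLY conv.ItemList.nodes('/i') AS x(Item)
    ;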

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Please use this approach to avoid looping.

    --Please pass through Query string
    Declare @sParameter varchar(max)
    Begin
        Set @sParameter = 'select ''WA'' union all select ''AZ'' union all select ''CA'' union all select ''TS'''
        Declare @tStates table ( state char(2) )

        Insert @tStates
        exec(@sParameter)

        Select *
        From @tStates
    End

  • The limitation of the process discussed here is that we need to know the exact list of valid parameters and need to have them in a table. This may be a foolproof approach, but it is quite limited.

    In actual scenarios we get comma-separated values which are generally user inputs, often from free-flow text boxes. If the user control is a dropdown/listbox, the approach explained here can be implemented effectively.
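
    To tie that back to the earlier posts: even with free-flow text boxes, the split-then-validate pattern still applies. A rough sketch (assuming the dbo.DelimitedSplit8K function posted above and a permanent dbo.PostalArea lookup table, both of which are assumptions here):

    -- Split the free-text parameter, then flag anything that isn't in the lookup table.
    DECLARE @sStates VARCHAR(8000)
    SET @sStates = 'WA, AZ, XX'   -- "XX" simulates a bad user entry

     SELECT s.ItemNumber,
            LTRIM(RTRIM(s.Item)) AS Abbreviation,
            CASE WHEN pa.Abbreviation IS NULL THEN 0 ELSE 1 END AS IsValid
       FROM dbo.DelimitedSplit8K(@sStates, ',') s
       LEFT JOIN dbo.PostalArea pa
         ON pa.Abbreviation = LTRIM(RTRIM(s.Item))
    ;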

  • This was an interesting article with an even more interesting discussion.

    The thing I'd like to pick up on, because it drew my attention, is your reason for moving away from the loop method. You commented that you found there was too much coding for so little functionality.

    The interesting thing, though, is that the loop method performs as much as 5 times faster than the alternative method you provided. This becomes apparent when you consider 4000 different states.

    Now I come across this a lot. I'll look at a piece of code that a developer has produced and I'll wonder why they've adopted a certain approach - very often the answer is "it's a neater solution". My approach is, "sure, but is it a fast solution"?

    Notwithstanding the solutions others have already mentioned that improve on the loop method, and notwithstanding that your intention was probably never to deal with hundreds or thousands of states, in my book, if a procedure has 5 more lines of code but gives me a performance gain by a factor of 5, then it's worth keeping those extra lines. Heck, for 5 times the performance, give me 100 extra lines of code ;).

  • Jeff Moden (8/12/2010)

    Someone already said it... I'd love to read that article. Would you post the URL on this thread so we can take a peek? Thanks, Joe.

    Maybe this is the article he was referring to:

    http://www.simple-talk.com/sql/learn-sql-server/values%28%29-and-long-parameter-lists/
