Parsing Parameters in a Stored Procedure

  • Patibandla (8/11/2010)


    Really a very nice article. But in the case of tables containing large amounts of data, I think it's not good practice to use CharIndex; instead you can simply use a table-valued function that returns the desired result set:

    CREATE FUNCTION [dbo].[udf_GetStringToTable] -- e.g. 'JJHJ,JJJJJJJJJ', ','
    (
        @StringData VARCHAR(MAX)
        , @Delineator VARCHAR(1)
    )
    RETURNS @ResultTable TABLE
    (
        ColumnID VARCHAR(MAX)
    )
    AS
    BEGIN
        DECLARE @ColumnID VARCHAR(MAX)
            , @Pos INT

        SET @StringData = LTRIM(RTRIM(@StringData)) + @Delineator
        SET @Pos = CHARINDEX(@Delineator, @StringData, 1)

        IF REPLACE(@StringData, @Delineator, '') <> ''
        BEGIN
            WHILE @Pos > 0
            BEGIN
                SET @ColumnID = LTRIM(RTRIM(LEFT(@StringData, @Pos - 1)))
                IF @ColumnID <> ''
                BEGIN
                    INSERT INTO @ResultTable (ColumnID)
                    VALUES (@ColumnID)
                END
                SET @StringData = RIGHT(@StringData, LEN(@StringData) - @Pos)
                SET @Pos = CHARINDEX(@Delineator, @StringData, 1)
            END
        END

        RETURN
    END
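    A quick usage sketch (assuming the function above has already been created in your database) shows what a call looks like and what comes back:

    ```sql
    -- Split a comma-separated list into one row per value
    SELECT ColumnID
    FROM dbo.udf_GetStringToTable('AZ, CA, MT', ',')
    -- Each item is trimmed inside the function, so this yields
    -- three rows: 'AZ', 'CA', 'MT'
    ```

    Inside a stored procedure you would typically join this result set against your data table rather than selecting it directly.
    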

    I agree, but the key point of my article was precisely how to avoid the procedural loop. Other posts show similar solutions, as I note at the beginning of the article, but I tried to emphasize (perhaps unfortunately) the compactness of a pure SQL solution, putting aside performance considerations for large tables. (I am anxiously awaiting the performance tests Jeff promised.)

  • dbuendiab (8/10/2010)


    Hi, Jay:

    You're right, I didn't remember OPENXML. Still, I would not switch to XML-formed parameter passing for several reasons, the main one being the way users like to get their information. They feel comfortable with a comma-separated argument.

    Oh, I would never ask for XML from a user. I'm not even hung up on using XML. I was just saying that XML processing is there in SS2000.

  • Is no one able to find the massive thread? Link, please!

  • Jay Hopping (8/11/2010)


    dbuendiab (8/10/2010)


    Hi, Jay:

    You're right, I didn't remember OPENXML. Still, I would not switch to XML-formed parameter passing for several reasons, the main one being the way users like to get their information. They feel comfortable with a comma-separated argument.

    Oh, I would never ask for XML from a user. I'm not even hung up on using XML. I was just saying that XML processing is there in SS2000.

    That's the point, Jay: you may have intermediate users who don't mind writing a SQL command in MSQuery like

    Exec dbo.General_SP 'param1,param2,param3'

    but who won't go so far as to develop a UI to produce well-formed XML.

  • When I am developing a stored procedure for reporting purposes, something that is driven by one or more sets of parameters, the parameters are typically contained in a table. I'll have a table called something like [batch] that relates parameters contained in [batch_reports] and [batch_clients]. What gets passed to the stored procedure is just [batch_id], and the runtime parameters are keyed off of that. Each time a batch is executed, a record is inserted into [batch_extract], so I know exactly when the batch of reports was run and what parameters were used. Even for ad-hoc reports where the user specifies one-off parameters, I'll store everything in the same table structure just as if it were a canned batch, except it gets executed once instead of being scheduled.
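    To make that layout concrete, here is a minimal sketch of the tables described above; everything beyond the names [batch], [batch_reports], [batch_clients], and [batch_extract] (columns, the procedure name) is an illustrative assumption, not the poster's actual schema:

    ```sql
    -- Hypothetical sketch of the batch-driven parameter layout described above
    CREATE TABLE batch         ( batch_id INT PRIMARY KEY );
    CREATE TABLE batch_reports ( batch_id INT, report_id INT );      -- report parameters per batch
    CREATE TABLE batch_clients ( batch_id INT, client_id INT );      -- client parameters per batch
    CREATE TABLE batch_extract ( batch_id INT, run_date DATETIME );  -- one row per execution

    -- The stored procedure receives only the key and looks up its own parameters:
    -- EXEC dbo.RunReportBatch @batch_id = 42   -- procedure name is illustrative
    ```

    The appeal of this design is that a single scalar key replaces any delimited-string parsing, and every run is logged with the exact parameters used.
    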

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Oh, I was never pushing XML so much as saying it is possible in 2K. The only point I made about the article was that you could possibly check for bad data with the following:

    If @@RowCount <> (LEN(REPLACE('as,df,sa,as,', ',', '')) / 2)
    Begin
        ...
    End
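    To unpack the arithmetic in that check (it assumes every item is exactly two characters): removing the commas from 'as,df,sa,as,' leaves 'asdfsaas', eight characters, so LEN/2 gives an expected count of four items to compare against @@RowCount. A hedged sketch of the idea, with illustrative error handling filled in for the elided body:

    ```sql
    -- 'as,df,sa,as,' minus commas -> 'asdfsaas' (LEN = 8), 8 / 2 = 4 expected items.
    -- If the INSERT that just ran matched fewer rows, something in the list was invalid.
    If @@RowCount <> (LEN(REPLACE('as,df,sa,as,', ',', '')) / 2)
    Begin
        RAISERROR('One or more parameter values are not valid.', 16, 1)  -- illustrative handling
    End
    ```

    As Jay notes later in the thread, this only holds when all items have the same fixed length.
    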

  • I copied the code from your post and tried to run to better understand your logic.

    However, I was stuck in the first piece (the loop version).

    It seems you did not declare @tStates, and I am not sure what variable type it should be...

    Could you kindly help?

  • Jeff Moden (8/11/2010)


    My apologies... I got out of work quite late tonight and only had time for some verbal replies. I'll try to get to the coded replies which will include some speed tests for a lot of the methods included in this thread including the one from the article. As Pappy says, "One test is worth a thousand expert opinions".

    Thanks for waiting.

    I don't know if I'm whistling in the wind, beating a dead horse or what -- but I'll say it again: Erland Sommarskog has a comprehensive coverage of this topic at: http://www.sommarskog.se/arrays-in-sql-perftest.html. Including performance, test data, and literally a DOZEN different ways of doing it.

    On the other hand, it never hurts to test something yourself...

  • Jay Hopping (8/11/2010)


    Oh, I was never pushing XML so much as saying it is possible in 2K. The only point I made about the article was that you could possibly check for bad data with the following:

    If @@RowCount <> (LEN(REPLACE('as,df,sa,as,', ',', '')) / 2)
    Begin
        ...
    End

    If your only worry is data length, it seems OK. But 'XX' would pass as well. I studied the validation question from a (I hope) more general perspective in a previous post:

    http://www.sqlservercentral.com/Forums/FindPost967298.aspx

  • Eric Russell 13013 (8/11/2010)


    When I am developing a stored procedure for reporting purposes, something that is driven by one or more sets of parameters, the parameters are typically contained in a table. I'll have a table called something like [batch] that relates parameters contained in [batch_reports] and [batch_clients]. What gets passed to the stored procedure is just [batch_id], and the runtime parameters are keyed off of that. Each time a batch is executed, a record is inserted into [batch_extract], so I know exactly when the batch of reports was run and what parameters were used. Even for ad-hoc reports where the user specifies one-off parameters, I'll store everything in the same table structure just as if it were a canned batch, except it gets executed once instead of being scheduled.

    If I have understood correctly, Eric, that solution predefines a collection of settings. It seems a reasonable perspective, but in my case I had no way to know a priori what kinds of combinations users would need.

    I used a similar technique to capture the combinations of parameters most often required by users, with the idea of generating cache tables daily.

  • Christine Sun (8/11/2010)


    I copied the code from your post and tried to run to better understand your logic.

    However, I was stuck in the first piece (the loop version).

    It seems you did not declare @tStates, which I am not sure why variable type it should be...

    Could you kindly help?

    It's supposed to be the table where you'll get the split parameters:

    Declare @tStates table ( state char(2) )

    I guess I meant to copy and paste the declaration from the non-looped sample, but it seems it was rather a cut and paste. Sorry.

  • dbuendiab (8/11/2010)


    Jay Hopping (8/11/2010)


    Oh, I was never pushing XML so much as saying it is possible in 2K. The only point I made about the article was that you could possibly check for bad data with the following:

    If @@RowCount <> (LEN(REPLACE('as,df,sa,as,', ',', '')) / 2)
    Begin
        ...
    End

    If your only worry is data length, it seems OK. But 'XX' would pass as well. I studied the validation question from a (I hope) more general perspective in a previous post:

    http://www.sqlservercentral.com/Forums/FindPost967298.aspx

    Yes, that previous post does identify the wrong entries, which mine doesn't. Though I don't believe you understood my snippet, so here's some code from the article with my change to detect bad entries and your change to identify the bad entries.

    So... if I were stranded on a deserted island somewhere and couldn't get to sqlservercentral.com to find out what was better (looping, CharIndex, CLR, tally, XML, etc.), I would prefer the following, since it identifies bad entries even when good entries are submitted.

    -- Imagine this table exists previously in your database ----
    Declare @T_STATES table ( state char(2) )
    Insert @T_STATES Values ( 'AZ' )
    Insert @T_STATES Values ( 'CA' )
    Insert @T_STATES Values ( 'MT' )
    Insert @T_STATES Values ( 'TX' )
    Insert @T_STATES Values ( 'WA' )
    Insert @T_STATES Values ( 'TS' )

    Declare @sParameter varchar(8000)
    Set @sParameter = 'AZ,MT,YY,ZZ'

    Declare @tStates table ( state char(2) )
    Insert @tStates
    Select state
    From @T_STATES
    Where CharIndex( state, @sParameter ) > 0

    If @@RowCount <> (LEN(REPLACE(@sParameter, ',', '')) / 2)
    BEGIN
        Select @sParameter = Replace( @sParameter, state + ',', '' )
        From @T_STATES
        --Order By orderby
        Select @sParameter + ' are not valid values. Valid values are:' As InfoMessage
        Union All
        Select state
        From @T_STATES
        --Return
    END

    SELECT 'my report', * FROM @tStates AS ts

    edit 1: fixed copy/paste error where a string was getting parsed instead of the variable @sParameter.

  • I have published updates to the article by the author.

  • after further thought, I don't like my suggestion because it doesn't work with differing lengths of items.

  • Jay Hopping (8/11/2010)


    after further thought, I don't like my suggestion because it doesn't work with differing lengths of items.

    I was about to tell you. It operates only on fixed-length parameters.

Viewing 15 posts - 61 through 75 (of 124 total)
