The Zero to N Parameter Problem (Sql Server 2005 and Up -- Update)

  • Comments posted to this topic are about the item The Zero to N Parameter Problem (Sql Server 2005 and Up -- Update)

  • Interesting idea to use XML as a means of passing the parameters; it could certainly be very practical.

    I am interested in the blanket statement that 'dynamic SQL is a bad idea'. Yes, if handled improperly there is the possibility of a SQL injection attack. However, this is mitigated with the use of a parameter list. The biggest obstacle I have found depends on the number of applicable predicate conditions: if the list is short, say ten or fewer conditions, then code maintenance isn't that bad. As that list grows, I will admit that it becomes a hassle to maintain.
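    For anyone who hasn't seen that mitigation, here's a minimal sketch (illustrative, not from the article) against Northwind's Orders table: the statement text stays fixed, and the value only ever travels as a parameter through sp_executesql.

    -- Illustrative sketch: a fixed statement executed through
    -- sp_executesql with a parameter list.
    DECLARE @sql NVARCHAR(MAX);
    SET @sql = N'SELECT OrderID, OrderDate
                 FROM dbo.Orders
                 WHERE CustomerID = @CustomerID;';

    -- The value travels as a parameter, never as concatenated text,
    -- so it cannot rewrite the statement.
    EXEC sys.sp_executesql
        @sql,
        N'@CustomerID NCHAR(5)',
        @CustomerID = N'VINET';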

    Erland Sommarskog provides probably the best documentation on how to handle dynamic search conditions. He provides multiple solutions, evaluating the pros and cons of each:

    http://www.sommarskog.se/dyn-search.html

    You can then choose the version of the document that matches the version of SQL Server that fits your situation.

  • My comment "dynamic SQL isn't a good idea" is directly related to injection attacks, as you state.

    What I like about this solution is that everything (client side with a strongly typed dataset, and this procedure on the server side) is "strongly typed", and there isn't a lot of string concatenation.

    Client side, I would write something like

    ParameterDS ds = new ParameterDS();
    ParameterDS.ScalarRow srow = ds.Scalar.NewScalarRow();
    srow.FromDate = DateTime.Now;
    srow.ToDate = DateTime.Now;
    ds.Scalar.AddScalarRow(srow);

    something like that.

    So I get strong typing.

    It's not the only way to skin a cat, but it's my preferred and still serves me well after 12 or so years.

    Thanks for the feedback.

  • Note, this article is an update from a previous (pre-Sql Server 2005) version, here:

    http://www.sqlservercentral.com/articles/Stored+Procedures/thezerotonparameterproblem/2283/

  • This is very nice, but there's a better solution for the same issue:

    http://www.madeira.co.il/fully-dynamic-search-conditions-vs-sql-injection

  • Thanks for the article,

    Consider using strongly typed XML and discarding fuzzy checks like DATALENGTH > 10.

    So your SP's input parameter might be something like @parameters XML(DOCUMENT dbo.MyXSD):

    http://technet.microsoft.com/en-us/library/ms184277.aspx
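    A sketch of what that might look like; dbo.MyXSD and the Parameters document below are invented for the example:

    -- Invented names: a trivial schema collection. A real one would
    -- describe the full parameter document.
    CREATE XML SCHEMA COLLECTION dbo.MyXSD AS N'
    <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
      <xs:element name="Parameters">
        <xs:complexType>
          <xs:sequence>
            <xs:element name="FromDate" type="xs:dateTime" />
            <xs:element name="ToDate"   type="xs:dateTime" />
          </xs:sequence>
        </xs:complexType>
      </xs:element>
    </xs:schema>';
    GO

    -- DOCUMENT rejects fragments and anything that fails validation at the
    -- parameter boundary, so no DATALENGTH-style sanity checks are needed.
    CREATE PROCEDURE dbo.MySearch
        @parameters XML(DOCUMENT dbo.MyXSD)
    AS
    SELECT FromDate = @parameters.value('(/Parameters/FromDate)[1]', 'datetime'),
           ToDate   = @parameters.value('(/Parameters/ToDate)[1]', 'datetime');
    GO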

    You may also consider user-defined table type (UDTT) parameters, i.e. table-valued parameters (TVPs), if your calling code is already manipulating datasets.

    http://technet.microsoft.com/en-us/library/bb510489(v=sql.105).aspx
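    And a sketch of the TVP route, again with invented names; note that TVPs require SQL Server 2008 or later:

    CREATE TYPE dbo.OrderIdList AS TABLE (OrderID INT PRIMARY KEY);
    GO

    CREATE PROCEDURE dbo.GetOrders
        @OrderIds dbo.OrderIdList READONLY   -- TVPs must be declared READONLY
    AS
    SELECT o.OrderID, o.OrderDate
    FROM dbo.Orders AS o
    JOIN @OrderIds AS ids
      ON ids.OrderID = o.OrderID;
    GO

    -- T-SQL usage; from ADO.NET, a DataTable can be passed straight in.
    DECLARE @ids dbo.OrderIdList;
    INSERT INTO @ids (OrderID) VALUES (10248), (10249), (10250);
    EXEC dbo.GetOrders @OrderIds = @ids;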

  • I always get a warm and fuzzy feeling when I see XML and T-SQL, so kudos for your solution. Since you are using XML, you can also pass in multi-hierarchy documents and/or document fragments, which can give you more power. Please note the table-valued parameter comment someone made; even the XSD idea can probably be perceived as a superior alternative. I wonder, though: does the sproc scale, given that the same parameters will not always come in? I ask because I recently came across a situation where a sproc had overloaded input parameters (not every call set all of them), and this caused the execution plan to be somewhat less than optimal.

    Good article and thanks for sharing!

  • Not a bad tool to have when flexibility trumps all other design objectives. Thank you for taking the time to contribute. 🙂

  • I appreciate every article because it is an exchange of ideas and it does take an author time to put together. I would like to suggest a couple of improvements, though. Some of the suggestions have already been mentioned but I've included my thoughts on them in the following for completeness. I hope you take them as helpful rather than any form of personal attack.

    The NorthWind database is an SQL Server 2000 sample database. Since the title of your article clearly states "Sql {sic}Server 2005 and Up --Update", I strongly recommend that you modify your examples to work with the newer sample databases available as of SQL Server 2005.

    In that same vein, I also recommend that you update the comments in the code because "Query Analyzer" does not exist in "SQL Server 2005 and Up".

    The code is set up to shred some XML, but the parameters in the example usage of the stored procedure are not in XML form in the article. That may be more the fault of the tool used to submit the article, but a posted correction on your part would be mighty handy, because those examples would show the necessary form of the parameters (example usage) for the stored procedure to work.

    As someone else mentioned, I strongly oppose making blanket statements such as the following from the article.

    A comma delimited list just does not seem to cut it anymore, and dynamic SQL isn't a good idea.

    If you want to know just how good an idea, and just how injection-proof, dynamic SQL can actually be, please see the following article.

    http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/
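    Roughly, the pattern that article teaches, sketched here (illustrative code, not from the article) against Northwind's Orders table: only the list of predicates is assembled dynamically; every value stays parameterized.

    CREATE PROCEDURE dbo.SearchOrders
        @CustomerID NCHAR(5) = NULL,
        @FromDate   DATETIME = NULL,
        @ToDate     DATETIME = NULL
    AS
    DECLARE @sql NVARCHAR(MAX);
    SET @sql = N'SELECT OrderID, CustomerID, OrderDate
                 FROM dbo.Orders
                 WHERE 1 = 1';

    -- Each supplied parameter adds a predicate to the text, but the values
    -- themselves are never concatenated into the statement.
    IF @CustomerID IS NOT NULL SET @sql = @sql + N' AND CustomerID = @CustomerID';
    IF @FromDate   IS NOT NULL SET @sql = @sql + N' AND OrderDate >= @FromDate';
    IF @ToDate     IS NOT NULL SET @sql = @sql + N' AND OrderDate <  @ToDate';

    EXEC sys.sp_executesql
        @sql,
        N'@CustomerID NCHAR(5), @FromDate DATETIME, @ToDate DATETIME',
        @CustomerID, @FromDate, @ToDate;
    GO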

    Then, to follow up such blanket statements with the following...

    This method is not a "cure all" method. There is a performance penalty for using this method. If you have a procedure where performance is the main goal, and you have a lot of records to go through, this solution may not be the answer.

    ...{snip}...

    Good luck. The method is useful at times. Please be aware of possible scalability issues, since there is a performance hit by making the procedure so flexible.

    ...{snip}...

    Please heed my warning about scalability, and do not consider this a cure-all.

    ... seems to indicate that maybe the methods you summarily dismissed in a blanket fashion might not be so bad after all. You also didn't mention that passing table-valued parameters is a highly effective method that avoids the pain of shredding XML, nor what else one might use if the code does, in fact, produce scalability issues.

    The following statement is also a bit misleading to the uninitiated who may be looking for a solution to the "0 to N" parameter problem...

    ...or said another way, I want to specify N parameters, where N is 1 through infinity.

    Although the XML datatype can certainly accommodate a rather large number of parameters, it's limited to 2 GB, so "infinity" isn't actually possible. Although I thought the related application was poorly designed, I have had to accommodate the passing of more than 2 GB of parameters from an application, and a single XML variable won't cut it despite the claim of "infinity". Further, passing a table-valued parameter might, in fact, allow one to exceed that 2 GB limit (as crazy as that may be).

    You also wrote…

    I learned this method from a previous colleague. The example is an original one coded against the Northwind database.

    To add some additional credibility to your article, I recommend that you review the code and apply some common best practices, such as using the two-part naming convention, using table aliases on all column references where more than one table is referenced by a statement, using consistent casing, avoiding multiple DECLARE statements on one line, and maybe even taking the time to reduce the line length to avoid having to scroll right to see the end of a line and to make reading easier.

    You might also want to "modernize" some of the techniques. For example, doing a SELECT COUNT(*) to figure out the row counts of the table variables that you just INSERTed into could be replaced by the much more efficient SELECT @CounterVariable = @@ROWCOUNT.
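    A minimal sketch of that substitution (hypothetical names):

    DECLARE @Params TABLE (Name SYSNAME, Value NVARCHAR(100));
    DECLARE @CounterVariable INT;

    INSERT INTO @Params (Name, Value)
    SELECT N'FromDate', N'2013-12-01' UNION ALL
    SELECT N'ToDate',   N'2013-12-16';

    -- The old style re-scans the table variable:
    --   SELECT @CounterVariable = COUNT(*) FROM @Params;
    -- @@ROWCOUNT simply reports what the INSERT just did:
    SELECT @CounterVariable = @@ROWCOUNT;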

    And consider getting rid of all the empty "/* */" in the code. At best, they're annoying and the proper use of white space would make the code more pleasurable to read.

    This article could be a fantastic article but it needs some tweaking.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • gary.strange-sqlconsumer (12/16/2013)


    Thanks for the article,

    Consider using strongly typed XML and discarding fuzzy checks like DATALENGTH > 10.

    So your SP's input parameter might be something like @parameters XML(DOCUMENT dbo.MyXSD):

    http://technet.microsoft.com/en-us/library/ms184277.aspx

    You may also consider user-defined table type (UDTT) parameters, i.e. table-valued parameters (TVPs), if your calling code is already manipulating datasets.

    http://technet.microsoft.com/en-us/library/bb510489(v=sql.105).aspx

    If the parameters are more often different than the same, then consider the OPTION (RECOMPILE) hint.
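    A sketch of where the hint goes (illustrative query against Northwind; the behavior of embedding the actual variable values requires SQL Server 2008 SP1 or later):

    DECLARE @CustomerID NCHAR(5);
    DECLARE @FromDate   DATETIME;
    SET @CustomerID = N'VINET';   -- @FromDate deliberately left NULL

    -- With RECOMPILE, the optimizer plans for the actual values of this call,
    -- so branches that are dead for this call are pruned from the plan,
    -- at the price of a compile on every execution.
    SELECT o.OrderID, o.CustomerID, o.OrderDate
    FROM dbo.Orders AS o
    WHERE (@CustomerID IS NULL OR o.CustomerID = @CustomerID)
      AND (@FromDate   IS NULL OR o.OrderDate >= @FromDate)
    OPTION (RECOMPILE);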

  • In rather stark contrast to what other folks have suggested about the use of XML to pass parameters, I usually wince at the thought. Because of "Tag Bloat", I concern myself with the extra I/O on all the systems and "pipes" from the end user to the database, especially on systems that have a high hit rate. Because most people forget about it, I also concern myself with the need to de-entitize XML data for special characters and the fact that it takes some bit of extra resources to accomplish.

    Don't get me wrong: I think that CSV parameters are horrible, even when proper CSV formatting is used instead of the formatting travesty that CSV has come to mean. But if one compares methods using typical "simple" parameters, which constitute much of the parameter traffic, I'd rather use CSV because it causes much less traffic on the I/O systems across the board.
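    For completeness, a sketch of consuming a CSV parameter server-side. STRING_SPLIT is SQL Server 2016 and up; on the versions discussed in this thread you'd substitute a user-written splitter such as DelimitedSplit8K:

    DECLARE @OrderIds VARCHAR(8000);
    SET @OrderIds = '10248,10249,10250';

    -- Split the list and join the resulting integers to the target table.
    SELECT o.OrderID, o.OrderDate
    FROM STRING_SPLIT(@OrderIds, ',') AS s
    JOIN dbo.Orders AS o
      ON o.OrderID = CAST(s.value AS INT);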

    In case you're wondering, I'd like to see a return of the very sleek ASCII control characters for passing parameters, but no one will go there, for multiple reasons: humans can't figure out what the "little square boxes" in Notepad mean, and it's not likely that software providers will ever make such control-character usage a standard, because humans don't like to see things such as FS, GS, RS, or US in their transmission text. And, yes, passing column metadata would be just as easy.

    --Jeff Moden

  • I think this use case is better managed using client-side dynamic SQL generation. I'm not talking about concatenating strings to build the SQL statement; I'm referring to using ORMs such as EntityLite (https://github.com/jesuslpm/EntityLite).

    For example:

    var orderIds = new List<int> { 10248, 10248, 10250 };
    var orders = dataService.OrderRepository.CreateQuery(Projection.BaseTable)
        .Where(OrderFields.OrderId, OperatorLite.In, orderIds)
        .ToList();

    Advantages:

    * No XML parsing

    * No table variables

    * Efficiency

    * Easier

    Limitations:

    * Max number of IDs = max number of allowed parameters.

    I think stored procedures are better suited for set-based modifications, but not for performing dynamic queries.

  • jesuslpm (12/16/2013)


    I think this use case is better managed using client-side dynamic SQL generation. I'm not talking about concatenating strings to build the SQL statement; I'm referring to using ORMs such as EntityLite (https://github.com/jesuslpm/EntityLite).

    For example:

    var orderIds = new List<int> { 10248, 10248, 10250 };
    var orders = dataService.OrderRepository.CreateQuery(Projection.BaseTable)
        .Where(OrderFields.OrderId, OperatorLite.In, orderIds)
        .ToList();

    Advantages:

    * No XML parsing

    * No table variables

    * Efficiency

    * Easier

    Limitations:

    * Max number of IDs = max number of allowed parameters.

    I think stored procedures are better suited for set-based modifications, but not for performing dynamic queries.

    My experiences with the code that some ORMs produce (mostly Linq2Sql... the .NET Framework seems to be OK) have been less than pleasant. They tend to lean toward using NVARCHAR constants for character-based data instead of the datatype (VARCHAR) of the table column, and that means, of course, index scans at best because of the datatype mismatch and datatype precedence.
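    A sketch of that mismatch with a hypothetical table; NVARCHAR outranks VARCHAR in datatype precedence, so it's the column, not the constant, that gets converted:

    CREATE TABLE dbo.Customers
    (
        CustomerCode VARCHAR(10) NOT NULL PRIMARY KEY
    );

    -- N'...' is NVARCHAR, so the VARCHAR column gets implicitly converted on
    -- comparison; with SQL collations in particular, that prevents a direct
    -- index seek:
    SELECT CustomerCode FROM dbo.Customers WHERE CustomerCode = N'ALFKI';

    -- A matching VARCHAR constant allows a straight index seek:
    SELECT CustomerCode FROM dbo.Customers WHERE CustomerCode = 'ALFKI';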

    I've also found that the code they produce can be horribly bulky and inefficient even for typical C.R.U.D. They will many times include weird subqueries that return way too many columns and produce totally unnecessary self-joins as a result.

    Of course, finding those performance and resource-usage nightmares is a wee bit more difficult than finding performance-challenged stored procedures, and fixing them requires redeployment of front-end code.

    Constructing well-written, injection-proof dynamic SQL for things like "catch all" queries just isn't that much more difficult, and it makes it easy to tune the queries as the scale of a database increases.

    --Jeff Moden

  • >> My experiences with the code that some ORMs produce (mostly Linq2Sql... the .NET Framework seems to be OK) have been less than pleasant. They tend to lean toward using NVARCHAR constants for character-based data instead of the datatype (VARCHAR) of the table column, and that means, of course, index scans at best because of the datatype mismatch and datatype precedence. <<

    EntityLite uses the same type for the parameter as the column: if the column is VARCHAR, the parameter is VARCHAR.

    >> I've also found that the code they produce can be horribly bulky and inefficient even for typical C.R.U.D. They will many times include weird subqueries that return way too many columns and produce totally unnecessary self-joins as a result. <<

    EntityLite produces very efficient, controlled, and predictable SQL code. EntityLite is different from other ORMs: instead of being object-oriented, EntityLite embraces the relational model.

    >> Constructing well-written, injection-proof dynamic SQL for things like "catch all" queries just isn't that much more difficult, and it makes it easy to tune the queries as the scale of a database increases. <<

    Server-side dynamic SQL is very difficult to maintain and tune. And using an ORM helps you prevent SQL injection attacks.

    http://www.codeproject.com/Articles/753796/i-nercya-EntityLite-A-Lightweight-Database-First-M
