How to Build Dynamic Stored Procedures

  • Actually, both are part of formatting the output (when printing the SQL) and do not affect the actual executed dynamic SQL.

    However, I usually do this a little differently. First of all, I use CRLF and not just LF: CHAR(13) + CHAR(10)

    Also, I assign these to a variable early in the procedure to prevent mistakes and from having to type them repeatedly, as shown here:

    DECLARE @CRLF char(2), @TAB char(1)
    SET @CRLF = CHAR(13) + CHAR(10)
    SET @TAB = CHAR(9)

    Now you not only have nicer, self-documenting variables; if you mistype a variable name, the compiler catches it, whereas a mistyped CHAR value (9, 10, or 13) won't be caught at compile time.

    --Peter
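    A minimal sketch of how these variables might be used when building a statement (table and column names are hypothetical); the formatting only changes how the printed SQL looks, not what executes:

```sql
DECLARE @sql varchar(8000), @CRLF char(2), @TAB char(1)
SET @CRLF = CHAR(13) + CHAR(10)
SET @TAB = CHAR(9)

-- Assemble a readable statement from the named whitespace variables
SET @sql = 'SELECT fld1,' + @CRLF
         + @TAB + 'fld2' + @CRLF
         + 'FROM tbl1'

PRINT @sql   -- inspect the nicely formatted statement
EXEC (@sql)  -- execute it
```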

  • Yes, input validation. You trap for the known mechanisms of SQL injection. For instance, look for single quotes coming in and double them. Watch for the --. Most of all, validate the variable type where possible: if you know the field is supposed to be an integer, test it. Things of that sort. Basically, the same sort of validation tests you code into a web app can go into UDFs that test the parameters coming in.

    K. Brian Kelley
    @kbriankelley
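    A hedged sketch of the kind of checks described above; the function name dbo.fn_SafeString is made up for illustration:

```sql
-- Reject comment markers and double any embedded single quotes
CREATE FUNCTION dbo.fn_SafeString (@in varchar(255))
RETURNS varchar(510)
AS
BEGIN
    IF CHARINDEX('--', @in) > 0
        RETURN NULL                      -- suspicious input: reject it
    RETURN REPLACE(@in, '''', '''''')    -- double single quotes
END
```

    For the type check, a simple test before concatenating suffices, e.g. IF ISNUMERIC(@param) = 0 RAISERROR('Invalid parameter', 16, 1).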

  • See also:

    http://jtds.sourceforge.net/apiCursors.html

  • I like the formatting hints in your article. I would have saved some trial and error time if I'd read this a few years ago.

    Personally, I've used dynamic SQL everywhere, due to the unique requirements of supporting a vendor's db (each client in its own self-similar database).

    Dynamic SQL that gets concatenated and executed at run time always proves to be hard to debug (and tends to have bugs), and testing all possible permutations is hard to accomplish. It's especially hard to figure out where data issues stopped the process.

    I've been doing something similar to JIT (just-in-time compilation) by dynamically creating permanent stored procedures, then executing those procedures the next time the same query is run. It becomes a little more difficult when dealing with a large number of parameters, but as long as you keep data and metadata separate you are golden (i.e., one procedure for searching with first and last name only, but not one procedure for searching for firstname = calvin and lastname = lawson).

    Maybe I should submit an article on that. It answers most of the complaints about dynamic SQL.

    cl

    Signature is NULL
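    A rough sketch of that "JIT" approach, assuming one procedure per parameter combination (all names hypothetical):

```sql
DECLARE @proc sysname, @sql varchar(8000)
SET @proc = 'usp_Search_FirstName_LastName'  -- keyed by parameter set, never by value

-- Create the permanent procedure only on first use
IF OBJECT_ID('dbo.' + @proc) IS NULL
BEGIN
    SET @sql = 'CREATE PROCEDURE dbo.' + @proc + '
        @FirstName varchar(50), @LastName varchar(50)
    AS
        SELECT * FROM dbo.Customers
        WHERE FirstName = @FirstName AND LastName = @LastName'
    EXEC (@sql)
END

-- Later calls reuse the compiled, cached plan
EXEC dbo.usp_Search_FirstName_LastName 'calvin', 'lawson'
```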

  • FYI, see the article "The Curse and Blessings of Dynamic SQL" by Erland Sommarskog, SQL Server MVP at

    http://www.sommarskog.se/dynamic_sql.html

    To work around the size limitation, one approach is to use views. Because the dynamic SQL uses the view, it is often simpler, easier to debug, and often shorter than the 4000-character length restriction of sp_executesql.

    SQL = Scarcely Qualifies as a Language
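    A minimal sketch of the view approach with hypothetical object names; the complex joins live in the view, so the dynamic statement stays well under the limit:

```sql
-- The stable, complex part of the query becomes a view
CREATE VIEW dbo.vOrderDetail
AS
SELECT o.OrderID, o.OrderDate, c.CustomerName, d.Qty, d.Price
FROM dbo.Orders o
INNER JOIN dbo.Customers c ON c.CustomerID = o.CustomerID
INNER JOIN dbo.OrderLines d ON d.OrderID = o.OrderID
GO

-- The dynamic part is now short and parameterized
DECLARE @sql nvarchar(4000)
SET @sql = N'SELECT * FROM dbo.vOrderDetail WHERE OrderDate >= @d'
EXEC sp_executesql @sql, N'@d datetime', @d = '20060101'
```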

  • All, you can concatenate inside EXEC(), so you can go beyond the 4k limit...
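    One hedged note on the tip above: the statement argument of sp_executesql must be a single variable or literal, but EXEC() does accept a concatenation expression, which is the usual way past the per-variable limit:

```sql
DECLARE @sql1 varchar(8000), @sql2 varchar(8000)
SET @sql1 = 'SELECT name '           -- first half of a long statement
SET @sql2 = 'FROM dbo.sysobjects'    -- second half
EXEC (@sql1 + @sql2)                 -- concatenation is legal inside EXEC()
```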

  • I have learned to detest code that builds dynamic SQL by repeated concatenation, especially with character functions for line breaks and tabs.  A complex statement is difficult to read when it is pieced together with a bunch of CHAR, CAST, and CONVERT functions.

    SQL is perfectly happy to interpret multiline strings with all line breaks and tabs included.  I create a template of the complete SQL statement with tags for all variable parts, then use REPLACE functions to handle the modifications.  This is especially useful when one variation requires changes in the field list, tables, and where clause.  Another advantage of REPLACE is it can do implicit conversions to string of integers and avoid '...' + CAST(x AS VARCHAR) + '...'.

    -- Basic template
    SET @sql = 'SELECT fld1, fld2<fields>
          FROM tbl1
          <joins>
          WHERE tbl1.xyz=0 <filters>'

    -- One logical test controls additional fields, joins, and filter clauses
    -- All replaceable tokens are duplicated in the replacement strings to allow further actions
    IF modification1 = 1
          SET @sql = REPLACE(REPLACE(REPLACE(@sql,
                '<fields>', ', fld3, fld4<fields>'),
                '<joins>', 'INNER JOIN tbl2 ON tbl1.fld1 = tbl2.fld1
                <joins>'),
                '<filters>', '
                      AND tbl2.pqr IS NOT NULL <filters>')

    -- Further modifications using the same logic

    -- Strip out any remaining tokens
    SET @sql = REPLACE(REPLACE(REPLACE(@sql,
          '<fields>', ''),
          '<joins>', ''),
          '<filters>', '')

  • On a tangential topic, I've built a process using dynamic SQL which reads through all the databases on my servers to determine which indexes need reorganizing.  It works great on all our servers, except for the ones with outside apps, where the table owners are not dbo.  I'm attempting to adapt the process to append the owner name to the table name -- building a variable ahead of time -- however I get a compile error whenever I have a variable in the "from" clause.

    Is there a proper way to handle this that I'm missing?
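    T-SQL won't accept a variable directly in a FROM clause; one common workaround (sketched here with hypothetical names) is to concatenate the owner-qualified table name into the dynamic string and EXEC the whole statement:

```sql
DECLARE @owner sysname, @table sysname, @sql varchar(8000)
SET @owner = 'appuser'
SET @table = 'Orders'

-- FROM @table is a compile error; building the name into the string is not
SET @sql = 'SELECT COUNT(*) FROM '
         + QUOTENAME(@owner) + '.' + QUOTENAME(@table)
EXEC (@sql)
```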

  • Presumably you're getting the index info from sysindexes?  Join to sysobjects to get the owner.  It might look something like this:

    select
          'DBCC DBREINDEX (''' + quotename(user_name(so.uid)) + '.'
          + quotename(so.name) + ''',''' + rtrim(si.name) + '''[, fillfactor])'
    from sysindexes si
    inner join sysobjects so on so.id = si.id
    where so.xtype = 'U'
          and indexproperty(si.id, si.name, 'IsStatistics') = 0
          and indexproperty(si.id, si.name, 'IsHypothetical') = 0

  • I really enjoyed the article.

     

    There is a huge benefit to dynamic stored procedures when VB.NET and SQL 2000 work together.

     

    thanks!

    erik

    Dam again!

  • Thanks for the article. I am currently re-writing a chunk of my code because, in this case, it's going to give me a huge performance gain. The format I have settled on is:

    DECLARE @query nvarchar(4000),    -- // nvarchar for use with sp_executesql
            @LN char(1),
            @TB char(1),
            @lt2 char(2)

    set @LN = char(10)                -- // Line feed
    set @TB = char(9)                 -- // Tab
    set @lt2 = char(10) + char(9)     -- // Line feed + Tab

    set @query = ''

    set @query = @query
        + 'select column1,' + @lt2
        + 'column2' + @LN
        + 'from table1 (nolock)'

    It is a little neater than having char() in lots of places.

  • One more time: SQL Server is happy to let you use multi-line strings with embedded tabs.  The end-of-line is just white space to the SQL syntax analyzer, so you don't have to close the string on each line and start the next with "+".

    Instead of using all that concatenation, just use:

    set @query =
    'select column1,
         column2
    from table1 (nolock)'

    I can't enter a tab in this text box so there are four spaces before 'column2' instead of a tab, but you can use the tab character in Query Analyzer.

  • Will someone please help an ol' wondering soul out by giving me a human version of (NOLOCK)?

     

    THANKS!

    ERIK

    Dam again!


  • NOLOCK aka READ UNCOMMITTED aka "dirty read"

    Allows reading a table in spite of any uncommitted transactions that may be in progress on other connections.

    Pros: It saves time by not creating locks, and by not waiting for or blocking other transactions.  It only requires a schema lock on the table, rather than creating locks for all the rows/pages/extents that are read.  It ignores exclusive locks owned by other connections instead of waiting for those transactions to complete.

    Cons: You can read data from partially completed transactions, and those transactions may then be rolled back so the data disappears.  For instance, you have a query WITH(NOLOCK) to sum the sales for each salesman to calculate commission payments.  Just before you run this, someone begins a transaction showing they sold the Brooklyn Bridge for $2 billion.  After your commission query runs and calculates a huge bonus for this guy, the transaction is rolled back and there is no trace of the data to explain what happened to the accountants.

    It is very useful for reporting queries on production databases, if you know that either the data you're reading is static or you don't mind having counts and totals being a little off.  A history table that sees INSERTs but no UPDATEs for example, or maybe a query summarizing last week's data when only today's data might be volatile.  An inventory query that is checking whether there are less than 5000 #8 wood screws on hand to decide when to reorder is not going to notice the difference between 3875 and 3750.

    It obviously is not recommended for accounting, or any application where exact numbers are required.  An inventory query that is a little fuzzy about whether there should be 2 or 3 Ferraris on the showroom floor is a bad idea.
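    A minimal example of the hint in use (table name hypothetical); the same behavior can also be set for the whole session:

```sql
-- Dirty read: ignores exclusive locks held by other transactions
SELECT SUM(Amount) AS WeeklySales
FROM dbo.SalesHistory WITH (NOLOCK)
WHERE SaleDate >= '20060101' AND SaleDate < '20060108'

-- Equivalent at the session level
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
```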
