Get a Return Every Time.

  • Comments posted to this topic are about the item Get a Return Every Time.

    [font="Arial"]Thank You,
    [/font]
    [font="Comic Sans MS"]Charlie[/font]

    [font="Arial"]Charles Eaton[/font]

  • Hello

    I am currently working on a system where dynamic SQL is used extensively (including dynamic table names, and cursors :(( ). It's a nightmare to work with and debug, and to make things worse SQL Server doesn't usually cache the plans for dynamic queries, so they are incredibly inefficient (one procedure I'm working on took 26 minutes to run and was trimmed down to 2 minutes by making it as static as possible).

    To debug, use PRINT statements (especially on @@ROWCOUNT) in your code, since they are ignored by applications but can be seen in the Messages pane of Query Analyzer or Management Studio when debugging.
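
    For instance, a minimal sketch of that pattern (the table and column names are hypothetical):

    DECLARE @SQL varchar(8000), @rows int

    SET @SQL = 'UPDATE dbo.Orders SET Status = 1 WHERE Status = 0'
    PRINT 'About to run: ' + @SQL          -- visible in the Messages pane only

    EXEC (@SQL)
    SET @rows = @@ROWCOUNT                 -- capture straight after the EXEC
    PRINT 'Rows affected: ' + CAST(@rows AS varchar(12))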

    Robin

  • Hi,

    I prefer to use static SQL in all cases where possible and only use dynamic SQL when there are a lot of optional WHERE clause criteria passed to the stored proc. This is due to:

    - Far easier writing and debugging

    - Reusable execution plans (in most cases)

    - Much tighter security

    Easier writing

    With static SQL you get WYSIWYG editing with syntax highlighting and no problems with missing spaces, spelling mistakes, etc.

    Reusable execution plans

    When resorting to dynamic SQL remember that the query plans are not reusable when you use EXEC(). Using "sp_executesql" gives you that option, along with the possibility of using both input and output variables. "sp_executesql" is almost always superior in both flexibility and performance to the EXEC() alternative.
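
    A minimal sketch of the parameterised form (the table and parameter names are hypothetical):

    DECLARE @SQL nvarchar(4000)

    SET @SQL = N'SELECT CustomerID, CompanyName
                 FROM dbo.Customers
                 WHERE Country = @Country'

    -- The parameterised plan can be reused for any @Country value,
    -- unlike an EXEC() string with the value concatenated in.
    EXEC sp_executesql @SQL, N'@Country nvarchar(30)', @Country = N'UK'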

    Tighter security

    With static SQL you can grant access to the stored procs only. With dynamic SQL you will have to grant the executing user SELECT access to all the referenced tables. That may work for you, but it's definitely not to my taste.
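
    For example (the object and role names are hypothetical):

    -- Static SQL inside the proc: EXECUTE permission is enough.
    GRANT EXECUTE ON dbo.usp_GetCustomerData TO SalesRole

    -- EXEC()-style dynamic SQL: the caller also needs direct table access.
    GRANT SELECT ON dbo.Customers TO SalesRole
    GRANT SELECT ON dbo.Orders TO SalesRole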

    To sum it up there are certainly situations where dynamic SQL is beneficial, but it is necessary to understand the implications that may tag along.

    Sincerely,

    Lasse

  • I have found dynamic SQL statements to be faster in many situations where the exact nature (and number, when using optional parameters) of passed parameters is unknown at the time a static stored procedure is saved.

    In these cases the cached query is often not optimised to the execution plan that is best for the parameters actually supplied and their values, and a dynamic query that forces parsing and optimisation will run 2x to 3x faster.

    Additional Tips for enhanced readability of code:

    Use plenty of whitespace; it makes no discernible difference in execution time;

    Build string literals into your dynamic queries using double quotes and then use the Replace() function to fold them to single quotes before execution;

    [font="Courier New"]SET @SQL =

    '

    SELECT LastName + ", " + FirstName

    FROM MyTable

    WHERE LastName LIKE "knigh%"

    '

    SET @SQL = Replace(@SQL, '"', '''')

    EXEC(@SQL) [/font]

    I use these methods in an 8000+ user classic ASP app, with SQL 2000 databases having 6 to 10 million records in the main transaction and master balance tables, with very acceptable performance.

  • Robin, Lasse, you guys need to leave the 90's and learn about sp_executesql.

    Charles, personally I think you're creating a nightmare for whoever comes after you. All the code being in strings means a mass of red-coloured text in QA or Management Studio. People might also quite easily try and guess the first param, see that a 0 returns *something* and think their query just returns no rows. There's also an added layer of gotchas in making sure all your quotes are correctly doubled up, etc.

    I'm gonna put my hands up here in that I'm no expert on execution plans or the inner workings of SQL, but I think all those IF statements could cause the execution plan to be recreated a lot and add extra overhead for SQL to check the truth of every IF statement.

    You're better off letting a proper programming language do the logic and letting SQL do what it does best, set based queries!

  • I am quite surprised that someone thinks querystring based queries are faster than compiled queries.

    Cliff - in your example, you will most likely get the same performance as a compiled SP, since your query uses only a single filter which may have an index on it. So the performance may be more or less the same.

    If you are getting better performance using dynamic SQL it will most likely boil down to your statistics being out of date. DBAs are supposed to keep updating the stats or configure auto-stats so that the compiled SP can make better decisions. So the db size does not matter much if you are using only one index which can be easily determined. The problems more or less come when you have complicated SQL where the best plan takes time to determine, and the best way is to have the query compiled with updated stats.
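
    For instance, a sketch of keeping stats current (the database and table names are hypothetical):

    -- Let the server maintain statistics automatically...
    ALTER DATABASE Sales SET AUTO_CREATE_STATISTICS ON
    ALTER DATABASE Sales SET AUTO_UPDATE_STATISTICS ON

    -- ...or refresh a heavily used table's statistics on a schedule.
    UPDATE STATISTICS dbo.Orders WITH FULLSCAN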

    Using dynamic queries also gives rise to SQL injection - a way to hack into your db and even delete your data. So you have to be very careful and should avoid it where you can.
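
    A sketch of the risk and of the parameterised alternative (the table and column names are hypothetical):

    DECLARE @SQL nvarchar(500), @Name nvarchar(60)
    SET @Name = N'x''; DELETE FROM Orders --'    -- a malicious input value

    -- Concatenation lets the injected DELETE become part of the executed batch:
    SET @SQL = N'SELECT * FROM Customers WHERE ContactName = ''' + @Name + N''''
    PRINT @SQL

    -- Passing the value as a parameter instead keeps it as plain data:
    EXEC sp_executesql
         N'SELECT * FROM Customers WHERE ContactName = @Name',
         N'@Name nvarchar(60)', @Name = @Name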

  • @charles Eaton

    Your SP is fine to use if you are using it for maintenance purposes. For transactional purposes, this is typically what you should not be doing.

  • Use the feedback you are receiving constructively. There are many solid, spot-on points that have been mentioned. In my humble opinion, it is always good practice to seek out and recognize the “gurus” that you work with as you have done in your article. They are the ones that help us learn more or give us a cool technique that can be extended or morphed.

    The dynamic SQL stuff that you are doing is pretty cool - build on it more, use it for other things, and use the feedback in this forum constructively. The sp_executesql system procedure will most definitely broaden your horizons. I suggest you look into it and perhaps incorporate some of the cool techniques that you wrote about.

    Great article and keep up the good work.

    -Mike Di Renzo

  • Have to agree with the others...

    Whilst I understand that you use this pattern as a way of debugging and inspecting what your procs do, I think good commenting and some debug prints (commented out for production) cover that need, and the inherent problems with

    * formatting

    * ease of maintenance

    * potential problems with SQL injection (although I see you've used quotename)

    * performance

    * ..... etc

    outweigh any benefits.

    Having said that, I definitely appreciate you writing the article and getting to see different approaches to how people work.

    You should also be aware that if you wish to have a break in your strings you can do something like

    set @sql = 'this is the first line

    and this is the second line'

    without needing cumbersome @crlf variables, plus signs, extra quotes, etc.

    Your runmode 0 can be done without dynamic SQL, as you simply need to include

    where @runMode <> 0

    at the end of your select statements (or add "and @runMode <> 0" if you already have a where clause)

    runmode 1, where you show the query text, is probably best accomplished by just examining the stored procedure with syntax highlighting, etc.

    runmode 2 is obviously where you want to execute and return data - the case above for mode 0 takes care of this.
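
    A minimal sketch of that mode 0 trick (the proc, table and column names are hypothetical):

    CREATE PROCEDURE dbo.usp_GetCustomerData
        @runMode tinyint = 2
    AS
        SELECT CustomerID, CompanyName, Country
        FROM dbo.Customers
        WHERE Country = 'UK'
          AND @runMode <> 0    -- mode 0: the proc parses and compiles, but returns no rows
    GO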

  • @charles

    Check this query out - I am sure this will be a lot faster than your dynamic one, and it allows variables in the WHERE clause that do not need to be passed as well. This is just to give you an idea of how you can do without dynamic queries.

    SELECT C.CustomerID AS CUSTOMER_ID, C.CompanyName AS COMPANY_NAME,
           C.ContactName AS CO_CONTACT, C.City AS CITY, C.Country AS COUNTRY,
           C.Phone AS CUST_PHONE,
           O.OrderID AS ORDER_NUMBER,
           SUBSTRING(E.FirstName, 1, 1) + char(32) + E.LastName AS ORDER_ENTERED_BY
    FROM Customers C
    JOIN Orders O ON C.CustomerID = O.CustomerID
    JOIN Employees E ON O.EmployeeID = E.EmployeeID
    WHERE (C.Country = @Country OR @Country = '')
      AND (E.LastName LIKE '%' + @Employee OR @Employee = '')
    ORDER BY COMPANY_NAME

    Also another pointer - don't use the LIKE operator with % at the beginning of the search pattern - this will cause a table/index scan and hamper the query's performance. Instead use the CONTAINS predicate, which works along with full-text indexing, i.e. the MS Search engine.
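
    A sketch, assuming a full-text index already exists on Employees(LastName):

    DECLARE @Employee varchar(40)
    SET @Employee = 'Davolio'

    SELECT E.EmployeeID, E.LastName
    FROM Employees E
    WHERE CONTAINS(E.LastName, @Employee)    -- seeks via the full-text index instead of scanning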

    Cheers

    Randeep

  • hmmm maybe over-engineered? Surely variable checking should be done at the presentation tier?

    I've grown allergic to dynamic SQL, having tended to find it such a performance killer and a generator of bugs in its own right.

    While always recompiling may deliver better performance in some cases, I would have thought this would be the exception rather than the rule.

    I've arrived at the point of only using dynamic SQL as an absolute last resort (and that equates to never at the moment).

    Thanks for insight into another approach though 🙂

  • ( Insert USP_Get_Customer_Data Script)

    I don't see the actual example script........ bad merge? missing file?

    Takes some of the kick out of the article. Also, I see some limited usefulness for development, but I can't imagine depending on procedures that aren't compiled and optimised ahead of time in a production environment.

  • I have to throw in here. I have had thoughts similar to the author's in the past. However, at the time, I was working on low volume systems.

    For the last year and a half I have been working on a much higher volume (1,500 transactions a minute) system with a complex structure (900+ tables) and have seen the evils of using dynamic SQL for all the reasons given in this forum. Since I suspect the author is working on much lower volume systems, I am betting they do not see the ill effects (yet). From that perspective, let me say:

    1) Performance: When SQL Server is consistently busy, recompiling plans on every execution drains resources in a way that is very noticeable to an end user. This is, of course, death to a large system.

    2) Performance: Yes, I have seen some queries run faster under dynamic SQL, but I have seen the opposite more often. When a query is faster under dynamic SQL, it is usually because the search screens are written TOO flexibly, and the query cannot anticipate all reasonable combinations of parameters. I'd strongly suggest instilling in the development team that separate searches for different TYPES of searches be developed, or have your procs "sense" what type of search the user is trying to perform and route the query to an appropriate static stored proc so that the query plan can be cached (see the sketch after this list).

    3) Performance: When using complex dynamic SQL, it's just too easy to get knocked off an index and start a table scan. Again, you won't see this too often on smaller systems that join three 40,000-record tables, but when you're working with six 1,000,000+ record tables, you'll see it immediately.

    4) Maintainability: When there are 20 different coders in the environment, any pattern that is not well known (no matter how worthy) is confusing.

    5) Security: Dynamic SQL will execute with different security contexts depending on how you connect (trusted connection, SQL login, etc.), and you will find that once you start implementing security, the dynamic SQL queries may no longer have access to the tables! This one bit me very hard!
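
    A sketch of that routing idea from point 2 (all object names are hypothetical):

    CREATE PROCEDURE dbo.usp_CustomerSearch
        @CustomerID int = NULL,
        @CompanyName varchar(40) = NULL
    AS
        -- "Sense" the type of search and hand off to a static proc
        -- whose cached plan fits that search shape.
        IF @CustomerID IS NOT NULL
            EXEC dbo.usp_CustomerSearch_ByID @CustomerID
        ELSE
            EXEC dbo.usp_CustomerSearch_ByName @CompanyName
    GO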

    I say all this not to take a shot at anyone, but hopefully to share my learned experience.

    Ben

    http://www.sqlcoach.blogspot.com

  • Thank You all for your comments. I will use your posted tips and pitfalls to expand my knowledge. This is a learning experience for me.

    [font="Arial"]Thank You,
    [/font]
    [font="Comic Sans MS"]Charlie[/font]

    [font="Arial"]Charles Eaton[/font]

  • Cliff Knight (3/4/2008)


    I have found dynamic SQL statements to be faster in many situations where the exact nature (and number, when using optional parameters) of passed parameters is unknown at the time a static stored procedure is saved.

    In these cases the cached query is often not optimised to the execution plan that is best for the parameters actually supplied and their values, and a dynamic query that forces parsing and optimisation will run 2x to 3x faster.

    Additional Tips for enhanced readability of code:

    Use plenty of whitespace; it makes no discernible difference in execution time;

    Build string literals into your dynamic queries using double quotes and then use the Replace() function to fold them to single quotes before execution;

    [font="Courier New"]SET @SQL =

    '

    SELECT LastName + ", " + FirstName

    FROM MyTable

    WHERE LastName LIKE "knigh%"

    '

    SET @SQL = Replace(@SQL, '"', '''')

    EXEC(@SQL) [/font]

    I use these methods in an 8000+ user classic ASP app, with SQL 2000 databases having 6 to 10 million records in the main transaction and master balance tables, with very acceptable performance.

    In these situations you don't have to use dynamic queries; use the WITH RECOMPILE clause in the procedure definition instead. This will cause the execution plan to be recompiled and rebuilt with every execution of the stored procedure.
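
    A minimal sketch of that option (the object names are hypothetical):

    CREATE PROCEDURE dbo.usp_SearchCustomers
        @Country varchar(30) = ''
    WITH RECOMPILE    -- build a fresh plan on every execution
    AS
        SELECT CustomerID, CompanyName
        FROM Customers
        WHERE (Country = @Country OR @Country = '')
    GO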

    Related to the article itself: it is by NO means the way to go. Dynamic queries are not something to use for development convenience or for shaping the returned result, but for situations where they CAN'T be avoided, like having to use different tables or columns depending on the situation.

    Point is, use dynamic queries ONLY when absolutely necessary. Other than that, static queries are the way to go.

    Don't just give the hungry man a fish, teach him how to catch it as well.

    the sqlist
