Dynamic SQL or Stored Procedure

  • what about bind parameterised queries? best of both worlds?

  • what are the benefits of a compiled query plan?

  • As a developer, the biggest benefit I get from using stored procedures is ease of development. By using stored procs exclusively, all applications have a well-defined interface to the database. Complex queries can be written by developers with more SQL experience and used by those with less experience. SPs also provide a degree of modularity and code reuse in applications. I've found that maintenance is decreased because it's easier to do component-level testing and optimization for stored procs than it is for client-generated dynamic SQL. Any added performance is just a bonus for me. Development time (up-front and down the road) is more costly than buying a beefier server.

    It is true that sometimes dynamic SQL is necessary. When this is the case, I still try to encapsulate this within a stored procedure so that I get at least some of the benefits of testing and modularity.

    Chris Cubley, MCSD, MCDBA

    http://www.queryplan.com

  • quote:


    what are the benefits of a compiled query plan?


    Faster execution. With a compiled query plan, SQL doesn't have to figure out the best way to execute the code; it reuses an existing plan.
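
    For illustration, here is a rough sketch of the difference (the Orders table and its columns are made up for the example): the concatenated form produces a different statement text for every value, while the parameterised sp_executesql form keeps the text constant so the cached plan can be reused.

    declare @sql nvarchar(500), @CustomerID int
    set @CustomerID = 42

    -- Concatenating the literal: each distinct value tends to compile its own ad hoc plan.
    set @sql = N'select OrderID, OrderDate from dbo.Orders where CustomerID = '
        + convert(nvarchar(12), @CustomerID)
    exec (@sql)

    -- Parameterised: the statement text never changes, so the cached plan can be
    -- reused on later calls, much like a stored procedure's plan.
    set @sql = N'select OrderID, OrderDate from dbo.Orders where CustomerID = @CustomerID'
    exec sp_executesql @sql, N'@CustomerID int', @CustomerID = @CustomerID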

  • Bind queries - as in true ODBC binding?

    Thank you all for your comments so far. Yes, faster execution with a compiled query plan, but it usually only makes a difference if the procedure gets called frequently; otherwise there is a chance the query plan will get aged out of the cache.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

  • Thank you for informing us that sp_executesql parameters may be OUTPUT parameters. I didn't find that anywhere in BOL.

    Here is an example:

    USE pubs
    declare @sql nvarchar(3000)
    declare @d datetime

    set @sql = N'
    -- compute the same or the next date that exists in the Sales table
    SET NOCOUNT ON
    declare @o_date datetime, @p_date datetime
    set @p_date = convert(datetime, convert(varchar(10), @date, 120)) -- chop off the time portion
    select @o_date = min(ord_date) from Sales where ord_date >= @p_date
    if @o_date is null -- no later date found, so fall back to the nearest earlier one
        select @o_date = max(ord_date) from Sales where ord_date < @p_date
    if @o_date is not null
        set @next_date = @o_date
    else
        set @next_date = @p_date
    '

    exec dbo.sp_executesql @sql
        , N'@date datetime, @next_date datetime OUT'
        , '1993-05-25', @next_date = @d OUT

    print @d

  • Often we find ourselves with search screens that allow the user to enter one or more search criteria. Since we can't predict which parameters they'll pass, we build the SQL statement in the stored procedure, dynamically constructing the WHERE clause based on what they pass in, and then use sp_executesql to run it. The procedure itself has at least as many parameters as there are search fields, and they all default to NULL so they're optional.

    We've found this method to be very efficient.
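
    For example, a minimal sketch of the pattern (the table, column and parameter names here are hypothetical):

    create proc dbo.SearchCustomers
        @LastName varchar(40) = NULL,
        @City varchar(30) = NULL
    AS
    set nocount on
    declare @sql nvarchar(2000)
    set @sql = N'select CustomerID, LastName, City from dbo.Customers where 1 = 1'
    -- append a predicate only for the criteria the caller actually supplied
    if @LastName is not null
        set @sql = @sql + N' and LastName like @LastName + ''%'''
    if @City is not null
        set @sql = @sql + N' and City = @City'
    -- parameters that the final statement does not reference are simply ignored
    exec sp_executesql @sql
        , N'@LastName varchar(40), @City varchar(30)'
        , @LastName = @LastName, @City = @City
    GO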


    Steven M. King
    Systems Consultant IT

  • Why do you consider that a better approach than building the statement in the app?

    Nice screen name, though I'm not a huge fan of SIG!

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

  • Nice article that I plan on using when someone looks at me with a blank stare when I try to explain the difference between static and dynamic queries.

    The application I work on is mostly dynamic SQL. I am one of 5 developers who work on this application. Where static SQL has been used, most of the stored procedures are mine (these are frequently used queries that populate grids, and I was looking for ways to increase performance). When I asked why the other developers did not use stored procedures more, they all pretty much said the same thing: it's a pain to leave the development IDE and launch Enterprise Manager or Query Analyzer to write the query, save it as a stored procedure, set the permissions, etc. It's much easier to just write the code in the development IDE (where you are in the first place). The other complaint is that often it is not worth the time and trouble of getting permission from the DBA, or overcoming other bureaucratic policies, just to write a simple query. Also, I think many of the developers don't really know T-SQL very well, or at all, and would not care to learn it since dynamic SQL has worked just fine so far...

    My 2 cents.

  • I've tried an approach of sending optional params (with a default value of NULL) to the search sproc, and inside it using a single static SELECT statement with lots of OR operators in the WHERE clause.

    Elegant, but it turned out to be SLOWER THAN DYNAMIC SQL!

    Still, I found that encapsulating your SQL statements within sprocs does improve manageability of code (ever tried to trace back table dependencies in an ASP website?...).

    Any ideas on improving this method would be appreciated!

    See the following sample code to understand what I mean:

    create proc mySearch (
        @param1 int = NULL,
        @param2 varchar(20) = NULL
    )
    AS
    select * from mytable
    where
        -- this evaluates to TRUE if @param1 was not specified, regardless of field1:
        (field1 = @param1 OR @param1 IS NULL)
        AND
        -- this evaluates to TRUE if @param2 was not specified, regardless of field2:
        (field2 like '%' + @param2 + '%' OR @param2 IS NULL)
    ...

    p.s. I'm still looking for a way to code paging in static SQL (i.e. fetch only records 20 through 30 from a given SELECT statement)...

  • You really get a feel for the performance downside of dynamic SQL if you develop web applications.

    In most cases the difference in cost between dynamic and static SQL is very small, but multiply this by 100,000 users and all of a sudden you have a massive load on your servers.

    To the gentleman who found that SQL with a lot of ORs in his WHERE clause made dynamic SQL faster than compiled SQL:

    Look at the execution plan; OR conditions tend to force table scans. If possible, use UNION queries with SELECTs against indexed columns. Although long-winded, the compiled query will run faster.
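
    As a rough illustration (the table, columns and variables are invented for the example; OrderID is assumed to be the key, with indexes on CustomerID and SalesRepID):

    declare @CustomerID int, @SalesRepID int
    select @CustomerID = 1, @SalesRepID = 7

    -- OR version: the optimizer often falls back to a scan, since no single
    -- index seek satisfies both predicates at once.
    select OrderID from dbo.Orders
    where CustomerID = @CustomerID or SalesRepID = @SalesRepID

    -- UNION version: each branch can seek its own index; UNION (rather than
    -- UNION ALL) removes the duplicates where a row matches both conditions.
    select OrderID from dbo.Orders where CustomerID = @CustomerID
    union
    select OrderID from dbo.Orders where SalesRepID = @SalesRepID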

    Secondly, let us suppose you have three tables, all linked on a single common field. A human being will join A to B and B to C, knowing that A is implicitly joined to C. SQL doesn't know this. You may find that explicitly stating the A = C join (although long-winded) allows SQL to choose a better execution plan; the sketch at the end of this post spells it out.

    Thirdly, watch what you try to bring back from a query and which table you get it from.

    A clustered index holds the data in its leaves, whereas a non-clustered index holds a pointer to the data in its leaves.

    If you want to retrieve the common field shared between A, B and C, and that common field is the clustered index key on table A, make sure that you retrieve it from table A. This will be quicker than getting it from B or C.

    Ditto, watch out for the dreaded SELECT * FROM. It tends to force scans or extra lookups, which are inherently slow, because no narrower index can cover every column. By carefully specifying a limited number of columns you can improve performance.
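
    Putting the last few points together, a small sketch (table and column names are invented, and KeyID is assumed to be the clustered index key on TableA):

    -- Retrieve the shared key from TableA, whose clustered index keeps the data
    -- in its leaf level, state the A = C join explicitly, and list only the
    -- columns that are actually needed instead of using SELECT *.
    select a.KeyID, b.OrderDate, c.CustomerName
    from TableA a
    join TableB b on b.KeyID = a.KeyID
    join TableC c on c.KeyID = b.KeyID
                 and c.KeyID = a.KeyID  -- the implied A = C relationship, spelled out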

  • I can see Andy's points about the advantages of using stored procedures, but I guess I'm swimming upstream here... I'm not trying to start a flame war, but there is another side to stored procedures...

    Where I work, we have several development groups. Some groups are "stored procedure happy" and other groups use very few, if any, stored procedures. The biggest problem I see with stored procedures is that they can be nearly impossible to debug. (I never considered the COBOL-like “print” statement a very attractive debug tool.) One of the "happiest" development groups uses stored procedures for even basic statements like "select * from table", and they tend to nest their more complicated stored procedure calls within each other. Often, the stored procedures are several hundred lines in length, sometimes a stored procedure will pass more than 100 parameters to another stored procedure, and comments in the code can be rare. It can be extremely time consuming to chase down a bug seven or eight nesting levels deep.

    Yeah, I know that Visual Studio has a stored procedure debugger, but the happy group uses a third-party development tool for coding. Although I'm not very familiar with Visual Studio, I'm thinking that a developer debugging a stored procedure with Visual Studio could leave locks in the database while taking a break for lunch, which could generate a brand new set of "performance" phone calls.

    I suppose there is a “happy medium” (no pun intended) somewhere between too many and too few stored procedures, but we are still looking for it.

    Jon

  • I would suggest that this is a documentation issue and a development-practices issue.

    Unfortunately, once an organisation grows beyond a certain point, formal documentation and methodologies become a necessity rather than something to do in quiet periods.

    By the time someone comes along and bemoans the lack of documentation, they are facing massive corporate inertia in terms of:

    • We have never needed this before.
    • This adds cost and can't be justified in terms of payback.
    • Who do you think has the time to do this?

    All these points are answerable but need careful phrasing and diplomacy to win the day. I remember someone saying that a diplomat is someone who can tell you to go to hell and have you looking forward to the trip!

  • David.Poole,

    Point well taken. Better documentation would obviously help. And certainly, office politics are a factor. However, I still maintain that the tools for debugging stored procedures are primitive at best (the process of debugging triggers is even worse), with the possible exception of Visual Studio, which can generate problems of its own.

    I just don't see stored procedures as a magic bullet... Maybe Yukon and its tool set will improve the situation.

    Jon
