Stored Procedures Reconsidered

  • Comments posted to this topic are about the item Stored Procedures Reconsidered

  • Interesting post. One that will no doubt generate some responses here.

    I disagree with the scope of the post. Or rather, I sense a compression of incompatible scopes in this editorial. Allow me to explain:

It's somewhat common for SQL Server database professionals to narrowly view the server platform. We (I include myself) see our little niche and believe the rules for our niche apply universally. I don't see this as particularly productive. In fact, I've seen it fly in the face of productivity on a few occasions.

    Just look at the differences in design between a data warehouse and an OLTP database for an example.

    SQL Server is an extremely complex product - especially 2005. I keep running into folks who find neat uses for different functionality in SQL Server - stuff I would've never thought of. All components, including stored procedures, have their place.

    :{> Andy

    Andy Leonard, Chief Data Engineer, Enterprise Data & Analytics

  • I disagree with many aspects of this post.

    First, on performance. The scenario you describe where performance is hindered by a stored procedure only occurs when different parameters result in wildly different distributions of data. As an example, say we have a stored proc which accepts a date range within which records should be fetched.

    A small date range will yield a small number of records, whereas a large date range could result in a very large number of records. The query plan for small sets of records and large sets of records is very different in terms of how joins are constructed... so if the small plan is used when a large set of data is requested, it will be inefficient.

    In the majority of cases, this does not occur. For the most part, the precompiled aspect of the stored proc will result in much faster execution time than in compiling every time. For those stored procedures which are affected by this data distribution problem, you simply set WITH RECOMPILE and the stored proc recompiles each time as a normal statement would, and the problem goes away. Overall, performance is improved the majority of times that you don't need to use this feature.
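A minimal sketch of the fix described above (the procedure and table names here are hypothetical):

```sql
-- WITH RECOMPILE forces a fresh plan on every call, so a narrow date range
-- and a wide date range each get a plan suited to their row counts.
CREATE PROCEDURE dbo.GetOrdersByDateRange
    @StartDate datetime,
    @EndDate   datetime
WITH RECOMPILE
AS
SELECT OrderId, CustomerId, OrderDate
FROM dbo.Orders
WHERE OrderDate BETWEEN @StartDate AND @EndDate;
```

The trade-off is exactly as described: you pay a compile on each execution of this one proc, while every other proc keeps its cached plan.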

Your next arguments about performance and cursors are irrelevant. That's an issue with coding practices, not the technology. Cursors can be written anywhere.

You have more of a point about security, though I still feel more comfortable having total control over the entry and exit points of the application.

Regarding reusability... this is an important reason to use stored procedures. Most of the projects we do have multiple sources connecting to the database, such as a website, a Windows service, an externally facing web service, and so on. For operations which rely on database processing, the only sensible thing to do is to make this code usable for all entry points.

    A good example of this would be using SQL encryption and opening an encryption key, reading and decrypting the data, and closing the encryption key. There is no reason why this shouldn't be done at the database level.
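Sketched in T-SQL, that pattern looks like this (the key, certificate, table, and column names are all hypothetical):

```sql
-- Open the key, decrypt, close the key: all server-side, one call from any client.
CREATE PROCEDURE dbo.GetCardNumber
    @CustomerId int
AS
BEGIN
    OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE CardCert;

    SELECT CONVERT(varchar(25), DECRYPTBYKEY(CardNumberEnc)) AS CardNumber
    FROM dbo.Customer
    WHERE CustomerId = @CustomerId;

    CLOSE SYMMETRIC KEY CardKey;
END
```

Every entry point (website, service, web service) calls the same proc, and none of them ever handles the key.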

Separation of duties, however, is the big one. This is where I very strongly disagree with you. The fact that they are not "extensible" doesn't mean that separation of duties isn't important.

In fact, if you think of stored procedures as interfaces, the "it's not OO" argument fails immediately. The database is an object, and having direct table access to the database is the same as one object (the application) being able to dig around inside another object's internal structure and workings, a clear violation of the encapsulation principle of object-oriented programming. Using stored procedure "interfaces" to encapsulate the private data held in the database is the very essence of OO programming. It is clearly and perfectly object oriented.

    Stored procedures allow you to have a well-defined application / database layer. As an application is used over time, databases accumulate more and more data. It becomes necessary to start tweaking the database by denormalising, creating indexed views and so forth.

    With a stored procedure driven database, you can make arbitrary changes to the underlying database structure transparently. Multi-level joins can become a single indexed view. A calculated field can become a denormalised field. A single huge table can become a partitioned table. A whole series of queries computed on the fly may now be served directly from a newly created reporting database, all without your application having to know a thing about any of it.
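The indexed-view case can be sketched like this (all object names hypothetical). The procedure's name and signature stay the same, so no application code changes:

```sql
-- Before: the proc aggregated order lines on every call.
-- After: the same proc, now reading a pre-aggregated indexed view.
CREATE VIEW dbo.vOrderTotals WITH SCHEMABINDING AS
SELECT ol.OrderId,
       SUM(ol.Qty * ol.Price) AS OrderTotal,
       COUNT_BIG(*)           AS LineCount  -- required in an indexed view with GROUP BY
FROM dbo.OrderLine ol
GROUP BY ol.OrderId;
GO
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals ON dbo.vOrderTotals (OrderId);
GO
ALTER PROCEDURE dbo.GetOrderTotal @OrderId int AS
SELECT OrderTotal
FROM dbo.vOrderTotals WITH (NOEXPAND)  -- read the materialised view directly
WHERE OrderId = @OrderId;
```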

    If you have baked all of this logic into your application code, good luck is all I can say. Even with a well architected application, the developing / testing / deploying cycle of these changes is going to be very, very expensive, with the increased risk of bugs that modifying a stable application inherently brings.

    Having the stored procedure layer to work with gives you an additional layer of abstraction and flexibility. It allows the application to communicate with an abstract datasource of arbitrary structure, so long as these interfaces exist.

Overall, I don't accept that this particular change is inevitable, nor that I will "have to prepare myself for trends". The only trend I see here is cowboy application developers thinking they should have their sticky fingers deep in the database pie, and that's not a trend I ever intend to follow. Note: I am primarily a developer, though I am a qualified DBA as well.

It's a good point. I remember from working with Oracle databases that we'd almost always rather use parameterised queries in a DAL than stored procedures. The only exception was our accounting module, where the code was considered too complex for the average developer; most developers there had rights to change the DAL, but not to change stored procedures. Weird setup, I know.

If you look at something like LINQ, which is a lightweight ORM in a way, you can see MS is looking beyond stored procedures.

Personally? I like them. They give you a nice insight into what's happening at a database level, i.e. database operations happen at a database level. From a performance point of view, though, there are cases where constructing a statement will yield a much better-performing solution than trying to write either a generic SP or multiple SPs for each use case.

From an installation and maintenance point of view, source control for compiled code is easier than maintaining SPs; there's a plus point. That's just management practice, but still. Releasing a new app to a client with an installer is easier than building an installer that archives the old SPs (for uninstallation) and installs the new ones. I've done it, and it's possible, even easy, but it's not completely trivial or automated, and it is time-consuming.

  • Technology Spechnology.

    I use Stored procs so I can encrypt them and deploy critical business logic to client sites and not have someone rip off my source code.

I use them because they are easier (not foolproof) to secure against injection attacks, and they hide the database schema from would-be IP thieves.

I use them because they are faster. On one project we took the processing time of the nightly data uploads from 36 hours (using Visual Basic) down to 4 hours using well-constructed SQL procs, splitting the read/write functions across platters and stuffing as much memory as we could fit into the server.

    I use them because I can have high value (dedicated local client) and low value (web based) clients access through a single source (Stored procedures).

I use them because they require a higher level of discipline to create, so they are a good indicator of work that hasn't been rushed.

That's why people still like them: they may not be technically perfect, but they are practically great.

  • I think some comments here are missing the point.

The question is not whether it is a good idea to use stored procedures; the question is whether it is a good idea to have four completely trivial stored procs for every situation where an application needs to access the database.

    Maybe I'm old-school, but I see the completely meaningless repetition of a data structure with trivial functionality as a complete waste of time and a huge maintenance burden.

I've always written parameterised SQL statements for simple table CRUD operations, and I've actively weeded out trivial sprocs when some contractor has used 'best practice' and included them.

    They are a complete waste of space.

    The only time it is excusable is when the sprocs are auto-generated.

    Of course when the operation is non-trivial, if you want wrapping for future development, then get on your sproc boots and fire away. The principle is, 'if there's a good chance you'll never need it, then don't do it until you do need it'.

    Even then, there is a good argument for doing the wrapping in your ORM layer rather than in the DB using sprocs. Depends entirely on circumstances.

    Much of your ORM layer should be autogenerated nowadays, too.

    Ben McIntyre

Databases crunch numbers and do set-based math like there's no tomorrow; no OOP language doing loops can perform as fast.

Hell yes! A single source for integrity and rule enforcement is where it's at.

    I've looked at LINQ and at first glance my instincts are saying looks good but why do this?

The only things I can think of are marketing gimmick and peer pressure - SQL has to be seamless with other data types. "Dumbass developers should only be using one access method, period!"

The danger is that a poorly developed database structure and relationships will cripple it.

Hiding under a desk from SSIS Implementation Work :crazy:

  • For trivial inserts and deletes, there can be an argument for avoiding stored procedures but, for anything more complex, there are clear performance gains. The biggest bottleneck is likely to be network bandwidth - you don't want to be downloading volumes of data just so that you can do complex joins less efficiently in your app than you could on the database server (and perhaps return a single value). One has to consider the whole system holistically although this can only be done really well by people who are multi-skilled across dba and developer roles - something of an artificial distinction anyway in my view. Neither end of the spectrum can perform well without an understanding and appreciation of the other.

hear, hear! I've been arguing against ritualistic sprocs for years. By generating parameterized SQL on the fly using reflection, I can eliminate the vast majority of my sprocs and get CRUD access to a new table in minutes, with no discernible sacrifice in performance and no exposure to SQL injection. Hey, always use the appropriate tool for the job, and always factor in what you're most comfortable with and best at. But the mindless mantra that Everyone Must write the Same Four Sprocs over and over again is counter-productive.
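Whatever the reflection layer looks like on the application side, the SQL it emits has this server-side shape (hypothetical table; sp_executesql gives plan caching and keeps values out of the SQL text):

```sql
-- The app generates text like this per table, with real parameters,
-- never concatenated values, so there is nothing to inject into.
DECLARE @sql nvarchar(400) = N'
    UPDATE dbo.Customer
    SET Name = @Name, Email = @Email
    WHERE CustomerId = @Id';

EXEC sp_executesql @sql,
     N'@Name nvarchar(100), @Email nvarchar(100), @Id int',
     @Name = N'Jo Bloggs', @Email = N'jo@example.com', @Id = 42;
```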

As per Microsoft documentation, SQL was not intended as an object-oriented programming language; it is a procedural system that stores and moves data.

Even though the Microsoft employee, whose name you are conveniently not mentioning, is telling people not to use stored procs, Microsoft's documentation still advocates them as best practice, and until they say otherwise, I'll use them.

    I feel that I have much more control over what is done and how objects are accessed inside a proc.

As for the performance benefit, I agree. The level of performance is decided by how you construct your query.

    There are pros and cons to maintaining procs. The biggest pro for me is when I have used them in my SSIS packages and a change to the SQL logic is easy without affecting the SSIS packages etc.

  • I've read all the above and still come down on the side of using stored procs for as much db access as possible.

They black-box the schema from the client/app server/web server code: a change to the schema doesn't require a change to that code, and if you need to change your schema you don't need to check the non-db code. It's a good layer of isolation to have.

DALs are good - we use them to call the stored procedures and handle errors etc.

    If you let 'anyone' call dynamic SQL on your db then you will end up with hard to trace defects. If you only allow stored proc calls then you have a controlled interface.

    Think of it this way, when you're writing your C# classes, you don't make all your private class variables 'public static' do you? You do? Clear your desk!

As regards dynamic SQL, do this in a stored proc as well. I tend to do this, and it allows me to keep that architectural separation so that the C# code does not need to know how or where in the db the data is stored. This can be fast... I did some performance improvements on a SQL stored proc last week. It used dynamic SQL to produce a search based on the supplied criteria, so the DB didn't waste time with joins and 'where' clause items that weren't required, e.g.:



    or (@NHS_ID is null or p.nhs_id = @NHS_ID)

    or (@NI_NO is null or p.national_insurance_no = @NI_NO)


    In the generated SQL, if @NHS_ID is null, then I don't include it in the search criteria.

    Oh, and I included a few more indexes.

    Took the search from 25s to 8s. Much better but still too slow.

    Turned the whole SQL round and got it down to 60ms. That'll do.

It turns out that doing a big where clause (as outlined above) with lots of 'or' clauses can stop the db engine working out which indexes are best to use. The best solution I found was to use a subselect union all:

    p.person_id in
    (
        select p1.person_id from person p1 where p1.ni_no = @ni_no
        union all
        select p2.person_id from person p2 where p2.nhs_id = @nhs_id
        union all
        ...
    )

    Have I digressed enough yet?

    Stored procs 🙂 Client Side SQL :angry:

    There is no problem so great that it can not be solved by caffeine and chocolate.
  • does anyone have a T-SQL stored proc that is generic and does CRUD operations (not so much read, but insert and update)?

    i.e. supply an operation param, a tableName param, a field list, a value list and a where clause?

    The stored proc then builds the CRUD SQL and executes it.

    Is this a good or bad way to do things?

    Is this the nirvana we want?


Hiding under a desk from SSIS Implementation Work :crazy:

I have also experienced cases where the exact same query run as dynamic SQL is much faster than the query referencing variables directly.
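That is usually the local-variable sniffing problem: the optimiser cannot see a variable's value when it compiles the plan, so it falls back to a generic estimate. A sketch (hypothetical table):

```sql
DECLARE @d datetime;
SET @d = '20200101';

-- Compiled with a generic "average density" estimate: @d's value is unknown at plan time.
SELECT * FROM dbo.Orders WHERE OrderDate >= @d;

-- The equivalent dynamic SQL: here @d is a real parameter, so its value
-- can be sniffed when the plan is compiled, often yielding a better plan.
EXEC sp_executesql
     N'SELECT * FROM dbo.Orders WHERE OrderDate >= @d',
     N'@d datetime', @d = @d;
```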

  • CREATE PROCEDURE dbo.Do_Update
        @TableName VarChar(128),
        @Action TinyInt,
        @key VarChar(255),
        @Data VarChar(2056),
        @Validator Integer,
        @UpdatedBy nVarChar(128),
        @Result VarChar(255) OUTPUT
    AS
    BEGIN
        DECLARE @StartTranCount AS Integer
        DECLARE @ReturnValue AS Integer
        DECLARE @Exception AS Integer
        DECLARE @RowCount AS Integer
        DECLARE @HostName AS nChar(128)
        DECLARE @Message AS VarChar(256)
        DECLARE @SQLStatement nVarChar(4000)

        SET @StartTranCount = @@TRANCOUNT
        SET @Result = 'Update failed : Reason unknown'
        SET @ReturnValue = -99
        SET @Exception = 0

        -- Capture the calling host name via sp_who
        CREATE TABLE #sp_Who (DPID SmallInt,
            ECID SmallInt,
            Status nChar(30),
            LogiName nChar(128),
            HostName nChar(128),
            Blk Char(5),
            DBName nChar(128),
            Cmd nChar(16))
        INSERT INTO #sp_Who EXECUTE sp_Who
        SELECT @HostName = HostName FROM #sp_Who WHERE DPID = @@SPID
        DROP TABLE #sp_Who

        SAVE TRANSACTION UPD -- Creates a savepoint for rollback in the exception handler without decrementing trancount to zero (which is a problem for the test harness)

        IF NOT @Validator = ... -- the comparison value was lost from the original post
        BEGIN
            SET @Result = 'Update failed : Validation exception'
            RAISERROR (@Result, 16, 1) WITH LOG
            GOTO ExceptionHandler
        END

        IF 'ExceptionLog' LIKE '%' + @TableName + '%' -- Don't let anyone update my audit trail!
        BEGIN
            SET @Result = 'Update failed : Validation exception'
            RAISERROR (@Result, 16, 1) WITH LOG
            GOTO ExceptionHandler
        END

        IF @Action = 1 -- Delete
        BEGIN
            SET @SQLStatement = 'DELETE FROM ' + @TableName + ' WHERE ' + @key
            SET @Result = LEFT(@UpdatedBy + ' : Deleted ' + @TableName + ' WHERE ' + @key, 512)
        END

        IF @Action = 2 -- Amend
        BEGIN
            SET @SQLStatement = 'UPDATE ' + @TableName + ' SET ' + @Data + ' WHERE ' + @key
            SET @Result = LEFT(@UpdatedBy + ' : Amended ' + @TableName + ' WHERE ' + @key + ' WITH ' + @Data, 512)
        END

        IF @Action = 3 -- Insert
        BEGIN
            SET @SQLStatement = 'INSERT INTO ' + @TableName + ' ' + @Data
            SET @Result = LEFT(@UpdatedBy + ' : Inserted ' + @TableName + @Data, 512)
        END

        EXEC (@SQLStatement)
        SELECT @Exception = @@ERROR, @RowCount = @@ROWCOUNT

        IF NOT @Exception = 0
            GOTO ExceptionHandler

        IF @RowCount = 0
        BEGIN
            SET @Result = 'Update failed : Existing data not updated'
            RAISERROR (@Result, 16, 1) WITH LOG
            GOTO ExceptionHandler
        END

        INSERT INTO dbo.ExceptionLog
            (HostName, StoredProcedure, ReturnValue, ReturnMessage)
        VALUES (@HostName, 'Do_Update', 0, @Result)

        SET @Result = ''
        SET @ReturnValue = 0
        RETURN @ReturnValue

    ExceptionHandler:
        IF @Exception > 0
            SELECT @Result = 'Update failed : ' + Description
            FROM Master.dbo.SysMessages
            WHERE Error = @Exception

        IF @@TRANCOUNT > @StartTranCount
        BEGIN
            ROLLBACK TRANSACTION UPD -- Rolls back to the save point but does not decrement @@TRANCOUNT
            COMMIT TRANSACTION UPD -- Just decrements @@TRANCOUNT by 1
        END

        INSERT INTO dbo.ExceptionLog
            (HostName, StoredProcedure, ReturnValue, ReturnMessage)
        VALUES (@HostName, 'Do_Update', @ReturnValue, @Result)
        RETURN @ReturnValue
    END


  • Wow! nice fast reply - I see it. 😉

    But is it what is required?

Is it good enough to replace the C(R)UD stored-procs-for-every-table approach?

    What are the disadvantages of doing it this way? Are there any?


Hiding under a desk from SSIS Implementation Work :crazy:

Viewing 15 posts - 1 through 15 (of 160 total)
