There seem to be two schools of thought, and I think this mainly comes down to who your developers are and where they sit. The first is that a stored procedure or function is a great place to put all the business logic that an application needs. The second is that no logic at all should be in code deployed to SQL Server.
You can see these two extremes if you compare what Stack Overflow does (.Net/SQL Server, but highly optimised for performance and deployments), where they have no stored procedures and embed T-SQL directly in their code, with the many-thousand-line procedures that we see on a daily basis. This Quora answer from Kevin Kline is extreme but not unexpected:
The largest I ever personally wrote was a 16kb text file, which was many dozens of printed pages, and much personal misery.
Personal misery for anyone tasked with fixing that is exactly right. He goes on to say:
Otoh, I was once at the Microsoft campus in Redmond in a customer lab (what would later become the SQLCAT group) where I was shown a berserk 3rd party report writing tool which would routinely create individual stored procedures over 30mb in size. The largest I saw at that time was 38mb.
A 38MB stored procedure: that is 39,845,888 characters. Where do you even start with that? Will SSMS even open it?
If we take those as the two extremes, Stack Overflow and 38MB procedures, then somewhere between them sit most of the stored procedures in production today.
When we look at the T-SQL development landscape today we have the ability to write using a development IDE:
- Visual Studio SSDT
- SSMS with SQL Prompt
We have the ability to unit test code:
- SSDT unit tests
If we use tSQLt then we can mock tables and procedures!
We can measure code coverage (https://the.agilesql.club/blogs/Ed-Elliott/2016-04-08/SQLCover-Code-Cove…).
When we are ready to deploy, we are spoiled for choice for ways to do deployments:
- Redgate SQL Compare
- Redgate ReadyRoll
- SSDT (Yeah!)
- blah blah blah
So we really do have the ability to write and maintain code in SQL Server, but the question is: should we?
I personally believe there are two factors that we need to understand:
- Maintaining existing applications
- Data locality
So, in reverse order, there is “data locality”: the idea that you should work on your data where it is located. If you want the total of a specific customer’s transactions, do you really want to pull every record back to your app and then calculate the value, or do you want to run a query (or a set of queries) that does the work for you?
If you have decided that you want to do the calculation where the data is located, then do you write in-line (or, bleurgh, generated) code to do it, or do you write a stored procedure? The correct answer is to write a stored procedure.
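As a sketch of the difference, here is the data-locality approach for the customer-total example above (the table and column names are made up for illustration):

```sql
-- Instead of pulling every transaction row back to the app and summing
-- client-side, let the engine aggregate where the data lives.
-- dbo.Transactions (CustomerId, Amount) is a hypothetical schema.
CREATE PROCEDURE dbo.GetCustomerTransactionTotal
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;

    SELECT SUM(Amount) AS TotalAmount
    FROM dbo.Transactions
    WHERE CustomerId = @CustomerId;
END;
```

Only one small result row crosses the network, and the query plan can use whatever indexes exist on the table.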
Stack Overflow doesn’t use stored procedures so we don’t need to
Stack Overflow is a specific use case: they decided to use .Net, so they have a specific set of problems to deal with in terms of performance. They deploy (as I understand it) ten times a day, so if they need to change a query they can do so quickly and easily. How quickly can you modify code and get it to production to fix a problem causing downtime on your mission-critical app written in PowerBuilder 20 years ago? (I jest, but you get the point.)
Why is writing a stored procedure the correct answer?
When I say stored procedure I mean a stored procedure, function, view, computed column, etc.: something that runs where the data is located.
If we look at the first factor, “maintaining existing applications”, then we need to understand that there are many reasons why you might not be able to modify the application code yet still need to change the database, and it boils down to two facts:
- A compiled application does not change
- The volume of data in a database does change
As much as I would like otherwise, sometimes you are dealt a crappy hand with a database you have to support, and if the database is used then over time, as more data is added, it is unavoidable that the performance characteristics will change.
If the code is available to change without re-compiling the application, then you can adjust it to suit the different data volumes and characteristics.
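This is the pay-off of keeping that code in the database. Continuing the hypothetical example from earlier, the app keeps calling the same procedure with the same parameters while the implementation changes underneath it:

```sql
-- The application binary is untouched; only the procedure body changes.
-- Here the query is rewritten to read a pre-aggregated summary table
-- (dbo.CustomerTransactionSummary is hypothetical) added when the raw
-- transaction table grew too large to scan.
ALTER PROCEDURE dbo.GetCustomerTransactionTotal
    @CustomerId INT
AS
BEGIN
    SET NOCOUNT ON;

    SELECT TotalAmount
    FROM dbo.CustomerTransactionSummary
    WHERE CustomerId = @CustomerId;
END;
```

Had the same query been embedded in a compiled application, this fix would have meant a code change, a build, and a full release.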
So you should put all your business logic in stored procedures?
No. Personally I believe that you should only put in the database the code that must run locally to your data. When you do put logic in your stored procedures, it should have unit tests (tSQLt really helps here).
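For what a tSQLt test looks like, here is a minimal sketch for the kind of total-per-customer procedure discussed above (the class, table, and column names are illustrative):

```sql
-- Create a test class, fake the table so the test is isolated from real
-- data, insert known rows, and assert on the result.
EXEC tSQLt.NewTestClass 'CustomerTotals';
GO
CREATE PROCEDURE CustomerTotals.[test total is summed per customer]
AS
BEGIN
    -- Replace dbo.Transactions with an empty, constraint-free copy
    EXEC tSQLt.FakeTable 'dbo.Transactions';

    INSERT INTO dbo.Transactions (CustomerId, Amount) VALUES (1, 10.00);
    INSERT INTO dbo.Transactions (CustomerId, Amount) VALUES (1, 5.00);
    INSERT INTO dbo.Transactions (CustomerId, Amount) VALUES (2, 99.00);

    DECLARE @actual MONEY;
    SELECT @actual = SUM(Amount)
    FROM dbo.Transactions
    WHERE CustomerId = 1;

    EXEC tSQLt.AssertEquals @Expected = 15.00, @Actual = @actual;
END;
GO
-- Run with: EXEC tSQLt.Run 'CustomerTotals';
```

Because `FakeTable` swaps the real table out from under the code being tested, each test controls exactly the data it sees.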
When else should you have business logic in your stored procedures?
You shouldn’t, but as I said, sometimes you are dealt a bad hand and are stuck with business logic in the database (a vendor app, an old unmaintained app, a new but badly architected app). When this happens, make sure you use the tools available to help you maintain it and make changes. Make sure that you:
- Use an IDE and refactor
- Use source control
- Write unit tests (tSQLt)
- Live the dream
Business logic is often complex and better modelled in a language designed for it. T-SQL is about interacting with data: how to store it, how to read it, how to change it in an efficient manner. T-SQL isn’t about modelling objects (or functions), it is about data. Applications use data to do things.
Keep data locality in mind, think about performance, but try to keep business logic out of stored procedures. Where you cannot avoid business logic in stored procedures, use tools and processes to help.