Your Thoughts On In-line SQL

  • bkubicek - Friday, March 16, 2018 6:27 AM

    n.ryan - Friday, March 16, 2018 3:32 AM

    There are definitely occasions where stored procedures really don't cut it because the logic involved in the query is much better handled in an environment that can deal with that logic. These are fairly rare, and it's usually where the query needs to be altered depending on the inputs provided. While it is possible to have multiple stored procedures and for the application to choose which one to use, the number of possible permutations can make this a poor option, as can query efficiency concerns. In any case, when I come across a dynamically created query that is not executed as a parameterised query I will go postal on the developer - there is no excuse for allowing a route for SQL injection attacks into a system when the fix was developed a couple of decades ago.
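
    To be concrete (table and column names invented), the difference is between splicing the value into the statement and passing it as a parameter:

        DECLARE @name nvarchar(50) = N'Smith';  -- imagine this arrived from user input
        DECLARE @sql  nvarchar(max);

        -- Injectable: the value becomes part of the statement text itself.
        SET @sql = N'SELECT CustomerID FROM dbo.Customers WHERE Name = ''' + @name + N'''';
        EXEC (@sql);

        -- Parameterised: the statement never changes; the value travels separately.
        SET @sql = N'SELECT CustomerID FROM dbo.Customers WHERE Name = @name';
        EXEC sys.sp_executesql @sql, N'@name nvarchar(50)', @name = @name;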

    An advantage of application-defined queries is that all of the application logic is within the same environment and the same version control system. This can be particularly important because version controlling (and even just identifying the version of) a stored procedure or function is thoroughly painful in MS-SQL. Expanding version control and application deployment systems to cover other environments is an important part of the application process, but in general the tooling is woefully behind requirements unless manual intervention is added.

    On the other hand, as soon as these stored procedures (and UDFs) are considered an API, there are very strong arguments for keeping such logic closer to the database, where changes to the database structure can be abstracted away from the API to some extent and the efficiency of the stored procedures and functions can be dealt with closer to the source.

    I'd tend towards the stored procedure and function route where possible, even though the application deployment and version control are considerably harder. Like much of life, and development, there is no 100% rule though.

    Not sure what you are using for your development environment, but at my company we are using Visual Studio.  We have encouraged all our developers to use SQL database projects, so all the functions and stored procs are part of the solution and then everything is under source control.  It is sort of funny because a lot of people have never even heard of a SQL database project in Visual Studio.  I wrote a little app that will grab all the scripts and put them into one script so they can easily all be executed at once.  Anyway, that is how we handle having everything together in one place instead of in-line sql.
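
    The combined script it produces is conceptually the same as a hand-written SQLCMD master script that pulls the individual files in with :r (paths invented for illustration):

        -- deploy_all.sql: run with  sqlcmd -S MyServer -d MyDatabase -i deploy_all.sql
        -- (or enable SQLCMD Mode in SSMS before executing).
        :r .\Schemas\create_schemas.sql
        :r .\Tables\create_tables.sql
        :r .\Functions\create_functions.sql
        :r .\StoredProcedures\create_procs.sql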

    Ben

    The problem with that approach is that it (sort of) works for DDL (where theoretically the order doesn't matter, except when it does), but fails hopelessly at DML. And all too often the two end up needing to go hand in hand.

  • andycadley - Friday, March 16, 2018 6:42 AM

    Actually, it works quite well.  We have an initial create script.  Every part of the script checks to see if things already exist.  Schema is always first, then tables, indexes, etc.  Then we have an update script.  Again, the update script is written so that any column updates or added columns only happen if they haven't already.  So that keeps the tables up to date.  Then all the functions are added.  These all have drop statements if they exist, so they are always created fresh.  Finally the stored procs, again with an if-exists-then-drop at the start of each stored proc.  So we end up with a nice build script that works.
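
    In outline, each section follows a pattern like this (object and column names invented):

        -- Tables: create only if missing, so the script can be re-run safely.
        IF OBJECT_ID(N'dbo.Orders', N'U') IS NULL
        BEGIN
            CREATE TABLE dbo.Orders
            (
                OrderID    int IDENTITY(1,1) PRIMARY KEY,
                CustomerID int NOT NULL
            );
        END;
        GO

        -- Column updates: add only if the column is not already there.
        IF COL_LENGTH(N'dbo.Orders', N'OrderDate') IS NULL
            ALTER TABLE dbo.Orders ADD OrderDate datetime NULL;
        GO

        -- Procedures: drop and recreate so they are always fresh.
        IF OBJECT_ID(N'dbo.Order_GetByCustomer', N'P') IS NOT NULL
            DROP PROCEDURE dbo.Order_GetByCustomer;
        GO
        CREATE PROCEDURE dbo.Order_GetByCustomer
            @CustomerID int
        AS
            SELECT OrderID, OrderDate
            FROM dbo.Orders
            WHERE CustomerID = @CustomerID;
        GO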

    When we have sub stored procedures that other stored procs are dependent on, you will get a warning the first time the script is run, but it still runs and does not error out.

    Honestly, this has worked quite well for us; we have not had any issues, and we have some pretty complex systems that are using it.  The stored procs are not all just basic CRUD; there is a lot more going on.

    Ben

  • dmbaker - Friday, March 16, 2018 6:36 AM

    chrisn-585491 - Friday, March 16, 2018 6:19 AM

    ORMs are a special case. I'd prefer not to give up stored procedures just to make the ORM or project manager happy. 

    ORMs do not necessarily preclude the use of stored procedures. I'm using stored procedures (for CUD) and views exclusively in my Entity Framework projects. Works just fine so far, and gives me better control over things. I'm not sure if it'll work for everything I do, but so far it does.

    Absolutely, SPs can be imported very easily with EF. Project Managers can be made unhappy in other ways, like telling them you have uninstalled Excel and so can't read their status report.
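
    On the database side, that shape is roughly a view for the reads plus one procedure per write operation, something like this sketch (names invented):

        -- Read side: the ORM maps its entity to a view rather than the base table.
        CREATE VIEW dbo.vCustomer
        AS
            SELECT CustomerID, Name, Email
            FROM dbo.Customers;
        GO

        -- Write side: one procedure per operation, mapped to the entity's update.
        CREATE PROCEDURE dbo.Customer_Update
            @CustomerID int,
            @Name       nvarchar(100),
            @Email      nvarchar(256)
        AS
            UPDATE dbo.Customers
            SET Name = @Name, Email = @Email
            WHERE CustomerID = @CustomerID;
        GO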

  • Since I deal exclusively with OLTP I'm a big fan of stored procedures ONLY.

    Inline SQL opens the door for all kinds of security problems. SPs can be tailored to return exactly what you want, only require a single permission, and are thus extremely precise.

    On the other hand, in-line SQL requires (shudder) table permissions, and that quickly turns into a quagmire of infinite permission permutations, especially with a large number of roles (we have 32!). Not to mention it allows a bad guy with a stolen account to use SSMS for fairly unrestricted read access and schema research. Let's not make it too easy for the bad guys, hmm?
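
    Concretely (role and object names invented), the stored procedure route needs one grant per procedure, while in-line SQL forces grants on the tables themselves:

        -- Stored procedure route: one precise permission per action.
        GRANT EXECUTE ON dbo.Order_GetByCustomer TO app_order_reader;

        -- In-line SQL route: the account needs rights on the underlying tables,
        -- which also lets a stolen account browse them freely from SSMS.
        GRANT SELECT ON dbo.Orders    TO app_order_reader;
        GRANT SELECT ON dbo.Customers TO app_order_reader;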

    As for queries with complex and variable WHERE clauses (such as reports), I got around that by clever use of SPs that use dynamic SQL: everything but the WHERE clause is defined as an unchanging statement, and the WHERE clause is created by a function that constructs it from data stored in tables, using validated arguments.
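
    Roughly, such a report procedure looks like this sketch (names invented; in my case the predicates come from configuration tables rather than being hard-coded as below):

        CREATE PROCEDURE dbo.Report_Orders
            @CustomerID int      = NULL,
            @FromDate   datetime = NULL
        AS
        BEGIN
            DECLARE @sql nvarchar(max) = N'
                SELECT OrderID, CustomerID, OrderDate, Total
                FROM dbo.Orders
                WHERE 1 = 1';

            -- Only the WHERE clause varies; values always travel as parameters.
            IF @CustomerID IS NOT NULL
                SET @sql += N' AND CustomerID = @CustomerID';
            IF @FromDate IS NOT NULL
                SET @sql += N' AND OrderDate >= @FromDate';

            EXEC sys.sp_executesql
                @sql,
                N'@CustomerID int, @FromDate datetime',
                @CustomerID = @CustomerID,
                @FromDate   = @FromDate;
        END;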

    I used to use MS Access for development, which didn't have SPs. Pretty much everything was in-line SQL or saved queries (equivalent to Views). It was convenient, but it required table access and so wasn't nearly as secure, since controlling unintended access was a lot harder.

    Bottom line, even though you need like 10 SPs per table for CRUD/CRUDE (CRUD + multiple kinds of reads), plus SPs for doing the actual work of the app, IMO they are a far better alternative to in-line SQL.

I'm in the stored procedure camp too; however, I would like to see SQL Server adopt the concept of packages in the same way as Oracle, to reduce the overall complexity of managing such a large set of procedures and functions within the database. I think many DBAs would benefit from this. I don't believe that a .NET assembly or an SSIS package is the answer either, as they rely on external features that are not always available.

I prefer to use views over stored procedures when retrieving data. Number of returned rows, simple filters and sorting belong outside the data-storage/validation realm in my opinion. That said, I would not sprinkle SQL statements throughout my applications either. I believe those belong in the SQL-specific area of the DAL.

While I'm usually a proponent of 'never say never', I have to come down heavily against the use of inline SQL in front-end app development.  At one point a number of years ago I was a DBA/SQL Developer for a large project that involved several user applications and hundreds of SQL stored procedures.  This experience taught me that there are entirely different skills required on each side.  At that point I had not done front-end development for a number of years, and would not have claimed to still have the appropriate skills for it.  However, I was always learning and developing new skills in SQL development.  Aiding the developers in their work by using our skills in SQL was enough of a challenge without having SQL buried in application code that I had little or no capability in using.  The largest problem by far was the lack of graceful error handling at the SQL level in code written by developers, who were more concerned with simply getting data returned to their code and sending data back to the back end.  While this is a normal situation, it's not the best overall for performance and data accuracy.  Couple this with the near impossibility of getting bug fixes approved for implementation by project management and you do not have a good situation for users OR data.
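
    Even a minimal pattern like this in every procedure (a sketch, with invented object names) would have avoided most of that grief:

        CREATE PROCEDURE dbo.Order_Insert
            @CustomerID int,
            @OrderDate  datetime
        AS
        BEGIN
            SET NOCOUNT ON;
            BEGIN TRY
                BEGIN TRANSACTION;
                INSERT INTO dbo.Orders (CustomerID, OrderDate)
                VALUES (@CustomerID, @OrderDate);
                COMMIT TRANSACTION;
            END TRY
            BEGIN CATCH
                IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
                -- Re-throw so the caller sees the original error instead of a silent failure.
                THROW;
            END CATCH
        END;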

    Rick
    Disaster Recovery = Backup ( Backup ( Your Backup ) )

  • I use inline SQL because our loan servicing system is hosted and I have only read access to the database outside of the application.  So if I need to query in my application and the data needs to be real-time I have to use inline sql.  If not real-time I have a database where I pull copies of the tables on a nightly basis and use them for this data where I do have control of the database and can create stored procedures.

  • bkubicek - Friday, March 16, 2018 6:48 AM

    An SQL project in VS is an SQL project; when you have an application (web, rich client, whatever) as well, you have two different projects that need to be in lock-step in order to work. While changes can be scripted, this adds an additional layer to the system and an additional place for things to go wrong - who checks and validates the scripts? How about automated testing? When these scripts also include stored procedures and functions, the level of complexity and the formal practices required grow greatly as well. If the SQL side (SPs and UDFs) is one project and the application another, how does the application validate the version of these calls, which are effectively an API? What should the application do when presented with an unexpected version of an API call (stored procedure or function)?

    I'm not against stored procedures and functions (quite the opposite), just that they are not a panacea for everything, and the appropriate management of them adds another overhead to a project which, if not planned for as an integral part of it, will cause problems.
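
    One workaround is to have the database expose a version that the application checks at start-up - a sketch only, and it still relies entirely on convention rather than anything enforced:

        -- Bumped by hand whenever a procedure's contract changes (convention, not enforcement).
        CREATE PROCEDURE dbo.GetInterfaceVersion
        AS
            SELECT InterfaceVersion = 42;
        GO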

  • I agree with the use of stored procedures for all the reasons mentioned in the article and, because I was on the standards committee, managed to get this included in the development standards manual. However, a cautionary tale; one of our more recalcitrant developers simply created a stored procedure that had a single input parameter (varchar(2000)) and executed it! His application just supplied the in-line SQL statement to the stored procedure via the parameter instead of running the statement directly. SQL injection? He'd never heard of it.
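
    In effect (a reconstruction, not his actual code), the procedure was nothing more than a pass-through that re-opened the door to injection:

        CREATE PROCEDURE dbo.ExecuteAnything
            @sql varchar(2000)
        AS
            -- Whatever string the application sends gets executed verbatim.
            EXEC (@sql);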

  • n.ryan - Friday, March 16, 2018 8:05 AM

    You make a good point about versioning.  In our environment we do not have multiple developers working on the same stuff at the same time.  So in general, if you check something in, it should be current and ready to go.  I can see how, if you have multiple developers all working on the same things at the same time, you can have a bit of a mess trying to figure out which version goes with what.  In our environment we have 38 plants.  If a plant wants something new, they have to upgrade to the latest version.  Of course, there is a good amount of time where everything has to be tested in a test environment before things are ready to go to prod.  Still, like I said before, it is working well for us and is a lot better than some of our older apps that are still using in-line sql.  Of course, since those apps are pretty stable, there isn't a lot of motivation to remove the in-line sql or change those apps in any way.

    Ben

My general feeling is that it depends on how closely you are working with the application developers, but then again if you don't trust the developers to be at least somewhat competent you likely have bigger issues.  And from a practical standpoint, not all applications consume stored procedures as easily as they consume views/direct tables.  It's always seemed asinine to me to create SPs that are called by inline SQL just to do simple CRUD operations.

  • Stefan LG - Friday, March 16, 2018 12:30 AM

    The slow DBA-scenario is not really an excuse and rather points to some other organisational problem that should have been fixed earlier!

    Agreed.  Most developers do not like the process of submitting a ticket to the DBA with the code attached for documentation:

    • This takes time out of their day. 
    • This hurts the dev's ego, as they think they are "beneath" the DBA and begging for permission (classic DevOps vs DBA head butting).
    • Most of the time a QA process will be done by the DBA, including checking the estimated execution plan.  Devs do not want their code reviewed, because they wrote it and it is the best code ever!

    The slow release excuse rings hollow.  The devs ALWAYS want immediate release when they are done with their part.  All they have to do is agree upon a standard 2-3 business day SLA for release.  If the DBA cannot meet the SLA then either he/she needs additional help on the DBA team or the individual is incompetent/lazy.

    Nearly every problem that persists in our processing is a result of inline code in an application.  Just yesterday deadlocks started popping up in one of our .NET apps.  Why?  Changes to inline SQL code in the app that were not properly tested.

  • hilltx - Friday, March 16, 2018 7:42 AM

    I use inline SQL because our loan servicing system is hosted and I have only read access to the database outside of the application.  So if I need to query in my application and the data needs to be real-time I have to use inline sql.  If not real-time I have a database where I pull copies of the tables on a nightly basis and use them for this data where I do have control of the database and can create stored procedures.

    If you are a DBA/SQL Developer, I think I'd be 'movin' on' from your position.  Go somewhere where your skills are appreciated.

    Rick
    Disaster Recovery = Backup ( Backup ( Your Backup ) )

  • I also fall into the stored procedure camp. I prefer to see business logic in the data layer in stored procedures.
