One size does not fit all

  • Well said. But, in the author's defense, the argument has been around for a long time; it's not a new one. And every SQL Server application, regardless of how the SQL is generated and communicated, will have to observe the best practices that SQL requires for optimization. For example: non-sargable predicates aren't a good idea, query using an index instead, avoid table scans, don't join 192 tables (I love that example!), and so on. You either agree to do that in your data access layer, or you do not. Even if your data access layer sits in SQL Server, you can still make all these mistakes; they don't just suddenly go away. It all depends on what the demand is and how you intend to architect your query strategy against the database to serve the requirement. A store-front application is a completely different requirement than, say, a reporting application.
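
    A minimal illustration of the sargability point (the table and column names here are made up for the example):

    ```sql
    -- Non-sargable: wrapping the indexed column in a function forces a scan
    SELECT OrderID, OrderDate
    FROM   dbo.Orders                 -- hypothetical table
    WHERE  YEAR(OrderDate) = 2014;

    -- Sargable rewrite: a plain range predicate lets the optimizer seek on an OrderDate index
    SELECT OrderID, OrderDate
    FROM   dbo.Orders
    WHERE  OrderDate >= '20140101'
      AND  OrderDate <  '20150101';
    ```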

    Great discussion!

  • dmbaker (3/6/2014)


    Eric M Russell (3/6/2014)


    It's true that by coding batch processing logic within the application, your SQL code can be simplified and constructed in a more cross-platform fashion. Basic SELECT, INSERT, and UPDATE operations can be coded using ANSI compliant syntax, using none of the T-SQL and PL/SQL extensions.

    However, object-relational impedance mismatch, that thingy some application developers fret about when accessing tables directly, is actually mitigated when parameter-driven stored procedures perform batch processing on the database server side.

    Also, client-side cursors are still cursors, and locks can be held open for longer periods of time. From an architectural perspective, streaming your data across the network, serializing it into objects, processing it one record at a time, and then streaming it back across the network to the database server can have performance ramifications. Databases typically fail because they failed to scale, not because they failed to port.

    I really wasn't talking about trying to code generic "cross-platform" SQL. Personally, I think that's an unrealistic pipe dream (you generally end up coding to the least common denominator, and though your code may work, it'll never work as well as it could).

    Can you elaborate more on how stored procedures mitigate "impedance mismatch"? I've not really found that they have anything to do with the matter personally (although they can certainly be a player in solving the problem), but I'd like to better understand what you mean in that regard.

    The latest version of ANSI SQL supports window (ranking), analytical, and temporal functions now, so it has most everything needed for advanced aggregating and filtering of data on the server side. If the dataset is then brought down into the application tier, then looping and error handling constructs, pattern matching functions, etc. can all be implemented programmatically. I've written applications that could work heterogeneously with SQL Server, Oracle, or an MS Access database, although it did require some minor platform-specific token substitution in some cases.
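
    For example, a simple ranking query like the one below runs unchanged on most platforms that implement the standard (the Orders table is purely illustrative):

    ```sql
    -- ANSI-standard window function: number each customer's orders, newest first
    SELECT CustomerID,
           OrderID,
           OrderDate,
           ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY OrderDate DESC) AS OrderRank
    FROM   Orders;   -- hypothetical table
    ```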

    Even going forward, I wouldn't dismiss this embedded SQL architectural approach. It's more appropriate for third-party ISV-type applications that fit into the simplistic select customer, edit customer, save customer model. On the other hand, it's very inappropriate for business intelligence (large datasets) or high-volume transaction processing, and that's the type of environment I typically work in.

    As for "impedance mismatch": it's not that stored procedures prevent the issue; it's that the stored procedure becomes an abstraction layer for application developers who don't know, or would prefer not to know, the internals of the database model or working with SQL and relational tables in general. For example, I've written stored procedures that accept an XML document as an input parameter and also return resultsets as XML. The responsibility of persisting object data is shifted to the database developer or DBA, who is more experienced with the database model. Also, if the data model changes, the required coding changes are confined within the stored procedure and kept isolated from the application.
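
    A stripped-down sketch of that pattern (the procedure, table, and element names are invented purely for illustration):

    ```sql
    CREATE PROCEDURE dbo.Customer_Save
        @CustomerXml XML
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Shred the serialized object into the relational schema
        INSERT INTO dbo.Customer (CustomerID, CustomerName)
        SELECT c.value('(CustomerID)[1]',   'INT'),
               c.value('(CustomerName)[1]', 'NVARCHAR(100)')
        FROM   @CustomerXml.nodes('/Customer') AS t(c);

        -- Return the persisted rows to the caller as XML
        SELECT CustomerID, CustomerName
        FROM   dbo.Customer
        FOR XML PATH('Customer'), ROOT('Customers');
    END;
    ```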

    Personally, I've never considered impedance mismatch to be that big of a deal. Then again, it's much easier for me to make the required SQL changes since I'm the one who also changed the database schema, so I know what needs to be done.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • I agree with most of what's been said. I've seen a lot of .NET code that produces T-SQL that is unreadable. Looking at the execution plan could take a day. One slick app we have at the hospital I work for is a room/bed app for the cleaning folks. A 400-bed database produces 10,000 page reads that take 500 milliseconds. The query is unreadable, with unions and subqueries galore.

  • Eric M Russell (3/6/2014)


    dmbaker (3/6/2014)


    Eric M Russell (3/6/2014)


    It's true that by coding batch processing logic within the application, your SQL code can be simplified and constructed in a more cross-platform fashion. Basic SELECT, INSERT, and UPDATE operations can be coded using ANSI compliant syntax, using none of the T-SQL and PL/SQL extensions.

    However, object-relational impedance mismatch, that thingy some application developers fret about when accessing tables directly, is actually mitigated when parameter-driven stored procedures perform batch processing on the database server side.

    Also, client-side cursors are still cursors, and locks can be held open for longer periods of time. From an architectural perspective, streaming your data across the network, serializing it into objects, processing it one record at a time, and then streaming it back across the network to the database server can have performance ramifications. Databases typically fail because they failed to scale, not because they failed to port.

    I really wasn't talking about trying to code generic "cross-platform" SQL. Personally, I think that's an unrealistic pipe dream (you generally end up coding to the least common denominator, and though your code may work, it'll never work as well as it could).

    Can you elaborate more on how stored procedures mitigate "impedance mismatch"? I've not really found that they have anything to do with the matter personally (although they can certainly be a player in solving the problem), but I'd like to better understand what you mean in that regard.

    As for "impedance mismatch": it's not that stored procedures prevent the issue; it's that the stored procedure becomes an abstraction layer for application developers who don't know, or would prefer not to know, the internals of the database model or working with SQL and relational tables in general. For example, I've written stored procedures that accept an XML document as an input parameter and also return resultsets as XML. The responsibility of persisting object data is shifted to the database developer or DBA, who is more experienced with the database model. Also, if the data model changes, the required coding changes are confined within the stored procedure and kept isolated from the application.


    Are you talking about, for example, an application sending a stored proc an XML-serialized representation of an object?

    It doesn't sound like that's what you're talking about here, but maybe it is. If so, then that would indeed be one way to solve the mismatch problem, as your database code would then be able to translate the object into its corresponding relational representation. But for a large object set or a complicated object graph it seems it could potentially generate a whole lot of ugly SQL code, and you're not really isolating the programmers from the details of the database platform (be it SQL Server, Oracle, or whatever).

    Anyway, where you choose to put the abstraction layer is your business and you know best the problems you are trying to solve, and if what you're doing works for your situation then that's great! There's no "silver bullet", but I think that the real win is that "abstraction layer" and the interface to it. You have that, so you're well ahead of the game in that regard.

  • dmbaker (3/6/2014)


    Eric M Russell (3/6/2014)


    dmbaker (3/6/2014)


    Eric M Russell (3/6/2014)


    It's true that by coding batch processing logic within the application, your SQL code can be simplified and constructed in a more cross-platform fashion. Basic SELECT, INSERT, and UPDATE operations can be coded using ANSI compliant syntax, using none of the T-SQL and PL/SQL extensions.

    However, object-relational impedance mismatch, that thingy some application developers fret about when accessing tables directly, is actually mitigated when parameter-driven stored procedures perform batch processing on the database server side.

    Also, client-side cursors are still cursors, and locks can be held open for longer periods of time. From an architectural perspective, streaming your data across the network, serializing it into objects, processing it one record at a time, and then streaming it back across the network to the database server can have performance ramifications. Databases typically fail because they failed to scale, not because they failed to port.

    I really wasn't talking about trying to code generic "cross-platform" SQL. Personally, I think that's an unrealistic pipe dream (you generally end up coding to the least common denominator, and though your code may work, it'll never work as well as it could).

    Can you elaborate more on how stored procedures mitigate "impedance mismatch"? I've not really found that they have anything to do with the matter personally (although they can certainly be a player in solving the problem), but I'd like to better understand what you mean in that regard.

    As for "impedance mismatch": it's not that stored procedures prevent the issue; it's that the stored procedure becomes an abstraction layer for application developers who don't know, or would prefer not to know, the internals of the database model or working with SQL and relational tables in general. For example, I've written stored procedures that accept an XML document as an input parameter and also return resultsets as XML. The responsibility of persisting object data is shifted to the database developer or DBA, who is more experienced with the database model. Also, if the data model changes, the required coding changes are confined within the stored procedure and kept isolated from the application.


    Are you talking about, for example, an application sending a stored proc an XML-serialized representation of an object?

    It doesn't sound like that's what you're talking about here, but maybe it is. If so, then that would indeed be one way to solve the mismatch problem, as your database code would then be able to translate the object into its corresponding relational representation. But for a large object set or a complicated object graph it seems it could potentially generate a whole lot of ugly SQL code, and you're not really isolating the programmers from the details of the database platform (be it SQL Server, Oracle, or whatever).

    Anyway, where you choose to put the abstraction layer is your business and you know best the problems you are trying to solve, and if what you're doing works for your situation then that's great! There's no "silver bullet", but I think that the real win is that "abstraction layer" and the interface to it. You have that, so you're well ahead of the game in that regard.

    Yes, I'm talking about an application sending the stored procedure an XML-serialized representation of an object. From the application programmer's point of view, there is no "impedance mismatch", because they are isolated from the implementation details of the database schema and SQL syntax. The burden of parsing the object model from XML and persisting it to relational tables is my task within the stored procedure. T-SQL and PL/SQL have significantly different methods of handling XML parsing, but porting code between the two is fairly straightforward. I'd rather maintain the object persistence code in the database myself than constantly explain to the application team why the database model changed and apologize for any coding changes it may have caused them. Computers don't complain so long as the syntax is correct, and computers are more forgiving than people.

    The method used by the application programmer to call the stored procedure is slightly different depending on whether the database platform is SQL Server or Oracle, but how different depends on the application programmer, such as which provider library they choose to use. However, it's basically just making a single procedure call with a single XML input parameter. From their perspective, whether the backend database platform is SQL Server or Oracle is a simple switch in a configuration file.
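
    On the SQL Server side, the call boils down to something like this (again, the procedure and element names are invented for illustration; the Oracle version is the same single-parameter idea, just with an XMLTYPE or CLOB argument):

    ```sql
    -- One call, one XML parameter; the application never touches the table schema
    EXEC dbo.Customer_Save
         @CustomerXml = N'<Customer><CustomerID>42</CustomerID><CustomerName>Acme Ltd</CustomerName></Customer>';
    ```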

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • The emergence of hosted solutions will put this debate to bed. Only clients that can afford to waste money will opt for inefficient techniques. Not writing stored procedures in T-SQL when using SQL Server is just plain ridiculous. Those of you who share this opinion will reap the rewards of being able to charge for the privilege of doing something right.

  • A lot of this discussion appears to be based upon a single application utilising a single database. It often starts that way; however, that can easily change.

    This leads neatly to another point: application logic does not necessarily equal business logic. chris.bargh talks about business logic being best placed in the application and, on this point, I would have to disagree. Application logic lives in the application; however, business logic that needs to be enforced across the business must exist in a way that can be defined once and enforced everywhere. This can be achieved in a number of different ways, of which stored procedures, reusable assemblies, and SOA-styled services are just some of the solutions available to us.

    Some applications require supporting different databases, particularly if they are products. Clients may mandate use of a particular RDBMS and, of course, that leads to the likelihood of different clients having different requirements. The solution that Tom Bakerman suggests is one that I have commonly applied: applying a Dependency Injection/Inversion of Control pattern, you can have a single data service that employs separate, dynamically loaded data providers, each of which targets a different RDBMS.

    Also, there are many things that an ORM cannot know (as they stand today), such as what can be cached, the context in which it is called, etc.

    Perhaps the best solution is mixing ORMs and SPs as mentioned by call.copse but, of course, the mix would be from 0%/100% to 100%/0% and anywhere in between. As always, we have to make a judgement call for each case.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • I normally agree with (and am interested in) most of what is written in these blogs but in this instance this is not the case, to the extent that I felt I should respond.

    As an architect, as well as a developer, I see significant value in having our applications interoperate with a number of different DBMS types; indeed, it is sometimes a design imperative. In my experience, keeping the interface to them all as generic as possible greatly simplifies the ability to achieve this.

    I don't disagree with you in general. If the app is small scale and you just want to fill some drop-down boxes and grids, then sure, use something like EF. But once you start doing real work with the database, that is when you need to carefully consider how you are accessing and using it.

    If you have an application that is going to be heavily used and do a lot of data work, and you don't design it to use the advantages of your DBMS of choice, you are shooting yourself in the foot right at the starting gate. I have worked with large systems that were not optimized for any DBMS and it showed. It was slow and it was painful. It used no stored procedures at all and it was a nightmare. It could run on MS SQL or MySQL or Oracle, but it ran horribly on MS SQL and MySQL and Oracle.

    That is my main point: you have to stop and consider what is best.

  • Microsoft Access anyone??? :unsure:

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Gary Varga (3/7/2014)


    Microsoft Access anyone??? :unsure:

    In the case of MS Access, the application forms, reports, business logic, SQL, and database are all intertwined in one file. I guess that works if the programmer, DBA, QA, team lead, and end user all happen to be the same person too.

    🙂

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • While I would almost never want to use code generated by some tool, I would also say stored procedures are not the silver bullet for everything or everybody. I think we have all seen code written by other developers that may look just as bad. :-)

    -------------------------------------------------------------
    we travel not to escape life but for life not to escape us
    Don't fear failure, fear regret.
