• As usual, this conversation grew past my ability to dash off a 'quick reply' to each piece, so my tardiness isn't me ignoring it; I wanted enough time to process your comments and to make sure I was being clear.

    peter-757102 (5/10/2013)


    Performance is assisted because you can create better tuned (dynamic) queries without procedural overhead in the database.

    You'll have to clarify this for me, because no matter what you write, there will be procedural overhead in the database. Using dynamic SQL in this manner just means you're compiling queries more often.
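    If you want to see that effect for yourself, here's a rough sketch against the standard plan cache DMVs (the LIKE filter is just a stand-in for your own query text): ad-hoc statements show up as separate 'Adhoc' entries with low use counts, while parameterized and proc plans get reused.

    SELECT cp.objtype,   -- 'Adhoc', 'Prepared', or 'Proc'
           cp.usecounts, -- how many times this cached plan was reused
           st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE '%YourQueryFragment%'
    ORDER BY cp.usecounts DESC;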

    Procedural logic to make SQL code adaptive is more efficiently handled in a language suited to procedural programming.

    Adaptive SQL is not the purpose in most use cases, even small ones, in my experience. Optimized SQL is. If you don't care about optimization due to size, then the method can be whatever you like as long as it's secure. Three rules, in this order: Accurate, Efficient, Flexible. Flexible must be third, as the DB is the bottleneck otherwise.

    Another advantage of tight binding is that the SQL code that is written can use the same declared values for status (and other) codes as is used by the rest of the application.

    Can you clarify what you mean by this? I'm afraid whatever correlation you're trying to make is missing me. You mean constant declarations?

    There still needs to be a match between these declarations and the actual status table contents of course. But at least you do not end up with unmaintainable constants in the SQL code as you would be forced to when writing stored procs!

    How are constants any more or less maintainable when sent as a parameter to a proc or directly inserted into your sp_executesql as a parameter? This offer of preferred functionality doesn't make any sense to me.
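    To illustrate why I see no difference (the table and proc names here are invented for the example), the application's declared constant travels as a parameter either way:

    DECLARE @activeStatus int = 1;  -- hypothetical status code declared app-side

    -- Sent to a proc:
    EXEC dbo.GetOrdersByStatus @status = @activeStatus;

    -- Or sent as dynamic SQL:
    EXEC sp_executesql
        N'SELECT OrderID FROM dbo.Orders WHERE StatusCode = @status',
        N'@status int',
        @status = @activeStatus;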

    Firing unneeded queries happens when people write code not knowing what that code really does behind the abstractions they use. This is the danger of abstraction and misuse is promoted by constructs like LINQ and ORM's (and yes, this is true for every hidden abstraction, thus stored procedures as well). Developers are using these abstractions most often because they never learned proper SQL.

    With this, I'm in complete agreement. There are other concerns I have with ORM, like multiple instances holding cached data that collides when updates against the same key are flushed to the db, if implemented badly, but that's another discussion.

    Let's assume ORM and LINQ are miserable and stay on target: directly written SQL in the code being sent as ad-hoc calls.

    Which is a shame, as any half decent level developer should know quite a bit of advanced SQL (I intend to write a piece on that someday, also for internal use). That there exists a lot of bad database-related code in applications "out there" just has another root cause than tight binding.

    While I don't disagree with this in theory, you may be part of the 2% I know of out there. Most app developers write horrible dataset logic, and most DB devs can't write efficiently in iterative instance languages. For the record, I am NOT one of those 2%. My app code is crap. The basic logical approaches to the desired goal are so different that it's rare to find anyone who can do both well at more than a basic level without having multiple personalities.

    Now, for advanced SQL? Most app developers are spending their lives just trying to keep up with the libraries that are being shoved down their throats in new versions and whatever they just downloaded when they googled up a solution. They don't have time to deal with the datastore (not even nearing the data engine) as long as it works. I disagree. You can have a life or you can keep yourself up to date on everything in the industry. Rarely can you do both.

    And just to be clear, I want developers to write SQL and become good at it! It promotes awareness of the data and structure they operate on and leads to better understanding, better choices and, in the end, better developers too. For this alone, schema awareness is essential, and shielding the developer from it is a very bad path to follow!

    I am more than happy to train my devs when they're interested and give them rules of thumb to follow, with a DBA reviewing once they're complete to double-check execution plans and the like. I in no way advocate shielding my devs from the schema. Exactly the opposite. I do however grief the everloving crap out of them if they start sending down ad-hoc queries instead of stabilizing it in a procedure.

    Your argument that changing the schema is easier with a stored proc is comparing apples to oranges. That same stored proc can be implemented effectively as a single method in the application for application specific databases. And when a schema changes significantly, the application needs to change just the same!

    Not always. I've worked on many an app where schema modifications were 'plumbing repairs' rather than interface changes, and the app never had to see them.

    Not to mention the advantages of having that code in the same version control as the application!

    Brakes. Let's slow down here.

    I want to make sure we're clear. There are some components of SQL Server that can act like your front end controls, are subject to effective continuous integration, and can easily be swapped between versions: procs, functions, security, non-indexed views.

    There is something that CANNOT be versioned that way... the tables. Schema adjustments are massive undertakings that affect everything lying on top of them. It's like screwing with the foundation of a house instead of just moving the furniture around and maybe a wall or two. Table modifications (and indexed views) are a special case of versioning and need to be handled with care.

    As long as that's clear, and we're discussing that, then yes. I agree that you're looking at a single role solution instead of a dual role solution, and embedding the SQL into the app code saves you a step of organization and possible mix-ups. That alone is not enough of a reason for me to advocate ad-hoc querying, which is what front end app code that doesn't call procs is doing. Ad-hoc queries, sp_executesql parameterization or not.

    You are thinking a one size fits all thought. What I wrote is that only in the rare exception (for most apps) do you need a stored proc for security reasons, and only then do you have to write one. Then a response of "it would be already there" doesn't make much sense! How would always writing procedures be better?

    Because if you've written good SQL, then wrapping it in a procedure does three things (see the sketch after this list):

    1) Security controls are already built in.

    2) Proc cache optimization.

    3) Injection protection.
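    Here's the sketch I promised, using the made-up tbl and procBlah from the example below; the only right the application needs is EXECUTE:

    CREATE PROCEDURE dbo.procBlah
        @a int,
        @b int
    AS
    BEGIN
        SET NOCOUNT ON;
        SELECT a, b          -- name the columns rather than SELECT *
        FROM dbo.tbl
        WHERE a = @a AND b = @b;
    END;
    GO

    -- 1) Security: the app role gets EXECUTE only, no table rights.
    GRANT EXECUTE ON dbo.procBlah TO AppRole;
    -- 2) The proc's plan is cached and reused across calls.
    -- 3) @a and @b are typed parameters; values are never concatenated
    --    into the statement text.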

    Good SQL sent from the front end looks like this (to protect from injection, and note that sp_executesql also needs the parameter definition list):

    EXEC sp_executesql N'SELECT * FROM tbl WHERE a = @a AND b = @b', N'@a int, @b int', @a = @localVariableA, @b = @localVariableB

    Which is roughly the equivalent of:

    EXEC procBlah @a = @localVariableA, @b = @localVariableB

    So, all you've done is wrap it as a proc call and made sure that it's executable independently of the code for logistics, security, and troubleshooting. What does leaving it in the code do besides open cans of worms no one wants?

    Evil Kraig F (5/8/2013)

    I have not worked with DB2. At the same time I do not get the feeling you understand where I am coming from. An application consists of several layers; one is the data access layer (DAL for short), and even that can be subdivided into more layers.

    I'm familiar with N-Tier architecture. I'm also familiar with a DAL using procs and/or PL/SQL and/or Paradox and/or... to talk to the individual systems in their particular methods while giving a consistent interface to the front end devs.

    What I am talking about is that, in at least the lowest layer of the DAL, there is 100% awareness of the database schema. Everything acting directly on one entity is found there, including constants related to the schema. Things spanning functional groups of entities can be found in higher DAL layers.

    If you tell me you open database transactions from this layer, which is what you're implying, I'm going to cry in public and run screaming. If you don't know the concerns there, hit up Google for a bit for the horror stories.

    The decoupling between schema and interface in this approach does not happen at the application/database boundary, but instead above the DAL (which resides in the application). It is the interface where you generally find things out of direct alignment with the schema. Additional models supporting those interfaces represent the solution to this disconnect and keep both model groups clean. The DAL (database or otherwise) should not be inflicted with changes every time some part of the user interface is re-arranged.

    Why would the DAL be modified if you're moving labels around? If you need new/different data, you must affect the DAL, or your DAL is horribly inefficient. This is more theory at this point, but proc or N-Tier DAL, you must affect the DAL when you affect the expectations of the usage layer.

    We are on the same page here I think. What I mean with “richer” is handling anything beyond the capabilities of the relational model and constraints. Think of keeping columns from different entities that are somehow related, in sync. You can write triggers for this too, but as usual it depends on the specifics if that is a good solution or not.

    This is coming back to the external transactions controlling data. The reason you rely on triggers is to make sure the data is always consistent within a particular transaction. You do not want external structures controlling transactional integrity. The volume of problems that occur when a simple shared connection overloads or you glitch on communications is amazing and insanely disruptive. I've seen people fired on the spot for coding it that way simply because they forced a database reboot for multiple applications with one glitch after being warned of the concerns.
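    To make the trigger point concrete, here's a minimal sketch (tables and columns invented): the trigger fires inside the updating transaction, so the related column changes with the source row or the whole thing rolls back, with no external coordinator to glitch.

    CREATE TRIGGER trg_Customers_SyncName
    ON dbo.Customers
    AFTER UPDATE
    AS
    BEGIN
        SET NOCOUNT ON;
        IF UPDATE(CustomerName)
            UPDATE o
            SET    o.CustomerName = i.CustomerName
            FROM   dbo.OpenOrders AS o
            JOIN   inserted AS i ON i.CustomerID = o.CustomerID;
    END;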

    15-20 milliseconds sounds like an awful lot to me, what I have in mind will be nowhere near even one millisecond! And the cost of parameter passing you will have is identical for stored procedure calls!

    We're fine till this... however:

    And like dynamic SQL, processing stored procedures isn’t free either, and generally procedural logic is not efficient in SQL.

    Okay, please elaborate. If you're going to tell me GIGO, then fine, it's crap no matter where a bad dev writes it. Otherwise my eyebrows are going to climb into my hairline at that assertion and wonder if we're working with the same product.

    You seem to assume that dynamic SQL sent from the application DAL is somehow not parameterized. To be clear, all parameters not explicitly configured otherwise are parameterized in what I am describing. It is just that all the chores of coding this can be removed from the programmer without alienating him/her from the actual schema and the SQL.

    No, I'm assuming if you're parameterizing it there's no gain leaving it out of a proc except for catch-all search queries and/or dynamic report building, thus any other component of the discussion is based on ad-hoc querying.
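    For the record, the catch-all search exception looks something like this sketch (object names invented): the statement text varies with which filters are supplied, but every value still travels as a parameter.

    CREATE PROCEDURE dbo.SearchOrders
        @CustomerID int = NULL,
        @StatusCode int = NULL
    AS
    BEGIN
        DECLARE @sql nvarchar(max) =
            N'SELECT OrderID, CustomerID, StatusCode FROM dbo.Orders WHERE 1 = 1';
        IF @CustomerID IS NOT NULL SET @sql += N' AND CustomerID = @CustomerID';
        IF @StatusCode IS NOT NULL SET @sql += N' AND StatusCode = @StatusCode';

        EXEC sp_executesql @sql,
            N'@CustomerID int, @StatusCode int',
            @CustomerID = @CustomerID,
            @StatusCode = @StatusCode;
    END;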

    SSRS is awful from what I’ve seen of it; I wouldn’t dare to sell such a solution if my life depended on it (ok, maybe I would in that extreme, but you get the idea)! It’s so disconnected from what a user really needs and totally not part of the application regular users use. It also requires its own database, backup procedures, users, has a bad interface, you name it. Nothing I would like to use as a developer either; it is just really bad all-around. In short, application specific reporting functionality never needs this, let alone the dependencies that come with it.

    We have particularly different experiences with SSRS, but I'm just as comfortable embedding Crystal Reports into ASP apps as well, though I'm not up on the current versions of Crystal... haven't needed to be. I've done plenty of app specific reporting on it and have little to no issues. I also find it's very easy to train my users once in it and then let them loose once new reports become available for them. They don't need to do anything but fill in parameters and get their data.

    That is just overhead and does not allow for maintainable SQL code. It does however allow you to apply procedure level security, which as I made clear most single purpose application databases really do not require.

    No, I'm afraid you haven't made that clear to me. Most databases are designed for single application access. Most databases use procs to add a layer of security and troubleshooting between the front end app and the back end storage. Should a user get your front end user/password, he has free rein in the database. Period. With execution rights to procs only, he doesn't. Would they eventually get to the same result? Most likely. Security measures are time sinks to slow attackers while you trace them and either block them or apprehend them.
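    As a sketch of what I mean (role and user names made up; on pre-2012 servers you'd use sp_addrolemember instead of ALTER ROLE):

    CREATE ROLE AppExecOnly;
    GRANT EXECUTE ON SCHEMA::dbo TO AppExecOnly;  -- procs and functions only
    ALTER ROLE AppExecOnly ADD MEMBER AppUser;
    -- No SELECT/INSERT/UPDATE/DELETE is granted on the tables, so a
    -- captured app login can only call the procs you wrote, not fire
    -- arbitrary ad-hoc statements at the base tables.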

    As to maintainable SQL code, I'm sorry, but you've given me a chuckle with that one. I have a metric crapton of SQL code out there that I maintain, and I'm sure plenty of others do too... we maintain them just fine in procs.

    In the end it is not about what is more difficult, it is about what develops easier and is cleaner. Having access to constants in an application DAL is one benefit, and source control is another. From a developer perspective, having a stored procedure to call would still mean calling a proxy method for that stored procedure, which in turn is really a proxy for the SQL batch inside the procedure. That chain is overhead and complexity. Then consider aspects like error reporting when a query fails. The application being aware of the query allows much more detailed error reporting, allowing things to be fixed faster!

    And as a developer and the guy called at 3AM when the database fails, I need a quick and simple method to determine where the failure is. The error telling me that proc kldjfalsdjfalk coughed up error XYZ from the stack means I go beat on a proc. It's a separation of church and state, if you will, and can halve your troubleshooting in a single glance/test. I believe your method ADDS complexity, not reduces it, at least past the initial development phase. It's not enough to build something, you must maintain it.
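    To be concrete about that 3AM call, when the failure happens inside a proc the module name falls out of the error functions for free; a sketch:

    BEGIN TRY
        EXEC dbo.procBlah @a = 1, @b = 2;
    END TRY
    BEGIN CATCH
        SELECT ERROR_PROCEDURE() AS FailingProc,  -- NULL for an ad-hoc batch
               ERROR_NUMBER()    AS ErrorNumber,
               ERROR_LINE()      AS ErrorLine,
               ERROR_MESSAGE()   AS ErrorMessage;
    END CATCH;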

    No matter what, the dev is using a proxy method; I agree. There's no reason not to use the one that adds security and ease of use.

    Error reporting from an sp_executesql call vs. an EXEC proc call is roughly equivalent. What differences are you expecting there?

    Can’t say I have, as I do not know of CI. I did look it up and maybe you mean CodeIgniter for PHP? I simply haven’t worked with that, and I wouldn’t, as I don’t like PHP at all for various reasons. I haven’t checked the latest versions to see if they’ve improved.

    Sorry. CI = Continuous Integration


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA