Looking into Forced Parameterization

  • Another aside, going back to the difference between SIMPLE and FORCED: I set the DB to SIMPLE mode and then ran a couple of very simple queries where the only difference was the value of one parameter, e.g.

    SELECT * FROM BLAH WHERE NAME='Rob'

    SELECT * FROM BLAH WHERE NAME='James'

    This resulted in 2 different cached plans, both appearing as Adhoc, with no parameterization occurring.

    I don't know how much simpler you could make a query. I thought that SIMPLE mode would parameterize those simple queries it could; however, that doesn't seem to be the case from the tests I have done.
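
    A sketch of the plan-cache check I used to confirm this (same BLAH table as above; adjust the names to suit your schema):

    ```sql
    -- Each literal produced its own cache entry with objtype = 'Adhoc';
    -- an auto-parameterized version would appear as 'Prepared' instead.
    SELECT cp.objtype, cp.usecounts, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE '%FROM BLAH%';
    ```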

  • Yeah, it was the number of plan guides that you'd need that made me balk. You can create a plan guide for an ad hoc query, but any variation in the query, including case changes in the SQL text, can require another guide.

    As far as simple parameterization, when I was working on the book, it took a while to get queries that consistently generated parameterized plans. The only difference between what I did and what you did, was that I fully qualified the queries:

    SELECT a.*
    FROM Person.Address AS a
    WHERE a.AddressID = 52

    Interestingly enough, this query (note the case change) will be matched to the same plan:

    SELECT a.*
    from Person.Address AS a
    where a.AddressID = 42

    The best way to phrase it is that SQL Server is highly conservative about where it applies simple parameterization.
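
    When simple parameterization does fire, it's easy to confirm: the shared plan is cached as a Prepared object with the literal folded into a typed parameter. A quick sketch against the plan cache (same objects as above):

    ```sql
    -- With two different literals, the cache should hold one shared plan
    -- with objtype = 'Prepared' and text similar to:
    --   (@1 int)SELECT a.* FROM [Person].[Address] a WHERE a.[AddressID]=@1
    SELECT cp.objtype, cp.usecounts, st.text
    FROM sys.dm_exec_cached_plans AS cp
    CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
    WHERE st.text LIKE '%AddressID%';
    ```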

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Novel suggestion: why don't you actually find out what is causing the problem, and then try to fix that? 😀

    Seriously, monitor the various Perf Mon counters under SQL Statistics to see the number of Compiles and Recompiles per second. Note there are also counters there for various parameterization efforts (and failures) as well.
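
    If it's easier to script than Perf Mon, the same counters can be read from a DMV. A rough sketch (note the /sec counters in this view are cumulative since the last restart, so sample twice and take the difference):

    ```sql
    -- Compilation and auto-parameterization counters, server-wide.
    SELECT counter_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE object_name LIKE '%SQL Statistics%'
      AND counter_name IN ('SQL Compilations/sec',
                           'SQL Re-Compilations/sec',
                           'Auto-Param Attempts/sec',
                           'Failed Auto-Params/sec',
                           'Safe Auto-Params/sec',
                           'Unsafe Auto-Params/sec',
                           'Forced Parameterizations/sec');
    ```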

    Also, examine the execution statistics to see if your executions are beating the heck out of the CPU. Lots of things could be causing that, including parameter sniffing/bad plans. Also excessive parallelization, poor indexing, etc.
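
    The execution statistics live in a DMV as well; a rough sketch for spotting the cached statements beating up the CPU:

    ```sql
    -- Top 10 cached statements by total CPU time (microseconds).
    SELECT TOP (10)
           qs.execution_count,
           qs.total_worker_time AS total_cpu_us,
           qs.total_worker_time / qs.execution_count AS avg_cpu_us,
           SUBSTRING(st.text, qs.statement_start_offset / 2 + 1, 200) AS query_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;
    ```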

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • The page in question loads fast and runs okay 98% of the time. The query is indexed correctly and I am looking into the query plans. The problem only occurs a small percentage of the time, and we haven't been able to identify definitively what is causing it, although we suspect it is down to concurrent crawlers indexing every category/page/order variation on the results page. Although the CPU doesn't jump so high that it kills the server, it does jump higher than normal.

    I can now see and prove that the query is getting recompiled for slight variations of parameters where plan re-use should benefit. From the tests I have done, it does increase the load time of the page in question.

    Are you saying that a recompilation for each query wouldn't be something to resolve if possible?

    I'm sure you mentioned in another discussion the other day that you increased throughput by 30% using FORCED parameterization?

  • Rob Reid (6/10/2009)


    The page in question loads fast and runs okay 98% of the time. The query is indexed correctly and I am looking into the query plans. The problem only occurs a small percentage of the time, and we haven't been able to identify definitively what is causing it, although we suspect it is down to concurrent crawlers indexing every category/page/order variation on the results page. Although the CPU doesn't jump so high that it kills the server, it does jump higher than normal.

    I can now see and prove that the query is getting recompiled for slight variations of parameters where plan re-use should benefit. From the tests I have done, it does increase the load time of the page in question.

    Are you saying that a recompilation for each query wouldn't be something to resolve if possible?

    I'm sure you mentioned in another discussion the other day that you increased throughput by 30% using FORCED parameterization?

    If things run optimally 98% of the time, then it isn't about compiles/recompiles eating CPU. It is almost certainly about 1) getting suboptimal plans for given inputs, 2) other load on the server spiking, 3) blocking, or 4) simply a big-*** number of rows being hit. I presume you have checked and it is about item 1, or we wouldn't be discussing parameterization in the first place. 😉 Often the most effective way to 'help the optimizer' (outside of good indexes/statistics, which are a given) in complex, open-ended search/pagination scenarios is dynamic SQL. The engine can actually get a great deal of help too, if you set up the queries right.
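
    A minimal sketch of that dynamic SQL approach, with hypothetical table and parameter names: build only the predicates the caller actually supplied, but still parameterize through sp_executesql so each distinct query shape compiles once and its plan is re-used.

    ```sql
    -- Hypothetical search proc: predicates are added conditionally,
    -- but values go in as parameters, not literals.
    CREATE PROCEDURE dbo.SearchJobs
        @CategoryID int = NULL,
        @Location   nvarchar(50) = NULL
    AS
    BEGIN
        DECLARE @sql nvarchar(max);
        SET @sql = N'SELECT j.JobID, j.Title FROM dbo.Jobs AS j WHERE 1 = 1';

        IF @CategoryID IS NOT NULL
            SET @sql = @sql + N' AND j.CategoryID = @CategoryID';
        IF @Location IS NOT NULL
            SET @sql = @sql + N' AND j.Location = @Location';

        EXEC sys.sp_executesql @sql,
             N'@CategoryID int, @Location nvarchar(50)',
             @CategoryID = @CategoryID,
             @Location   = @Location;
    END;
    ```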

    My comment about significant gains for a client with FORCED PARAMETERIZATION enabled was for VERY bad ad hoc code that was executing thousands of calls per second of a very similar nature (such as select value from table where employeeid = '12345', then select value from table where employeeid = '12346', etc., looping and looping instead of doing anything at all in a set-based manner). Compile and recompile per-second numbers were brutal, as was the CPU hit, obviously.

    Forced parameterization is not a panacea, and it WILL screw you in some cases, which is why it is off by default. Given what I have read about your problem you have much better options to pursue first.

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • The server is a dedicated server with just this system running on it.

    The biggest SITE_JOBS_INDEX table has 2,299 rows; the biggest SITE_CATEGORY_MATRIX table has 119,000.

    I have a job that monitors blocked processes, deadlocks, etc., and this query is not mentioned. Currently the only thing logged is a nightly index defrag job, which was failing because page locks were not enabled on a couple of indexes (on unrelated tables); I have just fixed that.

    I cannot, without quite a lot of work, rewrite the query to use SQL Server-side dynamic SQL, for the reasons mentioned before, which are the same reasons I cannot rewrite it to use client-side parameterized SQL, i.e. the WHERE/ORDER BY clauses are cached and re-used by the web application across numerous queries on that page.

    All the CUD from the CRUD 😀 is in stored procs, so it already benefits from parameterization, as does the majority of the complex R/SELECTs that the site uses. The rest of the client-side SELECTs are split between:

    - simple SELECTs for lookup lists, drop-downs, array population, etc.

    - newer SELECTs that make use of ADO and are parameterized anyway.

    - this main results page SELECT, which is built from literals, is the query hit the majority of the time (especially by bots), and looks like it would benefit from parameterization.

    Thanks for your help.

    Also can you give some more examples of how turning this on will bite my *** please 🙂

  • Based on the research and testing I've done with it, it's unlikely to cause much negative impact. It's also unlikely to cause much positive impact except in special circumstances. I'm not sure you meet all the criteria for the positive impact, but it does seem to be helping this query.

    I'd do a regression test with other queries, just in case, prior to putting it into production.

    It's interesting that you have the CUD part of CRUD in procs, but not the reads. Most of the projects I'm working on these days are going in the exact opposite direction: most of the CUD is in dynamic, client-side T-SQL, but the reads are in stored procedures, where we can tune them.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • The main area where you can get into trouble with forced parameterization is when you have lots of calls against values that hit widely disparate numbers of rows, especially if multiple tables are involved in joins. You wind up with the devastating nested-loop-lookup-on-a-kajillion-rows executions. Allowing the use of actual literals can avoid this.
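
    For reference, the switch itself and the usual per-statement escape hatch, sketched with hypothetical object names:

    ```sql
    -- Enable forced parameterization database-wide (it is off by default):
    ALTER DATABASE MyDatabase SET PARAMETERIZATION FORCED;

    -- For a statement that gets a bad shared plan on skewed values,
    -- compile with the actual runtime values every time instead:
    SELECT o.OrderID
    FROM dbo.Orders AS o
    WHERE o.CustomerID = @CustomerID
    OPTION (RECOMPILE);
    ```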

    Best,
    Kevin G. Boles
    SQL Server Consultant
    SQL MVP 2007-2012
    TheSQLGuru on googles mail service

  • An old version of the site was written that way; however, we moved all CUD to procs for security reasons, as it meant we didn't have to grant write access to our web logon. So even if someone forgot to sanitise their params correctly on a SELECT statement, the DB wouldn't be affected by all the hackbots that hit our sites constantly, 24/7. We had one episode where another developer forgot to check that a value from a listbox on a news article filter page was an integer and uploaded the file to our live site without testing; within minutes a hackbot had found the hole in the SELECT and inserted thousands of SCRIPT links to dodgy sites.

    That was a fun morning 🙂

    Also, a lot of the save processes are not simple INSERTs or UPDATEs; numerous tables have to be checked, affected, rolled back, etc., so it's easier to read, from a development point of view, when contained within a proc rather than in client-side code.

    I'm not saying it's the right way to go, and if the site had been written in .NET it might have had lots of updatable data grids everywhere. However, it's one of those sites that has gotten so big that the cost/time to redevelop it in a newer technology is probably too much compared with just correcting the parts that need attention. All in all, the site is secure, works well, has lots of users, and it's easy to set up new systems; there are just a few parts that could do with some more tuning.
