Fun with sp_executeSQL

  • Comments posted to this topic are about the item Fun with sp_executeSQL

  • Excellent evidence-based article. More please. 🙂

  • Nice article. :-)

  • Thanks David, useful article.

  • Nice article, but I'm missing a mention of ad hoc queries and their influence on the proc cache. You can save a lot of memory by using the optimize for ad hoc workloads option.

    http://msdn.microsoft.com/en-us/library/cc645587.aspx

    For more details see this blog entry from Kimberly Tripp:

    http://sqlskills.com/BLOGS/KIMBERLY/post/Procedure-cache-and-optimizing-for-adhoc-workloads.aspx
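
    A minimal sketch of turning the option on (it's an advanced, server-level setting, so sysadmin rights are assumed; SQL Server 2008 and later):

        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'optimize for ad hoc workloads', 1;
        RECONFIGURE;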

    Have a nice day, Christoph

  • On your point regarding variable-length parameters, I've seen SubSonic (ORM) exhibit this behavior too. I figured out that was what was happening when I started digging into why my proc cache was 6GB. My devs changed their code to call procs (fixed-length parameters) and cut the proc cache down to 2.5GB. Happy day, my data cache hit ratio improved nicely with just a little bit of work.
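
    As a rough illustration of what that looks like at the cache level, here's a minimal sketch using sp_executesql (the dbo.Customers table and its nvarchar(50) LastName column are made up for the example; the point is only that the declared parameter length becomes part of the cached statement):

        -- Varying the declared length creates a separate cached plan for each length
        EXEC sp_executesql
             N'SELECT * FROM dbo.Customers WHERE LastName = @name',
             N'@name nvarchar(5)', @name = N'Smith';
        EXEC sp_executesql
             N'SELECT * FROM dbo.Customers WHERE LastName = @name',
             N'@name nvarchar(8)', @name = N'Smithson';

        -- Declaring the parameter at the column's defined length lets every call reuse one plan
        EXEC sp_executesql
             N'SELECT * FROM dbo.Customers WHERE LastName = @name',
             N'@name nvarchar(50)', @name = N'Smith';
        EXEC sp_executesql
             N'SELECT * FROM dbo.Customers WHERE LastName = @name',
             N'@name nvarchar(50)', @name = N'Smithson';

    The first pair leaves two cached plans that differ only in the parameter declaration; the second pair leaves one plan with a usecount of 2.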

  • Great article David. Thank you.

    I thought that there were limits imposed on the various memory components based on the total amount of memory given to SQL Server.

    For example: Proc Cache limits

    SQL Server 2005 SP2

    75% of server memory from 0-4GB +

    10% of server memory from 4GB-64GB +

    5% of server memory > 64GB

    So 14GB given to a SQL Server 2005 SP2 instance would yield a 4GB proc cache (75% of the first 4GB = 3GB, plus 10% of the remaining 10GB = 1GB).

    Is it correct to say that these limits do not exist and that the proc cache, for example, can consume additional memory?

    Thanks

    Mark

  • Christoph Muthmann (1/31/2012)


    Nice article, but I'm missing a mention of ad hoc queries and their influence on the proc cache. You can save a lot of memory by using the optimize for ad hoc workloads option.

    http://msdn.microsoft.com/en-us/library/cc645587.aspx

    Good catch, wish I'd known this 2 years ago! I've passed this one on to my colleagues.

    The worry now is that I'm going to hear "we don't have to fix this because there is a DB option". :crazy:

  • Mark, I haven't come across any predefined limits for the proc cache, but that's not to say they aren't there.

    Time for further research methinks!
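
    In the meantime, a minimal sketch of one way to see how much the plan cache is actually using on a given instance (sys.dm_exec_cached_plans is available from SQL Server 2005 onwards):

        -- Cached plan count and size, broken down by cache object and object type
        SELECT  cacheobjtype,
                objtype,
                COUNT(*)                                     AS plan_count,
                SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS size_mb
        FROM    sys.dm_exec_cached_plans
        GROUP BY cacheobjtype, objtype
        ORDER BY size_mb DESC;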

  • David.Poole (1/31/2012)


    Christoph Muthmann (1/31/2012)


    Nice article, but I'm missing a mention of ad hoc queries and their influence on the proc cache. You can save a lot of memory by using the optimize for ad hoc workloads option.

    http://msdn.microsoft.com/en-us/library/cc645587.aspx

    Good catch, wish I'd known this 2 years ago! I've passed this one on to my colleagues.

    The worry now is that I'm going to hear "we don't have to fix this because there is a DB option". :crazy:

    The only thing with that is that you're still paying the query optimizer's compilation cost for each new statement; it just takes up less space in memory unless it's called again.

    It's worth mentioning that you see the same capitalization behavior if you start using plan guides. It comes down to the fact that a hash of the query text is computed first, and because SQL Server can be case sensitive, it doesn't normalize capitalization before computing the hash.
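
    A minimal sketch of that text-matching behavior (run each statement as its own batch; the queries here are arbitrary, they just need to differ only in case):

        SELECT name FROM sys.databases WHERE database_id = 1;
        GO
        select name from sys.databases where database_id = 1;
        GO

        -- Two separate ad hoc cache entries, one per capitalization
        SELECT  st.text, cp.usecounts, cp.size_in_bytes
        FROM    sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
        WHERE   st.text LIKE '%database_id = 1%'
          AND   st.text NOT LIKE '%dm_exec_cached_plans%';  -- exclude this query itself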

  • Great article, David. I was going to ask the same question about proc cache limits, but I have heard about the "stolen pages" concept too. So it might be that there is a limit, but keeping the proc cache smaller still gives the data cache more room.

    I think the "optimize for ad hoc workloads" option will reduce the proc cache: I believe it caches only a small stub of the plan at first and doesn't take up the full space until the usecount moves to 2. That keeps all those one-timers from filling it up (one way to check for those stubs is sketched at the end of this post).

    Entity Framework has exhibited some of the same problems as NHibernate. Good news: they have fixed a number of them in the latest version. What might also be worth researching is index usage. When the parameter data type doesn't match the data type of the column it's being compared to, I'm pretty sure SQL Server gives up on obvious index options. So be careful what data types the developers are giving the parameters.
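
    For the stub behavior mentioned above, a minimal sketch of how you could check it (with the option enabled, single-use ad hoc statements appear with a distinct cacheobjtype):

        -- With 'optimize for ad hoc workloads' on, first-time ad hoc statements are cached
        -- as 'Compiled Plan Stub' entries rather than full compiled plans
        SELECT  cacheobjtype,
                COUNT(*)                                  AS plan_count,
                SUM(CAST(size_in_bytes AS bigint)) / 1024 AS size_kb
        FROM    sys.dm_exec_cached_plans
        WHERE   objtype = 'Adhoc'
        GROUP BY cacheobjtype;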

  • pbarbin (1/31/2012)


    ...what might also be worth researching is index usage. When the parameter data type doesn't match the data type of the column it's being compared to, I'm pretty sure SQL Server gives up on obvious index options. So be careful what data types the developers are giving the parameters.

    That's due to implicit conversion, and whether or not it results in an index not being used properly comes down to data type precedence. If the index is on a type lower on the precedence list (such as varchar) and you pass in something higher (such as int), SQL Server will convert the value in every row from varchar to int before doing the comparison, resulting in a scan. If it's the other way around (index on int and you're passing in varchar), the implicit conversion happens on the parameter and you can still get a seek. Either way, it's best to make sure the data types match so you don't need to worry about implicit conversions at all.
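
    A minimal sketch of the two cases (the table, column and index names are made up for the example):

        -- Hypothetical table: order references stored as varchar, with an index on them
        CREATE TABLE dbo.Orders
        (
            OrderID  int         IDENTITY(1, 1) PRIMARY KEY,
            OrderRef varchar(20) NOT NULL
        );
        CREATE INDEX IX_Orders_OrderRef ON dbo.Orders (OrderRef);
        INSERT INTO dbo.Orders (OrderRef) VALUES ('12345');

        -- int outranks varchar in data type precedence, so every OrderRef value is
        -- converted to int before the comparison: the index can only be scanned
        DECLARE @ref_int int;
        SET @ref_int = 12345;
        SELECT OrderID FROM dbo.Orders WHERE OrderRef = @ref_int;

        -- Matching data type: no conversion on the column, so the index can be sought
        DECLARE @ref_chr varchar(20);
        SET @ref_chr = '12345';
        SELECT OrderID FROM dbo.Orders WHERE OrderRef = @ref_chr;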

  • ganci.mark (1/31/2012)


    Great article David. Thank you.

    I thought that there were limits imposed on the various memory components based on the total amount of memory given to SQL Server.

    For example: Proc Cache limits

    SQL Server 2005 SP2

    75% of server memory from 0-4GB +

    10% of server memory from 4GB-64GB +

    5% of server memory > 64GB

    So 14GB given to a SQL Server 2005 SP2 instance would yield a 4GB proc cache (75% of the first 4GB = 3GB, plus 10% of the remaining 10GB = 1GB).

    Is it correct to say that these limits do not exist and that the proc cache, for example, can consume additional memory?

    Thanks

    Mark

    This is my understanding as well.
