Clear cache after big job?

  • Hi everyone,

    We have several big jobs that execute a lot of SQL. These jobs run every night and take some time to finish.

    I suppose all those queries are then stored in the SQL Server cache, right?

    Also, we have a website which uses this SQL Server database.

    The possible problem is that the job queries and the website queries are different. The agent jobs are used on the back end to set up products, categories, brands, etc. The web pages use other procedures and other queries on the front end to present products, enable online shopping, etc.

    I am thinking, maybe it would be good to clear the cache after the jobs are finished? Maybe by using:

    DBCC FREEPROCCACHE WITH NO_INFOMSGS;

    I am worried that the cache is not used in the best way after the jobs are finished. During the day, I need it optimized only for website queries. I suppose the job queries take up some resources in the cache. As a consequence, the website would run slower?

    So, the idea is: during the night, after the jobs are finished, add clearing the cache as a last step. Then the website queries will rebuild the cache and the website will work as fast as possible during working hours. This way the job queries don't affect the cache - all of the cache belongs to the website queries.

    Please let me know your thoughts 🙂

    Regards

    I wouldn't suggest doing that. SQL Server manages the memory in cache just fine, and by clearing it you're removing any chance that pages needed by the night-time work which are also needed during the day would remain in cache.

    It's not queries that are cached, it's the raw data pages of the tables/indexes.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • I would agree with Gail. Let SQL Server manage this and don't clear your caches.

  • Thanks GilaMonster and Steve for clarifying!

    Jobs are done, night has passed, CPU usage is low, website is still fast. I guess it's all good. 😎

  • Wait just a sec - the OP asked about using DBCC FREEPROCCACHE but accepted Gail's and Grant's answers that apply to the buffer pool, so I'm confused. Boris, were you asking about clearing the plan cache or clearing the buffer pool?

    The plan cache holds execution plans that can be reused so that a new execution plan doesn't have to be created each time the same T-SQL is executed. Plans stay in cache until SQL Server needs space there and flushes plans based on age and/or amount of use (not sure exactly what criteria it uses at the moment - need to do some Googling on that) or until someone executes a command or stored proc that clears either the whole cache or a particular plan. Clearing the entire cache can cause a sudden, temporary decrease in query performance as SQL Server creates new execution plans for every query, but that performance hit only lasts until the cache is repopulated and the new execution plans can be reused. There's generally no reason to totally clear the plan cache - sometimes, it may be necessary to clear a single plan so that a better plan can be compiled the next time the query runs, but this should be done only after a thorough investigation of performance problems.
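    As an aside, the plan cache can be inspected, and a single plan removed, without flushing everything. A hedged T-SQL sketch (the plan_handle value in the DBCC call is a placeholder - substitute one returned by the first query):

```sql
-- List the most-reused cached plans with their sizes and query text.
SELECT TOP (20)
       cp.usecounts,
       cp.objtype,
       cp.size_in_bytes,
       cp.plan_handle,
       st.text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
ORDER BY cp.usecounts DESC;

-- Remove ONE plan from cache by its plan_handle instead of
-- clearing the whole cache. The handle below is a placeholder;
-- paste a real one from the query above.
DBCC FREEPROCCACHE (0x060006001ECA270EC0215D05000000000000000000000000);
```

    The targeted form of DBCC FREEPROCCACHE (with a plan_handle argument) is exactly the "clear a single plan" option mentioned above.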

    The buffer pool holds in memory the data and index pages that have been read from disk. When SQL Server needs a data or index page to execute a query, it looks for the page in the buffer pool first; if it can't find it there, it reads the page from disk and stores it in the buffer pool. Pages stay in the buffer pool until SQL Server needs space for additional pages. The logical read of a page from memory is much faster than the physical read of the page from disk. Keeping pages in memory as long as possible increases the chance that pages will be there when requested, providing a performance gain, especially for queries that access the same pages over and over. There's generally no reason to clear the buffer pool, either - as Gail and Grant said, SQL Server manages it well on its own.
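    If you want to see what is actually occupying the buffer pool, a sketch like this can help (each page is 8 KB, so the page count converts directly to megabytes):

```sql
-- Approximate buffer pool usage per database, in MB.
-- A NULL database_name row corresponds to the internal resource database.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_mb DESC;
```

    Running this right after the nightly jobs, and again during the day, would show whether the jobs really are displacing the website's pages.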

    So, Boris, the overnight jobs on your server may cause some slight performance degradation if the jobs force the plans for the web service queries out of the plan cache (which will be remedied when a new execution plan is compiled the next time the web service queries run) or if the overnight jobs force pages used by the web service queries out of the buffer pool (which will also be remedied when SQL Server reads the pages from disk into the buffer the next time the web service queries run). The temporary performance hit often is little more than a hiccup in well-designed databases executing efficient queries. It sounds like it didn't disrupt your processes very much at all - evidence that SQL Server generally knows its business very well!

    Jason Wolfkill

  • wolfkillj (8/16/2013)


    Wait just a sec - the OP asked about using DBCC FREEPROCCACHE but accepted Gail's and Grant's answers that apply to the buffer pool, so I'm confused. Boris, were you asking about clearing the plan cache or clearing the buffer pool?

    Well, I am not quite sure what I am asking. 🙂

    I am a programmer and not so familiar with SQL Server administration, but now I have to administer one SQL Server and I am trying my best.

    In short, there are nightly jobs which contain a lot of SQL code. Those queries are not used on the website; visitors use other queries. I am worried that after a job is done, the cache is not optimized for running website queries, because the job queries took up some part of the cache and thus there is less space for website queries.

    wolfkillj (8/16/2013)


    The plan cache holds execution plans that can be reused so that a new execution plan doesn't have to be created each time the same T-SQL is executed. [...] There's generally no reason to clear the buffer pool, either - as Gail and Grant said, SQL Server manages it well on its own.

    Hmm, not so sure now. The jobs are not the same every night. One night a job was really big, and after it was done the server was at 100% CPU.

    I don't know if this was cause and effect or just a coincidence. Maybe the 100% CPU was caused by something else, or the website worked slowly because of a messed-up cache.

    Anyway, after running DBCC FREEPROCCACHE, the CPU was fine again. :ermm:

  • Boris Pazin (8/16/2013)


    Hmm, not so sure now. The jobs are not the same every night. One night a job was really big, and after it was done the server was at 100% CPU.

    I don't know if this was cause and effect or just a coincidence. Maybe the 100% CPU was caused by something else, or the website worked slowly because of a messed-up cache.

    Anyway, after running DBCC FREEPROCCACHE, the CPU was fine again. :ermm:

    Hi Boris,

    DBCC FREEPROCCACHE clears the plan cache, which means that a new execution plan must be compiled for every query run on that instance. The state of the plan cache usually shouldn't affect CPU usage by any noticeable amount, unless the plan cache has been cleared or SQL Server is otherwise required to compile a lot of execution plans. That hit should be pretty short, though. Once plans are cached, subsequent queries will reuse them, reducing the load on server resources.

    If your CPU is being pegged at 100% for any length of time and that's causing problems, you'll need to find the culprit. Unfortunately, that's not always easy. The queries running on the server could be poorly written - the performance difference between a well-written query and a poorly written one can be staggering. There are different ways to write queries well and different ways to write queries poorly, and what works well in one situation may work poorly in another. The databases may lack indexes on certain tables that could boost the performance of the queries, but poorly conceived indexes can hurt performance rather than help. In many (maybe most) cases, performance tuning the database will alleviate pressure on server resources, but sometimes the sheer load on the server can be more than the hardware can handle.
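    To track down that culprit, one common starting point is to ask the server which cached queries have consumed the most CPU. A hedged sketch (total_worker_time is in microseconds, and the numbers only cover queries whose plans are still in cache):

```sql
-- Top queries by total CPU time since their plans were cached.
SELECT TOP (10)
       qs.total_worker_time / 1000 AS total_cpu_ms,
       qs.execution_count,
       qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
       -- Extract just the statement from the containing batch.
       SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                 ((CASE qs.statement_end_offset
                       WHEN -1 THEN DATALENGTH(st.text)
                       ELSE qs.statement_end_offset
                   END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

    Note the caveat: clearing the plan cache also wipes these statistics, which is one more reason not to run DBCC FREEPROCCACHE routinely.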

    If you find that you're dealing with performance problems often, I suggest you get Grant Fritchey's book [url=http://www.amazon.com/Server-2012-Query-Performance-Tuning/dp/1430242035]SQL Server 2012 Query Performance Tuning[/url] (there are earlier editions for earlier versions of SQL Server, too).

    You may be feeling overwhelmed - that's natural when you've been given responsibility for a SQL Server with no preparation. The SQL Server Central forums are a great place to get information and advice, and two of the three people who have replied to this thread are leading lights in the SQL Server community (yours truly being the odd man out!). Lots of us got here the same way you did - we found ourselves thrown into the water and had to learn to swim.

    Regards,

    Jason

    Jason Wolfkill

  • Boris Pazin (8/16/2013)


    In short, there are nightly jobs which contain a lot of SQL code. Those queries are not used on the website; visitors use other queries. I am worried that after a job is done, the cache is not optimized for running website queries, because the job queries took up some part of the cache and thus there is less space for website queries.

    As explained by Gail, Steve and Wolfkillj, it does not really work that way. The plans for the web queries will remain in cache, as long as there is space for them.

    However... If the jobs run the same queries as the web app, but with different parameters, it may be different. In that case you may end up with plans that are not optimal for the web application. Or it could be the other way round: the job runs with the plans compiled for the web queries.

    This is because of something known as "parameter sniffing": SQL Server looks at the parameter values to determine the best query plan. If the parameters change, this can backfire. Most likely, if you saw a benefit on your system with DBCC FREEPROCCACHE, you had a problem related to parameter sniffing - but whether that is due to the job or not is far from certain. It may not even be due to the queries in the job; it may be the fact that the job adds a lot of rows to some tables, and so on.

    Using FREEPROCCACHE to resolve such a problem is a bit of a sledgehammer. A better approach, in my opinion, would be to track down the problematic queries.
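    To make the "smaller hammer" concrete, here is a hedged sketch of two narrower options. The procedure and table names are hypothetical, purely for illustration:

```sql
-- Hypothetical procedure illustrating one parameter-sniffing fix:
-- OPTION (RECOMPILE) compiles a fresh plan for the actual parameter
-- value on every call, trading a little compilation CPU for a plan
-- that fits the current parameter.
CREATE PROCEDURE dbo.GetProductsByBrand
    @BrandId int
AS
BEGIN
    SELECT ProductId, Name, Price
    FROM dbo.Products
    WHERE BrandId = @BrandId
    OPTION (RECOMPILE);
END;
GO

-- Alternatively, mark just one procedure for recompilation on its
-- next execution (e.g. as the nightly job's final step), instead of
-- flushing the entire plan cache with DBCC FREEPROCCACHE.
EXEC sp_recompile N'dbo.GetProductsByBrand';
```

    Either approach confines the recompilation cost to the statements that actually suffer from a sniffed plan, rather than forcing every query on the instance to recompile.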

    [font="Times New Roman"]Erland Sommarskog, SQL Server MVP, www.sommarskog.se[/font]

  • Erland Sommarskog (8/16/2013)


    ... it may be the fact that the job adds a lot of rows to some tables, and so on.

    Maybe this could be the reason. Once per week, a job deletes all data from 16 big tables and writes them again.

    The biggest table has about 76 million rows. So, I guess deleting and re-creating 76M rows could be harmful for all queries which use that table?

  • Boris Pazin (8/17/2013)


    The biggest table has about 76 million rows. So, I guess deleting and re-creating 76M rows could be harmful for all queries which use that table?

    No, doing that is fine. That kind of volume of data changes would invalidate all plans, so they'd get recompiled the next time they run.

    Where you can have problems is with smaller data changes - around 10%, if they change the results for queries dramatically (e.g. http://sqlinthewild.co.za/index.php/2011/03/22/statistics-row-estimations-and-the-ascending-date-column/) - or when you run identical queries/procedures with different parameter values that result in radically different data volumes.
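    Since stale statistics are at the root of that row-estimation problem, one narrowly targeted alternative to clearing any cache is to refresh statistics on the reloaded tables as the job's final step, so daytime plans are compiled from accurate row estimates. A hedged sketch (the table name is hypothetical):

```sql
-- As a final job step, refresh statistics on a reloaded table so
-- subsequent plans are compiled from accurate row estimates.
UPDATE STATISTICS dbo.BigProductTable WITH FULLSCAN;

-- For a 76M-row table a sampled update is much cheaper, e.g.:
-- UPDATE STATISTICS dbo.BigProductTable WITH SAMPLE 10 PERCENT;
```

    Updating statistics also invalidates the cached plans that depend on that table, so they recompile on next use - the same effect Boris was after, without touching plans for unrelated queries.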

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Boris Pazin (8/17/2013)


    The biggest table has about 76 million rows. So, I guess deleting and re-creating 76M rows could be harmful for all queries which use that table?

    As Gail says, no.

    If I bring out my crystal ball, my guess is this: the web app maintains a cache of its own. In the wee hours of the morning, once the job has completed, it flushes that cache and runs queries that read all the data; during the day it runs queries that pick up only the recent changes. The plans compiled for reading everything are not good for getting just the updates.

    [font="Times New Roman"]Erland Sommarskog, SQL Server MVP, www.sommarskog.se[/font]
