Can we execute sp_updatestats during heavy transaction times?

  • Our database becomes very slow on our heavy-transaction days, reaching about 7,000 transactions within 2 hours, even though I update statistics at night using sp_updatestats. We have 2 especially heavy transaction days per month. When I run sp_updatestats around midday during heavy transaction time, the same query then executes in 1 second where it took about 2 minutes just before. Our database is about 50 GB, and sp_updatestats takes about 3-4 minutes to run.

    1. Can we execute sp_updatestats during heavy transactions while the database is performing slowly?

    2. Could you please suggest a better way to overcome slow performance during heavy transactions?

    Currently, our nightly maintenance plan backs up both the database and the log and updates statistics.

    Index reorganizes and index rebuilds are currently done about twice a month.

  • You should identify which stats need updating rather than updating all of them. You probably have some queries that perform poorly due to out-of-date stats, and you should start your investigation from those queries.
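    That investigation can start from the modification counters SQL Server keeps per statistics object. A sketch (sys.dm_db_stats_properties requires SQL Server 2008 R2 SP2 / 2012 SP1 or later):

    ```sql
    -- Find statistics with many row modifications since their last update.
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           s.name                   AS stats_name,
           sp.last_updated,
           sp.rows,
           sp.modification_counter
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE sp.modification_counter > 0
    ORDER BY sp.modification_counter DESC;
    ```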

    Take the query and look at the actual execution plan: do you see wildly inaccurate estimates? Then you need to understand where they come from. Do you have new data in the tables that is not accounted for? Maybe you should enable trace flags 2389 and/or 2390 to enable statistics branding (useful for ever-increasing columns).

    If you have huge tables with stats that are not updated due to the inability to reach the 20% threshold of modified rows, maybe enabling trace flag 2371 will help you (it enables an adaptive threshold for big tables).
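    If you decide to try those trace flags, they can be enabled globally like this (requires sysadmin; use the -T startup option to persist them across restarts):

    ```sql
    DBCC TRACEON (2371, -1); -- adaptive auto-update threshold for large tables
    DBCC TRACEON (2389, -1); -- brand ascending columns and adjust estimates for them
    DBCC TRACEON (2390, -1); -- do the same for columns branded "Unknown"
    ```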

    If you have stats that are not representative of the data unless based on a 100% sample, you should disable auto updating them by using NORECOMPUTE and set up a manual update process using a SQL Server Agent job.
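    A minimal sketch of that approach, using hypothetical table and statistics names:

    ```sql
    -- Refresh with a full scan and switch off automatic recomputation;
    -- dbo.BigTable and ST_BigTable_Col1 are placeholder names.
    UPDATE STATISTICS dbo.BigTable ST_BigTable_Col1 WITH FULLSCAN, NORECOMPUTE;
    ```

    The same command, scheduled in a SQL Server Agent job, becomes the manual update process.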

    You can read more about TF 2371, 2389 and 2390 here: https://www.simple-talk.com/sql/database-administration/statistics-on-ascending-columns/

    -- Gianluca Sartori

  • 1. Can you? Yes. Should you? Probably not. sp_updatestats is extremely undisciplined in which statistics it updates. Remember, when stats get updated, not only are you adding load to the system for the statistics update process, but, after that process completes, any plans in cache that reference those stats are going to be recompiled the next time they are referenced. That's going to add quite a bit of locking and resource contention, especially around your CPU.

    2. Gianluca has already made a number of great suggestions and I strongly support you following those. I'd add the idea of targeting statistics updates where needed using UPDATE STATISTICS, not the generic approach of sp_updatestats. You might also want to test whether or not setting your automatic update of statistics to async is helpful. That one is not a sure thing. It helps in some cases, it doesn't in others.

    Now, should you run the individual UPDATE STATS commands during high load on the system? Maybe. There's no hard and fast answer there. Experimentation and testing will be the key. I've done it to great success in some situations. It really depends on if the cost incurred from a targeted statistics update offsets the cost of bad execution plans causing poor performance. In many cases, it can, but not all.
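    The two suggestions above can be sketched as follows, with a hypothetical table and statistic name (the async setting applies per database):

    ```sql
    -- Targeted update of a single statistic instead of sp_updatestats.
    UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH FULLSCAN;

    -- Asynchronous auto-update: queries keep the current plan while
    -- statistics refresh in the background. Test before relying on it.
    ALTER DATABASE CURRENT SET AUTO_UPDATE_STATISTICS_ASYNC ON;
    ```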

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I was checking the statistics objects in my Microsoft SQL Server database and found some statistics with strange names. Could you please explain what the _WA_Sys... statistics are? Why are they created? Can I safely delete them? Do they differ from the other statistics objects?

    Would dropping and recreating them with the following command help boost performance?

    EXEC sp_createstats @fullscan = 'fullscan';
    -- Per the documented syntax, each option is activated by passing its own name
    -- ('indexonly', 'fullscan', 'norecompute'); omit the options you don't want.

    Could you please suggest?

  • Those stats are created automatically by SQL Server. If you want to replace them with custom statistics, you can do it.

    -- Gianluca Sartori

  • Dear Gianluca Sartori,

    Is it beneficial to drop and recreate them with more user-friendly names? Does it help the performance of insert, update, and select operations? Could you please suggest?

    The following post says it is beneficial for boosting performance:

    http://www.mssqltips.com/sqlservertip/2734/what-are-the-sql-server-wasys-statistics/

  • Those stats get created when someone does a filter operation like a WHERE clause or a JOIN on columns that don't have either an index or custom statistics. The optimizer will then create statistics. They're usually helpful. Generally, I never touch them. Could you remove them all just to see which ones get recreated? Sure. Will that help your system out? A tiny amount. It's actually not a good use of your time. I'd just leave them alone until you're looking to tune a query, then refer to them to see where people are referencing columns where you don't have indexes. It might prove beneficial then.
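    If you want to see which columns have auto-created statistics (and therefore might be candidates for indexes), something like this works:

    ```sql
    -- List auto-created (_WA_Sys_*) statistics and the columns they cover.
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           s.name                   AS stats_name,
           c.name                   AS column_name
    FROM sys.stats AS s
    JOIN sys.stats_columns AS sc
      ON sc.object_id = s.object_id AND sc.stats_id = s.stats_id
    JOIN sys.columns AS c
      ON c.object_id = sc.object_id AND c.column_id = sc.column_id
    WHERE s.auto_created = 1
    ORDER BY table_name, stats_name;
    ```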


  • I would find out which queries perform poorly and look at their execution plans both when they run well and when they run poorly. If no other tuning or indexes can help them run better, then you can run UPDATE STATISTICS on the specific tables those queries use. That way you are spending resources updating stats only on the tables that need it. The benefit to you is that the stats job will run quicker and consume less CPU.

    I found 4 tables in a specific database where that same scenario was happening. The indexes were good; the problem was that the stats became stale during certain data operations, so I put together a job that runs right after the mass data change to update stats on those specific tables. That fixed the issue.
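    Such a job step can be as simple as the following, with placeholder table names:

    ```sql
    -- Run right after the mass data change; table names are hypothetical.
    UPDATE STATISTICS dbo.StaleTable1 WITH FULLSCAN;
    UPDATE STATISTICS dbo.StaleTable2 WITH FULLSCAN;
    ```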

  • Gianluca Sartori / Grant Fritchey,
    There is a daily sp_updatestats run (on a database which is 4 TB) that is causing high CPU. To reduce the CPU, we have to perform UPDATE STATISTICS on 5 specific tables manually. The query causing high CPU after the sp_updatestats run has been identified: it performs an index scan instead of an index seek. Once UPDATE STATISTICS on the 5 tables is performed, the query performs an index seek. Why does sp_updatestats cause this issue while UPDATE STATISTICS does not? Shouldn't they both be doing the same thing?
    It is a high-OLTP system with 2.5 million transactions per hour and heavy reads/writes.
    How do we find a balance between sp_updatestats and UPDATE STATISTICS?
    We want to make sure the statistics in the database are up to date for optimal query optimization.
    Thoughts?

    Thanks!

    Prakash B

  • SQL ADMIN 79 - Saturday, February 24, 2018 6:54 AM


    You posted this to a thread that is 3 years old - it would be better if you posted this as a separate question.

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • SQL ADMIN 79 - Saturday, February 24, 2018 6:54 AM


    I agree with Jeffrey. You're going to get a lot more attention if you ask the question on a separate thread.

    The difference is that sp_updatestats always does sampled updates, not full scans. UPDATE STATISTICS offers more control. Sounds like that's what you need.
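    To illustrate the difference in control, with a hypothetical table name:

    ```sql
    EXEC sp_updatestats;  -- every modified statistic in the database, default sample

    -- One table, your choice of sample size.
    UPDATE STATISTICS dbo.HotTable WITH FULLSCAN;
    ```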

