November 10, 2011 at 6:12 am
Ninja's_RGR'us (11/10/2011)
I've pinged Gail for this one as she knows more about those internals. AFAIK, the lock taken by update stats is a schema lock, meaning you can't alter the object you're working on (like dropping it or changing column definitions).
Every single query that runs takes a schema stability lock (Sch-S). This is to ensure that the object can't change structure while being read (imagine the chaos if a column were dropped while a select was running).
These locks are compatible with every lock other than Sch-M (schema modification). Schema mod locks are only taken by statements that modify the definition of objects: ALTER TABLE, ALTER INDEX, etc.
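If you want to see this for yourself, the lock DMV shows the schema locks held while a query or stats update is running. A minimal sketch (run it in a second session while a long query executes):

```sql
-- Check schema locks currently held or requested.
-- Sch-S = schema stability (readers, stats updates);
-- Sch-M = schema modification (ALTER TABLE, ALTER INDEX).
SELECT resource_type,
       request_mode,
       request_status,
       request_session_id
FROM sys.dm_tran_locks
WHERE resource_type = 'OBJECT'
  AND request_mode IN ('Sch-S', 'Sch-M');
```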
In a normal prod environment with controlled change management, that will never cause any issues.
In any environment, unless ALTER TABLE statements are running every second, Sch-S locks cause no problems.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 10, 2011 at 6:15 am
Usman Butt (11/10/2011)
Seems a bit late, but I'm just returning from vacation and saw this as a pending notification. Can you please elaborate on the behind-the-scenes NOLOCK process? What if some rows are being inserted/deleted in the table while the stats are updating?
Doesn't matter. Stats are an inaccurate approximation of the data anyway
What if we try to change the data type of a column from varchar(100) to varchar(50)? Will these be blocked OR NOT?
Yes, because no concurrent reads can be done of any form while the table's definition is being changed.
I think NOLOCK may not be the right word to use, and some blocking (extra wait, as you said 🙂) may occur.
Nolock is exactly the right word to use. Stats updates run in the read uncommitted isolation level, which is completely equivalent to the NOLOCK locking hint.
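To illustrate the equivalence Gail describes, here's a sketch (the table name is a placeholder). Both reads behave the same way: neither takes shared row or page locks, and both still take a Sch-S lock:

```sql
-- Hint form: applies to one table reference in one query.
SELECT COUNT(*) FROM dbo.Orders WITH (NOLOCK);

-- Isolation-level form: applies to the whole session;
-- this is the mode a stats update reads under.
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT COUNT(*) FROM dbo.Orders;
```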
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 10, 2011 at 6:15 am
Tx for the help Gail.
November 10, 2011 at 6:23 am
Lexa (11/2/2011)
So basically you cannot "over" update statistics, right?
No, but there's a point of diminishing returns, where the gain from the new stats has no effect on the query plans other than forcing recompiles. That point's going to differ per object, if not per statistic. I've seen cases where a particular stat had to be updated (with fullscan) every hour. But that's the extreme case.
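For that extreme case, one might schedule a targeted refresh of just the problem statistic rather than the whole table. A sketch, where the table and statistic names are placeholders:

```sql
-- Hourly job step: refresh one known-volatile statistic with a full scan,
-- instead of rebuilding every statistic on the table.
UPDATE STATISTICS dbo.Orders IX_Orders_OrderDate WITH FULLSCAN;
```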
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 10, 2011 at 6:24 am
Ninja's_RGR'us (11/1/2011)
It's different from update stats with fullscan. It filters out some stats based on many conditions.
It excludes stats that have had 0 changes since the last update, but that's all
Also it doesn't update with full scan. There's another algorithm for that.
It can, depending on the parameters passed.
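For reference, the relevant parameter here is sp_updatestats's @resample option. A sketch of the two behaviors:

```sql
-- Skips statistics with zero modifications since the last update,
-- and samples at the default rate for the rest.
EXEC sp_updatestats;

-- Same filtering, but each statistic is refreshed using its previous
-- sample rate - which can be a full scan if that's how it was last built.
EXEC sp_updatestats @resample = 'resample';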
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 10, 2011 at 6:25 am
GilaMonster (11/10/2011)
Lexa (11/2/2011)
So basically you cannot "over" update statistics, right?
No, but there's a point of diminishing returns, where the gain from the new stats has no effect on the query plans other than forcing recompiles. That point's going to differ per object, if not per statistic. I've seen cases where a particular stat had to be updated (with fullscan) every hour. But that's the extreme case.
And Grant has a case where it's fullscan every 15 minutes. And that's the longest they can go between updates.
Here I do it once a day, and in 12 months I've never had a case where I had to redo them during the day. It's a small-ish db with 25 GB of data. But I guess that's kind of the norm for mid-sized companies' ERPs.
November 10, 2011 at 7:48 am
What do you guys and gals think about this statement from my boss:
"We get tons of new data quickly, so statistics become obsolete quickly; when that happens, the query optimizer waits for the statistics update to complete before compiling a new plan.
With auto_update_statistics_async on, the compiler won't wait; it will use the old plan, and subsequent query compiles will use the refreshed statistics."
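For reference, the setting she's describing is toggled per database like this (the database name is a placeholder):

```sql
-- Async auto-update only has an effect when auto-update itself is on.
ALTER DATABASE YourDatabase SET AUTO_UPDATE_STATISTICS_ASYNC ON;

-- Verify the current settings:
SELECT name,
       is_auto_update_stats_on,
       is_auto_update_stats_async_on
FROM sys.databases;
```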
Does this sound like a good thing for databases getting hundreds of rows a second that are also being reported on regularly?
Thanks,
Jared
Jared
CE - Microsoft
November 10, 2011 at 7:52 am
Maybe, but you'll have to test.
Have you monitored and seen queries waiting for stats updates? Compiles sitting waiting? Frequent auto-updates?
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 10, 2011 at 7:53 am
Yes, but I would consider 2 things.
How long to update the stats?
Also, why not schedule the stats updates in a job as frequency requires?
November 10, 2011 at 7:55 am
We should start dressing the same way! 🙂
November 10, 2011 at 8:01 am
GilaMonster (11/10/2011)
Maybe, but you'll have to test.
Have you monitored and seen queries waiting for stats updates? Compiles sitting waiting? Frequent auto-updates?
I have not personally monitored this; it was something my boss was working on, and she just let us know what she was trying. It seems to help a lot so far. We have millions upon millions of rows, and the brilliant developers are running ad hoc queries for these reports.
Also, I don't know how long it takes to update these stats, but I will ask her when she gets in and see if she knows. Then we may be able to schedule something on off hours. Of course, this database in particular does not really have a slow down unless you look at it relatively.
Thanks,
Jared
Jared
CE - Microsoft
November 10, 2011 at 8:04 am
If you haven't monitored, then you have no grounds for a decision either way. These kind of things shouldn't be guesses, they should be informed decisions based on solid data.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
November 10, 2011 at 8:10 am
GilaMonster (11/10/2011)
If you haven't monitored, then you have no grounds for a decision either way. These kind of things shouldn't be guesses, they should be informed decisions based on solid data.
As I am learning 🙂 Although I can't assume she didn't do the monitoring before changing this.
"One of my findings is that some fine-tuned queries sometimes slow down; they improve right away after I update the statistics."
Not very specific, but I will make sure to check with her, otherwise we could be causing more problems than we are fixing.
Thanks,
Jared
Jared
CE - Microsoft
November 10, 2011 at 8:15 am
Also Gail, could you point me in a direction or tell me how to test/monitor this? Would I do it for a specific query or look at wait stats on the server? Basically, how would YOU do it? That's the recipe I need so I can make my own bread and stop asking for it 🙂
Jared
Jared
CE - Microsoft
November 10, 2011 at 8:42 am
Start with the auto_stats events. See how often they really happen.
For queries affected by poor stats, normal monitoring and look for queries with erratic durations. Investigate and see if stats updates fix them (this takes time). For those that you've identified, schedule more frequent stats updates.
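One way to capture those auto_stats events is an Extended Events session; the classic alternative is the Auto Stats event in a Profiler/server-side trace. A sketch, where the session name and file path are placeholders:

```sql
-- Track automatic statistics updates server-wide.
CREATE EVENT SESSION [TrackAutoStats] ON SERVER
ADD EVENT sqlserver.auto_stats(
    ACTION (sqlserver.database_name, sqlserver.sql_text))
ADD TARGET package0.event_file (SET filename = N'auto_stats.xel');

ALTER EVENT SESSION [TrackAutoStats] ON SERVER STATE = START;
```

Let it run through a representative workload window, then count how often the events fire and against which objects before deciding whether async updates or scheduled jobs are warranted.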
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability