sp_updatestats Versus Update Statistics
Posted Thursday, September 20, 2012 5:00 AM
Wandrag
Hi,
After reading a couple of articles and forum suggestions, we decided to change our maintenance plans slightly.

Our initial plans looked something like this:
1. Rebuild all indexes with a fragmentation level above a certain threshold.
2. Run sp_updatestats.

And to be honest, it worked great...

But then I wanted to improve on this...
Now we reorganise the indexes that need reorganising and rebuild the indexes that need rebuilding, using the fragmentation thresholds suggested by various sources for when to rebuild versus when to reorganise.
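A minimal sketch of that kind of decision logic, assuming the widely quoted 5%/30% thresholds (the thresholds and names here are illustrative, not our actual job):

-- Hypothetical sketch: classify each index by fragmentation, using the
-- commonly cited 5% / 30% thresholds (tune these for your own system).
SELECT
    OBJECT_NAME(ips.object_id) AS TableName,
    i.name AS IndexName,
    ips.avg_fragmentation_in_percent,
    CASE
        WHEN ips.avg_fragmentation_in_percent >= 30 THEN 'REBUILD'
        WHEN ips.avg_fragmentation_in_percent >= 5  THEN 'REORGANIZE'
        ELSE 'leave alone'
    END AS SuggestedAction
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id  = ips.index_id
WHERE ips.index_id > 0;  -- ignore heaps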

The second step updates statistics with the FULLSCAN option, using the script below to check whether a table's statistics actually need updating.

-- Note: in the original job this is built as dynamic SQL (hence the doubled
-- quotes and concatenated parameters in my original post); shown here as
-- plain T-SQL with @tblName and @CurrentSchema as variables.
IF EXISTS (
    SELECT TOP 1 TableName
    FROM (
        SELECT
            ss.name AS SchemaName,
            so.name AS TableName,
            so.id AS ObjectId,
            st.name AS StatsName,
            si.rowmodctr AS RowModCtr,
            (SELECT CASE si2.rowcnt WHEN 0 THEN 1 ELSE si2.rowcnt END
             FROM sysindexes si2
             WHERE si2.id = si.id
               AND si2.indid IN (0, 1)) AS RowCnt  -- avoid divide-by-zero
        FROM sys.stats st
        JOIN sysindexes si ON st.object_id = si.id AND st.stats_id = si.indid
        JOIN sysobjects so ON so.id = si.id AND so.xtype = 'U' -- user tables only
        JOIN sys.schemas ss ON ss.schema_id = so.uid
        WHERE so.name = @tblName
          AND ss.name = @CurrentSchema
    ) stat
    WHERE CAST(stat.RowModCtr AS float) / CAST(stat.RowCnt AS float) * 100 >= 1 -- more than 1% of the rows have changed
)
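-- The guarded action isn't shown in the post; presumably it's the full-scan
-- update itself, run as dynamic SQL against the same table (hypothetical
-- reconstruction):
EXEC('UPDATE STATISTICS [' + @CurrentSchema + '].[' + @tblName + '] WITH FULLSCAN');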


This part works. It updates all the tables I expected it to update. (I keep track of what's being updated using a "StatsReport" table.)

The problem is, once this script finishes and all table statistics are updated (using FULLSCAN), the performance of some of my queries goes down the drain!
The CPU spikes up to 100%, and the system is all but dead!

So far I've isolated the one particular query that "hangs" after I've done the update statistics with full scan, causing the CPU to spike.
It's probably using an incorrect execution plan...

To resolve this issue, I first kill all the queries that are running.
Then run sp_updatestats.
Problem gone!

Well, until I run the update stats with full scan again. :-(

I thought that updating statistics with a full scan would be much better, since it doesn't use a sampled percentage of the data but builds the stats from the full set?

I'm running this same code on a couple of databases, but so far it's only this one DB, and this one proc, that's giving me issues.

I'm not sure how to give you sample data and code to test this one, as the proc uses at least 9-10 tables with millions of records...

But maybe you can explain to me why the update stats with full scan can do this?

Post #1361873
Posted Thursday, September 20, 2012 6:21 AM
Grant Fritchey
It sounds like you're running into a bad parameter sniffing issue on a few of your queries. Basically (and yes, there are loads of details I'm glossing over), sp_updatestats samples the data, while UPDATE STATISTICS WITH FULLSCAN does a complete read of the data. That makes the full scan much more accurate, but not necessarily better, depending on the query you're dealing with. The fact is, some queries work better with less specific plans and some work better with more specific plans. Identifying which is which when you hit a bad parameter sniffing issue leads, at least partially, to the appropriate solution.

In general, I'd say you want your statistics to be as accurate as possible. If that hurts some queries, then supply those queries with a solution such as OPTIMIZE FOR UNKNOWN or local variables, so that the plan is built from the averages in the statistics rather than from specific values against your stats.
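A minimal sketch of those two mitigations (the procedure, table, and column names are hypothetical, not from this thread):

-- Option A: OPTIMIZE FOR UNKNOWN - the optimizer builds the plan from the
-- density (average) information in the statistics, not the sniffed value.
CREATE PROCEDURE dbo.GetOrdersByCustomer @CustomerId int
AS
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @CustomerId
OPTION (OPTIMIZE FOR UNKNOWN);
GO

-- Option B: copy the parameter into a local variable - local variables
-- aren't sniffed at compile time, so the plan is likewise built from averages.
CREATE PROCEDURE dbo.GetOrdersByCustomerLocal @CustomerId int
AS
DECLARE @LocalCustomerId int = @CustomerId;
SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @LocalCustomerId;
GO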

Also, one point that's frequently missed: if you rebuild your indexes, that updates those indexes' statistics with a full scan. You don't need another statistics update after that. In fact, if you then run another, sampled, update, you're replacing those full-scan statistics with less accurate ones.
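To illustrate (the index and table names are made up):

-- Rebuilding an index refreshes that index's statistics with a full scan:
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;

-- Running a sampled update afterwards replaces those full-scan statistics
-- with sampled ones - the redundant step to avoid:
EXEC sp_updatestats;  -- default sampling; only touches stats with modifications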


----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #1361906
Posted Thursday, September 20, 2012 7:54 AM
Wandrag
Grant Fritchey (9/20/2012)
It sounds like you're running into a bad parameter sniffing issue on a few of your queries. Basically (and yes, there are loads of details I'm glossing over), sp_updatestats samples the data, while UPDATE STATISTICS WITH FULLSCAN does a complete read of the data. That makes the full scan much more accurate, but not necessarily better, depending on the query you're dealing with. The fact is, some queries work better with less specific plans and some work better with more specific plans. Identifying which is which when you hit a bad parameter sniffing issue leads, at least partially, to the appropriate solution.

In general, I'd say you want your statistics to be as accurate as possible. If that hurts some queries, then supply those queries with a solution such as OPTIMIZE FOR UNKNOWN or local variables, so that the plan is built from the averages in the statistics rather than from specific values against your stats.

Also, one point that's frequently missed: if you rebuild your indexes, that updates those indexes' statistics with a full scan. You don't need another statistics update after that. In fact, if you then run another, sampled, update, you're replacing those full-scan statistics with less accurate ones.


Thanks - will take a look at the parameter sniffing on this particular proc...

The whole reason I changed the code was that I don't need to update the stats after the index rebuild.
That's why the code in my previous post checks the number of records that changed since the last stats update. (Which in theory should be almost 0, since the stats were just refreshed by the index rebuild.)

thx!
Post #1361989
Posted Monday, October 22, 2012 2:34 AM
Wandrag
Hi,
I still have an issue with this particular procedure. :-(

This is what I've done to try to get the issue resolved:
1. Declared local variables and assigned the parameter values to them.
2. Added a WITH RECOMPILE option at the stored procedure level.
3. Added OPTION (OPTIMIZE FOR UNKNOWN) to the particular statement that's giving me issues.

(All 3 options are currently implemented)
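In simplified form, the proc with all three fixes in place looks something like this (illustrative names, not the real proc; see Grant's reply below on why these work against each other):

CREATE PROCEDURE dbo.ProblemProc @CustomerId int
WITH RECOMPILE                               -- fix #2: recompile every call
AS
DECLARE @LocalCustomerId int = @CustomerId;  -- fix #1: local variable

SELECT OrderId, OrderDate
FROM dbo.Orders
WHERE CustomerId = @LocalCustomerId
OPTION (OPTIMIZE FOR UNKNOWN);               -- fix #3: generic plan
GO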

This proc is called about 100 times each hour, so it's not a highly active procedure. (So I don't mind that it recompiles every time.)

I'm going to try the following to get this resolved:
The stored procedure consists of 10 different queries. Each query populates or updates a #temp table and/or a table variable.
(The code passed between multiple developers, so some used #temp tables, others table variables - more on that difference in the sketch below.)
I'm going to try to split each query into its own stored procedure, instead of this one massive proc.

If this doesn't work, I will convert the queries to dynamic SQL.

And if that doesn't work... ouch...
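A side note on the #temp table versus table variable mix: #temp tables have statistics, table variables don't, so the optimizer can only guess row counts for the latter. A quick illustration with made-up columns:

-- #temp tables get statistics, so the optimizer can estimate row counts:
CREATE TABLE #Work (CustomerId int, Amount money);
INSERT INTO #Work (CustomerId, Amount)
SELECT CustomerId, Amount FROM dbo.Orders;
SELECT CustomerId, SUM(Amount) FROM #Work GROUP BY CustomerId;
DROP TABLE #Work;

-- Table variables have no statistics; the optimizer assumes very few rows
-- (historically a fixed guess of 1), whatever they actually hold:
DECLARE @Work table (CustomerId int, Amount money);
INSERT INTO @Work (CustomerId, Amount)
SELECT CustomerId, Amount FROM dbo.Orders;
SELECT CustomerId, SUM(Amount) FROM @Work GROUP BY CustomerId;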

Any other ideas on what I might have missed here?

This only happens on production, not on QA. We have the same indexes/structures in both environments.
The only difference is that the overall workload/update volume on prod is much higher than on QA...

thx
Post #1375304
Posted Monday, October 22, 2012 5:08 AM
Grant Fritchey
Wandrag (10/22/2012)
Hi,
I still have an issue with this particular procedure. :-(

This is what I've done to try to get the issue resolved:
1. Declared local variables and assigned the parameter values to them.

This means that the parameters won't be sniffed and instead the average values from the statistics will be used. It's a method to get a "generic" plan as opposed to a specific plan.

2. Added a WITH RECOMPILE option at the stored procedure level.

This is a method of solving bad parameter sniffing by trying to get specific execution plans for specific values. It's basically at odds with what you did in step #1. And because of step #1, it does nothing to change the plans created, since your parameter values are not being sniffed. (With a statement-level recompile the local variables can be sniffed, but not when you set recompilation at the procedure level.)
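To make that distinction concrete (hypothetical names):

-- Procedure-level recompile: the proc recompiles on every call, but the
-- local variable is still not sniffed:
CREATE PROCEDURE dbo.DemoProcLevel @CustomerId int
WITH RECOMPILE
AS
DECLARE @Local int = @CustomerId;
SELECT OrderId FROM dbo.Orders WHERE CustomerId = @Local;
GO

-- Statement-level recompile: OPTION (RECOMPILE) compiles the statement at
-- execution time, so the current value of @Local is visible to the optimizer:
CREATE PROCEDURE dbo.DemoStmtLevel @CustomerId int
AS
DECLARE @Local int = @CustomerId;
SELECT OrderId FROM dbo.Orders WHERE CustomerId = @Local
OPTION (RECOMPILE);
GO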

3. Added OPTION (OPTIMIZE FOR UNKNOWN) to the particular statement that's giving me issues.

This is another way to try to get generic plans, like step #1. It's not needed if you're implementing step #1, and it also conflicts with the recompiles from step #2.


(All 3 options are currently implemented)

This proc is called about 100 times each hour, so it's not a highly active procedure. (So I don't mind that it recompiles every time.)

I'm going to try the following to get this resolved:
The stored procedure consists of 10 different queries. Each query populates or updates a #temp table and/or a table variable.
(The code passed between multiple developers, so some used #temp tables, others table variables.)
I'm going to try to split each query into its own stored procedure, instead of this one massive proc.

If this doesn't work, I will convert the queries to dynamic SQL.

And if that doesn't work... ouch...

Any other ideas on what I might have missed here?

This only happens on production, not on QA. We have the same indexes/structures in both environments.
The only difference is that the overall workload/update volume on prod is much higher than on QA...

thx


In general, you've attempted three different fixes to bad parameter sniffing instead of a single fix to see if it does what you need. I would look at either using Option 2 to get specific plans or Option 3 to get generic plans, but not all three.


----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #1375342
Posted Tuesday, October 23, 2012 2:29 AM
Wandrag
Hi Grant,
I tried them one at a time, then later different combinations of the three options above...
In the end I decided: screw it - try all three at once...

None of the options or combinations worked...
That's why I'm doing the rewrite.

thanks.
Post #1375900