October 28, 2013 at 5:25 am
Hi
I'm having performance issues with a particular query on SQL Server 2008. The query is like this:
exec sp_executesql N'
select distinct top XXX
    <8 non-calculated fields selected>
from
    Table1
    join Table2 on (Table1.P5 = Table2.P5 and Table2.P2 = @p2)
where
    Table1.P1 >= @p1 and Table1.P3 is not null
order by
    Table1.P4 asc,
    Table1.P5 asc',
N'@p1 datetime, @p2 uniqueidentifier',
@p1 = '2013-08-26 22:00:00',
@p2 = 'E9818E45-C3CE-E111-8A7A-005056BB0042'
The query plan generated when run in SSMS is attached.
Table1 contains just over 100,000 rows and Table2 just over 3.1 million rows. Certainly nothing major. All indexes have recently been reorganized.
My problem is that the query runs very frequently, and more often than not its CPU time (as seen in SQL Server Profiler) is > 1,000 ms, sometimes more than 20,000 ms!
What i've tried so far (all to no effect):
1. I've taken about 20 of the queries straight from SQL Server Profiler and run them in SSMS; they all complete very quickly (< 3 seconds total)
2. Ran DBCC FREEPROCCACHE to clear the plan cache
3. Reorganized all indexes with fragmentation > 5% (no index of any size was more than 30% fragmented)
4. Disk latency as seen in perfmon looks to be ok
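Regarding step 2: flushing the whole plan cache is a big hammer, and on SQL Server 2008 you can evict just the one suspect plan. A sketch (the LIKE pattern is an assumption; adjust it to match your statement text):

```sql
-- Find the cached plan for the suspect statement.
SELECT qs.plan_handle, qs.execution_count,
       qs.total_worker_time / 1000 AS total_cpu_ms,  -- total_worker_time is in microseconds
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.text LIKE N'%distinct top%Table1%';

-- Then evict only that plan (substitute the plan_handle returned above):
-- DBCC FREEPROCCACHE (0x0600...);  -- placeholder handle, not a real value
```

This keeps the rest of the cache warm, so only the problem query pays a recompile.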
I'm baffled and my usual "toolbox" is all used up. Any help is appreciated...
October 28, 2013 at 10:28 am
Do you get the same plan when run through the application?
Do you always get the same plan when using different parameters?
How different are the estimated versus actual row counts for each operator?
It may be parameter sniffing that is causing the query to perform poorly via the application.
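If it does turn out to be parameter sniffing, two common mitigations are query hints appended to the statement inside the sp_executesql batch. A sketch (the `...` stands for the original query text):

```sql
-- Option 1: recompile on every execution, so each parameter set gets
-- its own plan (adds compile CPU to every run):
... OPTION (RECOMPILE);

-- Option 2: always compile for a representative value:
... OPTION (OPTIMIZE FOR (@p2 = 'E9818E45-C3CE-E111-8A7A-005056BB0042'));

-- Option 3 (SQL Server 2008+): compile against average density,
-- ignoring the sniffed runtime values entirely:
... OPTION (OPTIMIZE FOR UNKNOWN);
```

Which one fits depends on whether a single "typical" plan serves most parameter values.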
October 28, 2013 at 10:29 am
Check the properties of each index seek in the actual execution plan and see how often each is running. It takes a whole lot longer and uses a heck of a lot more resources to do (for example) 100,000 index seeks than it does to do a single scan on a 100,000 row table.
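A quick way to quantify this, alongside reading the "Number of Executions" property in the actual plan, is to capture per-table logical reads and CPU time for the statement:

```sql
-- Per-table logical/physical reads, plus CPU and elapsed time:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- ... run the query here ...

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- If logical reads are enormous relative to table size, the plan is
-- likely doing (for example) 100,000 index seeks where one scan of a
-- 100,000-row table would be far cheaper.
```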
--Jeff Moden
Change is inevitable... Change for the better is not.
October 29, 2013 at 3:28 am
Hi guys!
Thanks for the input, it appears that a simple UPDATE STATISTICS did the trick.
October 29, 2013 at 8:21 am
mbn 46383 (10/29/2013)
Thanks for the input, it appears that a simple UPDATE STATISTICS did the trick.
A statistics update invalidates the cached query plans that reference the tables whose stats were updated. I'm guessing that if it was purely statistics, you would have seen similarly poor performance when running the query in SSMS.
If you had a "bad plan" due to parameter sniffing, then updating the stats invalidated that plan, and the next time the query ran a new plan was compiled. That new plan may have been compiled with parameter values that produce a plan optimal for the majority of executions. If it was a bad plan due to parameter sniffing, the issue may return.
Check out this post: http://www.sqlsoldier.com/wp/sqlserver/misunderstoodandabusedupdatestatisticscommand
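To see whether stale statistics were plausible in the first place, you can check when each stats object was last updated. A sketch (the table name `dbo.Table1` stands in for your actual table):

```sql
-- Last update date for every statistics object on the table:
SELECT s.name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.object_id = OBJECT_ID(N'dbo.Table1');

-- And, if needed, a full-scan refresh rather than the default sample:
-- UPDATE STATISTICS dbo.Table1 WITH FULLSCAN;
```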