
Slow running query
Posted Friday, February 14, 2014 4:18 AM
SSC Rookie


Group: General Forum Members
Last Login: Friday, October 17, 2014 7:55 AM
Points: 40, Visits: 389
Hi

Following on from my query about tempdb allocation contention, I thought I'd follow up with more detail in case anyone has any pointers.

We've a complex overnight process: a job with 65 steps. One step, which starts at 4am, can take (usually takes!) 25 minutes, but occasionally takes 90-120 minutes. There's no consistency as to when the time extends, and nothing else is happening SQL-wise at the same time.

The step itself inserts 138 rows into a temp table, then does an insert from a select. The select is complex (10 inner joins, with a couple of INs and CASEs thrown in). On top of that, it does one SUM, grouped on a further 26 columns, so tempdb will be hit a lot.

The execution plan shows a subtree cost of 410 and suggests one missing index, albeit with a potential saving of only 14. The resulting dataset/insert is 6.5m rows.

I can run the select during the day - at any point - and it takes around 09:48, so I don't think I can save much on the select (but I'm willing to be convinced!).

Tempdb now has 8 data files on its own disc, which is in a group of 128 x 15k discs. According to Spotlight there are I/O issues at 4-5am, but nothing I'd call significant, and when I run the script during the day there are no issues at all.

Any ideas? Could it be a VM issue - backups of the servers, say? Something else? I'm really running out of ideas.

Many thanks

pete

Post #1541535
Posted Friday, February 14, 2014 6:00 AM


SSChampion


Group: General Forum Members
Last Login: Today @ 3:26 PM
Points: 13,871, Visits: 28,266
It sure sounds to me like the query could be tuned, but as to the long running stuff, I'd assume contention on resources. While it's running, have you collected wait statistics? Have you watched the server during this period to see if that process is blocked by other resources? Just because not much is happening doesn't mean absolutely nothing is happening, and this sounds like a resource contention issue.
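For the "is it blocked" check, a minimal sketch of what can be run while the step is executing (assumes VIEW SERVER STATE permission; everything else is a placeholder):

    -- What is each active request doing right now, and is anything blocking it?
    SELECT r.session_id,
           r.status,
           r.command,
           r.wait_type,
           r.wait_time             AS wait_time_ms,
           r.last_wait_type,
           r.blocking_session_id,
           DB_NAME(r.database_id)  AS database_name,
           t.text                  AS running_sql
    FROM   sys.dm_exec_requests AS r
    CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
    WHERE  r.session_id <> @@SPID;   -- ignore this monitoring query itself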

----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server Query Performance Tuning
SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #1541563
Posted Friday, February 14, 2014 6:19 AM
SSC Rookie


Group: General Forum Members
Last Login: Friday, October 17, 2014 7:55 AM
Points: 40, Visits: 389
I've written a few scripts - tempdb is obviously being hit quite heavily, with a massive I/O spike across all 8 data files at that particular point. The top queries at that time - or rather just beforehand - are pretty small and shouldn't really be causing an issue. Page life expectancy is still over an hour, CPU is only 27%, and there are no blocks; the only red in Spotlight (vainly trusting technology!) is the I/O waits on tempdb.

As I say, there doesn't appear to be anything SQL-wise hitting the system (though, surprisingly, there are a few I/O waits for other DBs on which nothing official is happening).

Looking at the Windows side of Spotlight - again, nothing; solidly green throughout the period in question.

Hence my thought that it may well be a SAN issue rather than SQL.
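For reference, a minimal sketch of a file-level check on tempdb I/O latency (sys.dm_io_virtual_file_stats is cumulative since instance startup, so the delta between two snapshots taken around the 4am window is what actually matters):

    -- Per-file I/O and stall figures for tempdb; io_stall_* is total time spent waiting on the I/O subsystem
    SELECT mf.name                                            AS file_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_stall_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_stall_ms
    FROM   sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS vfs
    JOIN   sys.master_files AS mf
           ON  mf.database_id = vfs.database_id
           AND mf.file_id     = vfs.file_id;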
Post #1541564
Posted Friday, February 14, 2014 6:25 AM


SSC-Forever


Group: General Forum Members
Last Login: Today @ 3:09 PM
Points: 40,172, Visits: 36,560
Can you monitor execution plans (via extended events), get a plan from a fast execution and a plan from a slow execution?
Can you monitor/record wait stats for the query during its execution (poll sys.dm_os_waiting_tasks, or use extended events) and see what the query is waiting on when it executes fast versus what it's waiting on when it executes slow?
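A minimal sketch of both suggestions, with placeholder names and paths. The plan-capture event assumes SQL Server 2012 or later and is expensive, so keep the session tightly filtered and drop it once the comparison is done:

    -- 1) Capture actual plans via extended events while the job step runs
    CREATE EVENT SESSION CapturePlans ON SERVER
    ADD EVENT sqlserver.query_post_execution_showplan (
        ACTION (sqlserver.sql_text, sqlserver.session_id)
        WHERE (sqlserver.database_name = N'MyDatabase'))        -- placeholder database name
    ADD TARGET package0.event_file (SET filename = N'C:\Temp\CapturePlans.xel');
    ALTER EVENT SESSION CapturePlans ON SERVER STATE = START;

    -- 2) Record what the step is waiting on, sampled every 30 seconds
    CREATE TABLE dbo.WaitSnapshots (
        capture_time         datetime2      NOT NULL,
        session_id           int            NOT NULL,
        wait_type            nvarchar(60)   NULL,
        wait_duration_ms     bigint         NULL,
        blocking_session_id  int            NULL,
        resource_description nvarchar(3072) NULL);

    WHILE 1 = 1
    BEGIN
        INSERT INTO dbo.WaitSnapshots
        SELECT SYSDATETIME(), wt.session_id, wt.wait_type,
               wt.wait_duration_ms, wt.blocking_session_id, wt.resource_description
        FROM   sys.dm_os_waiting_tasks AS wt
        WHERE  wt.session_id > 50;        -- user sessions only
        WAITFOR DELAY '00:00:30';
    END;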



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1541568
Posted Friday, February 14, 2014 6:31 AM
SSC Rookie


Group: General Forum Members
Last Login: Friday, October 17, 2014 7:55 AM
Points: 40, Visits: 389
Good idea - I'll set up an XEvents session today and see if it can tell me something.

Course, bound to be speedy tonight.

pete
Post #1541570
Posted Friday, February 14, 2014 9:46 AM


SSCarpal Tunnel


Group: General Forum Members
Last Login: Today @ 6:56 PM
Points: 4,400, Visits: 6,257
Not much to go on here, but the first thing I would do is put OPTION (RECOMPILE) on that big mess of a query. If that isn't a repeatable magic bullet to keep you at the 25-minute run time, then you will likely have to go in and break the big query down into one or more temporary-table-based (NOT table variable) intermediate result sets. I have done this over and over for clients, often picking up 3-5 orders of magnitude in performance PLUS gaining consistency in the execution times. You may wish to do this anyway, because that 25 minutes may only need to be 3 minutes!
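A minimal sketch of the pattern, with hypothetical table and column names standing in for the real ones:

    -- Stage a stable intermediate result in a temp table (not a table variable),
    -- so it carries its own statistics and the optimiser sees real row counts.
    IF OBJECT_ID('tempdb..#stage') IS NOT NULL DROP TABLE #stage;

    SELECT o.CustomerId,
           o.ProductId,
           SUM(o.Qty * o.Price) AS LineTotal
    INTO   #stage
    FROM   dbo.Orders      AS o                    -- hypothetical tables
    JOIN   dbo.OrderStatus AS s ON s.OrderId = o.OrderId
    WHERE  s.StatusCode IN ('A', 'B')
    GROUP BY o.CustomerId, o.ProductId;

    -- Optional: index the temp table only if the next join actually benefits; test both ways.
    CREATE CLUSTERED INDEX cx_stage ON #stage (CustomerId, ProductId);

    -- Final insert, recompiled with the real row counts now sitting in #stage.
    INSERT INTO dbo.Summary (CustomerId, ProductId, LineTotal)
    SELECT st.CustomerId, st.ProductId, st.LineTotal
    FROM   #stage        AS st
    JOIN   dbo.Customers AS c ON c.CustomerId = st.CustomerId
    OPTION (RECOMPILE);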

Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #1541659
Posted Friday, February 14, 2014 9:52 AM
SSC Rookie


Group: General Forum Members
Last Login: Friday, October 17, 2014 7:55 AM
Points: 40, Visits: 389
Funnily enough, I've been thinking along the same lines, but the joins are so complex - with the tables at the bottom of the script joining four or five others further up - that it may be tricky.

What I've been thinking about this afternoon is the insert part of the script. Trying to insert 6m rows into a table that has had its indexes rebuilt in the step before seems ridiculous; one of them has a fill factor of 0/100. The longer-serving members of staff insist the indexes are needed, as the main table referenced in the select part of the insert is the same table that gets inserted into!

I'll do some checking next week, with index hints, to see whether the performance lost on the select by removing the indexes is gained back by inserting into a heap with none.
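A minimal sketch of the disable/rebuild variant worth testing alongside the heap idea (hypothetical index and table names; only nonclustered indexes should be disabled, since disabling a clustered index makes the table unreadable):

    -- Disable the nonclustered index so the 6m-row insert doesn't maintain it row by row
    ALTER INDEX ix_Summary_Customer ON dbo.Summary DISABLE;

    -- ... the big INSERT ... SELECT runs here ...

    -- Rebuild afterwards (this re-enables the index); often cheaper than maintaining it during the load
    ALTER INDEX ix_Summary_Customer ON dbo.Summary REBUILD WITH (FILLFACTOR = 100);

    -- Caveat: if the select part of the insert reads from dbo.Summary itself and relies on
    -- that index, disabling it may slow the read side; measure both halves.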
Post #1541660
Posted Friday, February 14, 2014 10:48 AM


SSCarpal Tunnel


Group: General Forum Members
Last Login: Today @ 6:56 PM
Points: 4,400, Visits: 6,257


It is likely because the query is so complex that you really need to break it down. Small anomalies in row-count estimates get compounded into gross mismatches, leading to suboptimal plans and significantly longer runtimes and total effort.

I just posted to another thread on SSC.com about how rarely I have found indexes on temp tables to be more efficient for the entire process than going without them. I also question, as you do, having the indexes on the table in the first place before putting in the millions of rows of data.


Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #1541682
Posted Thursday, February 27, 2014 9:48 AM
SSC Rookie


Group: General Forum Members
Last Login: Friday, October 17, 2014 7:55 AM
Points: 40, Visits: 389
In case anyone's still interested (!), I've pretty much worked out what the issue was. After fully dissecting the code (as I say, there's a lot: 65 steps, with some of the SPs running to 2k lines), I found that one of the main tables was having 38m rows (out of 41m) deleted and then reinserted, having been recalculated.

Obviously the SP that referenced the [now completely new] table had been optimized for different data. Adding a recompile - either WITH RECOMPILE on the procedure or OPTION (RECOMPILE) on the statement - to the complicated ones does seem to have made all the difference. It also explains why the scripts ran so fast when I was taking the SPs apart.
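For reference, a minimal sketch of the two placements, with hypothetical procedure and table names:

    -- Procedure-level: recompile the whole procedure on every call
    CREATE PROCEDURE dbo.usp_RebuildSummary
    WITH RECOMPILE
    AS
    BEGIN
        INSERT INTO dbo.Summary (CustomerId, ProductId, LineTotal)
        SELECT CustomerId, ProductId, SUM(Qty * Price)
        FROM   dbo.Orders
        GROUP BY CustomerId, ProductId;
    END;
    GO

    -- Statement-level: cheaper, recompiles only the statement that is sensitive
    -- to the freshly deleted and reloaded data
    INSERT INTO dbo.Summary (CustomerId, ProductId, LineTotal)
    SELECT CustomerId, ProductId, SUM(Qty * Price)
    FROM   dbo.Orders
    GROUP BY CustomerId, ProductId
    OPTION (RECOMPILE);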

So far all seems okay. We're four days into the new regime and the job that had been taking up to 9.5 hours is now back under 7. Amazing difference.

Course, there are still savings to be made in other areas.

Thanks all though

Post #1545979
Posted Thursday, February 27, 2014 9:52 AM


SSCoach


Group: General Forum Members
Last Login: Today @ 4:50 PM
Points: 17,807, Visits: 15,727


Congrats - good to hear it is better.




Jason AKA CirqueDeSQLeil
I have given a name to my pain...
MCM SQL Server, MVP


SQL RNNR

Posting Performance Based Questions - Gail Shaw
Post #1545981