July 17, 2013 at 9:19 am
Hi
Late in the afternoon I had a support ticket raised for 'the app': it had been running slow from around 9:30am until 11:00am.
I checked the Machine: Processor Time counter using SQL Monitor and, sure enough, the processor was pegged at 80-90% usage for the period when 'the app' was running slow.
I have used the 'back in time' feature of SQL Monitor, but all I can see query-wise is the SQL Monitor queries themselves.
Can you use the default trace to look at what may have caused this peak in CPU, and if so, how do I go about it?
Thanks
PJ:-)
July 17, 2013 at 9:38 am
No, not the default trace; you'd need to use the DMVs instead.
The default trace only tracks DDL or database setting changes - create/drop, alter, stuff like that; nothing related to data access or server performance.
Possibly, you could look at the top x highest-I/O queries that exist in the plan cache; it might help track down the potential pain points:
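For reference, you can see for yourself what the default trace captures by reading the trace file directly - a sketch, assuming the default trace is still enabled (it is on by default); the events that come back are object and configuration changes, not query performance data:

-- Read the current default trace file and show what kinds of events it holds
SELECT  te.name AS EventName,
        t.DatabaseName,
        t.ObjectName,
        t.LoginName,
        t.StartTime
FROM    sys.traces s
CROSS APPLY fn_trace_gettable(s.[path], DEFAULT) t
JOIN    sys.trace_events te ON t.EventClass = te.trace_event_id
WHERE   s.is_default = 1
ORDER BY t.StartTime DESC;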
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Top 20 cached queries by total logical I/O
SELECT TOP 20
    [Total IO] = (qs.total_logical_reads + qs.total_logical_writes),
    [Average IO] = (qs.total_logical_reads + qs.total_logical_writes)
                   / qs.execution_count,
    qs.execution_count,
    -- carve the individual statement out of the parent batch text
    SUBSTRING(qt.text, (qs.statement_start_offset / 2) + 1,
        ((CASE WHEN qs.statement_end_offset = -1
               THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
               ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS [Individual Query],
    qt.text AS [Parent Query],
    DB_NAME(qt.dbid) AS DatabaseName,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY [Total IO] DESC;
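Since the symptom in this case was CPU rather than I/O, a variant of the same query ordered by worker time may get you closer to the culprit - a sketch using the same DMVs (total_worker_time is reported in microseconds, hence the division):

-- Top 20 cached queries by total CPU consumed
SELECT TOP 20
    qs.total_worker_time / 1000 AS [Total CPU (ms)],
    (qs.total_worker_time / qs.execution_count) / 1000 AS [Avg CPU (ms)],
    qs.execution_count,
    qs.last_execution_time,
    qt.text AS [Parent Query],
    DB_NAME(qt.dbid) AS DatabaseName,
    qp.query_plan
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;

Bear in mind the cache only shows plans still in memory, so a query that ran at 9:30am may have aged out by the time you look.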
Lowell
July 18, 2013 at 9:50 am
OK, thanks for the reply - I will give that a try. Thank you
🙂