October 15, 2008 at 4:48 am
Checking with DBCC OPENTRAN will show you the oldest active transaction, which is often the one behind long blocking chains.
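For reference, a minimal example (the database name is just a placeholder):

DBCC OPENTRAN; -- oldest active transaction in the current database
DBCC OPENTRAN ('YourDatabase'); -- or target a specific database

The output includes the SPID and start time of the oldest open transaction, which you can then cross-reference against sp_who2.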
October 15, 2008 at 5:21 am
Use sp_who2 and the DBCC commands to investigate the issue.
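For example, a quick spot check along these lines (the SPID value 55 is a placeholder; use one from the sp_who2 output):

EXEC sp_who2; -- look at the BlkBy, CPUTime and DiskIO columns

-- then inspect the last statement sent by a suspicious session
DBCC INPUTBUFFER (55);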
October 15, 2008 at 6:19 am
Since your server is stressed, you'll want to be judicious about using this, but I'd suggest placing a trace on the server to capture the queries that are being called. Yes, doing spot checks with sp_who2 will likely show you which process is causing all the blocks, but that can change over time. Getting a good set of the actual queries run against your server, the order they're called in, and their cost in time, CPU, and I/O will better enable you to troubleshoot the root problems.
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
- Theodore Roosevelt
Author of:
SQL Server Execution Plans
SQL Server Query Performance Tuning
October 15, 2008 at 7:13 am
You could set a minimum duration in Profiler/Trace, grabbing everything that takes longer than, say, 10 seconds. That would help.
October 15, 2008 at 7:16 am
Something even stranger: when I run a SQL Profiler trace, as soon as I press 'Run' to start the trace, the CPU on the server drops down to around 30-40%, but then all applications are totally unusable. When I then pause or stop the trace, the CPU instantly jumps back up to 100%...
Any ideas?
Oraculum
October 15, 2008 at 8:29 am
Yikes. That doesn't sound good.
First, don't use Profiler on the production server; set up a server-side trace instead. Second, follow Steve's advice and put a filter on the trace so that it captures only events above a certain duration, CPU threshold, or similar. Third (general advice for any trace), minimize the data collected: capture only the events you need, usually SQL:BatchCompleted and RPC:Completed, and only the columns you need. Finally, again as general trace advice, output to a file, not to a table, and roll the file over every 50 MB or so.
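A rough sketch of such a server-side trace (the file path and thresholds are placeholders; on SQL 2005 the trace Duration column is reported in microseconds):

DECLARE @TraceID int, @maxsize bigint, @duration bigint, @on bit;
SET @maxsize = 50;          -- roll over every 50 MB, as suggested above
SET @duration = 10000000;   -- 10 seconds, in microseconds
SET @on = 1;

-- Option 2 = TRACE_FILE_ROLLOVER; the .trc extension is appended automatically
EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\HighCPU', @maxsize;

-- Capture only RPC:Completed (10) and SQL:BatchCompleted (12),
-- and only the columns needed: TextData (1), SPID (12), Duration (13), CPU (18)
EXEC sp_trace_setevent @TraceID, 10, 1,  @on;
EXEC sp_trace_setevent @TraceID, 10, 12, @on;
EXEC sp_trace_setevent @TraceID, 10, 13, @on;
EXEC sp_trace_setevent @TraceID, 10, 18, @on;
EXEC sp_trace_setevent @TraceID, 12, 1,  @on;
EXEC sp_trace_setevent @TraceID, 12, 12, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 18, @on;

-- Filter: Duration (column 13) >= @duration (operator 4 = greater than or equal)
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration;

-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1;

Note that sp_trace_create and sp_trace_setfilter are strict about parameter types, which is why the sizes and thresholds are passed as declared bigint variables rather than literals.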
October 15, 2008 at 8:48 am
If you're having problems with trace, you can try querying the procedure cache. It should pick up the most expensive queries.
select total_worker_time/execution_count/1000 as AverageCPUTime,
       total_elapsed_time/execution_count/1000 as AvgDuration,
       st.text
from sys.dm_exec_query_stats qs
cross apply sys.dm_exec_sql_text(qs.sql_handle) st
where (total_worker_time/execution_count) > 10000000 -- 10 seconds in microseconds
order by total_worker_time/execution_count desc -- most expensive first
Change the where clause if you want to see queries that used less than 10 seconds of CPU time.
Gail Shaw
Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability
October 15, 2008 at 8:48 am
I'm so stupid...
This is 2005 right? Toss trace. Run a query against sys.dm_exec_query_stats. This is a dynamic management view that contains information about the queries currently in cache. You can get everything you need, real time, without having to use a trace (trace is still useful, but in this instance, it isn't).
Here's an example script to get you started; you can look up the various DMVs and DMFs used to get more:
SELECT qs.sql_handle, qs.plan_handle, qs.execution_count, st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
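A slightly fuller variant along the same lines (the TOP count and column choices are just suggestions) also pulls the cached execution plan for each statement:

SELECT TOP (20)
       qs.execution_count,
       qs.total_worker_time / qs.execution_count AS avg_cpu_us,
       qs.total_elapsed_time / qs.execution_count AS avg_duration_us,
       st.text AS query_text,
       qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY qs.total_worker_time DESC;

The query_plan column is XML; clicking it in Management Studio opens the graphical plan, which is handy for spotting scans and bad joins in the worst offenders.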
October 15, 2008 at 8:50 am
OK. That's twice. It should stop now.
October 15, 2008 at 8:57 am
Grant Fritchey (10/15/2008)
OK. That's twice. It should stop now.
:hehe: 😀 Great minds think alike?
I'll go do something else for a while...
October 15, 2008 at 8:58 am
Basic question, but have you established that it is coming from SQL Server?
In other words, have you used Task Manager, Process Explorer, or some equivalent to look at the system and see which process(es) are taking up the most CPU?
-- RBarryYoung, (302)375-0451 blog: MovingSQL.com, Twitter: @RBarryYoung
Proactive Performance Solutions, Inc. "Performance is our middle name."
October 15, 2008 at 9:00 am
GilaMonster (10/15/2008)
Grant Fritchey (10/15/2008)
OK. That's twice. It should stop now.
:hehe: 😀 Great minds think alike?
I'll go do something else for a while...
Normally, my response is for the other party to stay out of my head, but considering the circumstances... I'll stay out of yours from now on. Apologies.
😀
October 15, 2008 at 9:27 am
You can also check to see which SQL processes are consuming the most CPU:
Select * From
    (Select DateDiff(ss, Login_Time, GetDate()) as ConnectTime,
            Cast(Cast(CPU as Float)/1000000 as Numeric(12,3)) as CpuSec,
            Cast((Cast(CPU as Float)/10000)
                 / DateDiff(ss, Login_Time, GetDate()) as Numeric(10,6)) as [Cpu%],
            *
     From master..sysprocesses
     -- don't bother if connected less than 10 seconds:
     Where DateDiff(ss, Login_Time, GetDate()) > 10
    ) P
Order By [Cpu%] DESC
October 15, 2008 at 10:15 am
In reply to the above: yes, Task Manager and Perfmon both show SQL Server at around 96-99%, with the remaining 1% or so going to the other services running.
I use Idera, so I can see the proc cache contents, but I'm unsure what I'm looking for.
We have had to fail over this afternoon, and the CPU is still erratic but not so critical. I will see if this happens again tomorrow as well!
Oraculum
October 15, 2008 at 10:31 am
Go back and reread Gail's post. She shows you how to measure which procs are taking the longest & using up CPU. Once you identify the query or queries that are causing the problem, you examine them to see if they're doing RBAR processing or have multi-statement table valued functions or any other of the myriad poor choices that can be made in TSQL and then you begin to fix them. You can also look at the execution plans for the queries that are behaving badly & clean them up.
You said this happened suddenly. Have you made any changes to the systems or introduced new code or new sets of users recently? That could be part of the underlying cause.
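One way to check for recently introduced code (the one-day window is just an illustration): queries whose plans entered the cache recently show up with a recent creation_time in sys.dm_exec_query_stats.

SELECT qs.creation_time, qs.execution_count,
       qs.total_worker_time / qs.execution_count AS avg_cpu_us,
       st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.creation_time > DATEADD(day, -1, GETDATE())
ORDER BY qs.total_worker_time DESC;

Bear in mind that a plan's creation_time also resets on recompile, so treat this as a rough indicator rather than proof of new code.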
Viewing 15 posts - 1 through 15 (of 36 total)