CPU cycles
Posted Tuesday, February 25, 2014 7:22 AM
Forum Newbie

Hi all,

I'm wondering whether anyone has experience with measuring CPU cycles per user or application on SQL Server 2008 R2 and later.

I'd like to get a clear overview of which applications put what load on the CPU.

Any input will be appreciated.

Regards,
Guus
Post #1544899
Posted Tuesday, February 25, 2014 7:28 AM
SSC Veteran

Do you need a true, full accounting of the CPU time per application, or do you just need to know in general which application uses the most CPU?
Post #1544906
Posted Tuesday, February 25, 2014 7:36 AM
Forum Newbie

EricEyster,

Thanks for replying...

I need an overview of all applications and their load on the CPU. Based on this load I'd like to calculate an application/CPU price (based on the total cost of the server(s) in use) so I can send an invoice to each user for the application(s) they are running on a particular server.

Guus
Post #1544917
Posted Tuesday, February 25, 2014 7:46 AM
SSC Veteran

I have a wait/performance script that can be run for short periods of time to estimate which user or database is consuming the most resources, but it is still an estimate and does not indicate how much more the server could handle. If the number of applications is not changing, it could be used to assign a relative weight for back-billing customers.

The detailed approach via a trace can be very expensive on active systems, like an OLTP server.
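
A lighter-weight sketch of the sampling idea (this is not the script mentioned above, just an illustration using the standard sessions DMV) is to snapshot cumulative CPU per login and application at the start and end of a window and compare the totals:

-- Rough sketch only: snapshot cumulative CPU per login/application.
-- cpu_time and total_elapsed_time are cumulative per session (in ms), and
-- sessions that disconnect drop out of this view, so short-lived connections
-- are undercounted.
SELECT
    s.login_name,
    s.program_name,
    SUM(s.cpu_time)           AS cpu_time_ms,
    SUM(s.total_elapsed_time) AS elapsed_ms,
    COUNT(*)                  AS session_count
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
GROUP BY s.login_name, s.program_name
ORDER BY cpu_time_ms DESC;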
Post #1544929
Posted Tuesday, February 25, 2014 8:02 AM


SSCoach

In general, you're going to have a hard time quantifying this precisely. The issue is that you're relying on one of two pieces of information: the login, or the application name in the connection string. The problem with the login is that so many people and applications share a single login, sometimes even the 'sa' login. That makes it difficult to differentiate queries from, say, the DBA and a given application if both are using the 'sa' login. The application value in the connection string is completely optional. Some will fill it in. Some won't.

But what you can do is capture the query metrics (I'd probably use Extended Events, but you can use Trace) and then generate reports based on login and application, with the understanding that these are likely to be less than completely accurate.

You can also toss in filters by database, which is likely to differentiate by application too.
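
As a rough sketch of the Extended Events route (SQL Server 2012+ syntax; the session name and file path here are made up, and on 2008 R2 the file target is the older package0.asynchronous_file_target), a session that tags completed batches with the login, application name and database could look like this:

-- Sketch only: capture completed batches with login/app/database so that the
-- cpu_time field on each event can later be aggregated per application.
CREATE EVENT SESSION [cpu_by_application] ON SERVER
ADD EVENT sqlserver.sql_batch_completed
(
    ACTION
    (
        sqlserver.client_app_name,        -- optional application name from the connection string
        sqlserver.server_principal_name,  -- the login
        sqlserver.database_name
    )
)
ADD TARGET package0.event_file
(
    SET filename = N'C:\XE\cpu_by_application.xel', max_file_size = 256
)
WITH (MAX_DISPATCH_LATENCY = 5 SECONDS);

-- Start it, and consider adding sqlserver.rpc_completed as well for procedure calls:
-- ALTER EVENT SESSION [cpu_by_application] ON SERVER STATE = START;
-- Afterwards read the files with sys.fn_xe_file_target_read_file and aggregate
-- cpu_time by client_app_name / server_principal_name.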


----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #1544943
Posted Wednesday, February 26, 2014 4:15 AM
Forum Newbie

Thanks all ...

I'm already aware that accuracy is an issue when using CPU cycles, and that they are difficult to capture.

One of the options is to use a trace (other than the default trace, which does not contain the appropriate information) and stream it into a table using just the right trace parameters.
Once it's loaded into the table I will generate a report periodically, save that data in a reporting table, and empty the trace table based on a timestamp.

Using the CPU column from the captured trace data instead of CPU cycles is another option, and it's pretty accurate regarding CPU use.
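
For example, once the trace rows have been loaded into a table, the periodic report could be along these lines (dbo.TraceCapture is a placeholder name; ApplicationName, LoginName, DatabaseName, CPU and StartTime are the standard trace columns, with CPU reported in milliseconds):

SELECT
    ApplicationName,
    LoginName,
    DatabaseName,
    SUM(CPU) AS total_cpu_ms,        -- relative CPU weight per application/login
    COUNT(*) AS statement_count
FROM dbo.TraceCapture                -- placeholder: table the trace data is loaded into
WHERE StartTime >= DATEADD(DAY, -1, GETDATE())
GROUP BY ApplicationName, LoginName, DatabaseName
ORDER BY total_cpu_ms DESC;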

Guus
Post #1545318
Posted Wednesday, February 26, 2014 4:24 AM


SSCoach

The only part of that statement I would disagree with is outputting trace data to a table. I wouldn't recommend that. I would output it to a file (or more accurately, a series of files) and then load that into a table. I know the tool can output directly to a table, but that doesn't make it the most efficient mechanism.
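
For instance (the file path and table name are placeholders), the files can be pulled into a table with fn_trace_gettable once they have been copied off the server:

-- Sketch only: load a trace file plus its rollover files into a table,
-- rather than having the trace write to a table directly.
SELECT *
INTO dbo.TraceCapture
FROM sys.fn_trace_gettable(N'C:\Traces\cpu_by_app.trc', DEFAULT);
-- DEFAULT reads the initial file and all of its rollover files.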

----------------------------------------------------
"The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood..." Theodore Roosevelt
The Scary DBA
Author of: SQL Server 2012 Query Performance Tuning
SQL Server 2008 Query Performance Tuning Distilled
and
SQL Server Execution Plans

Product Evangelist for Red Gate Software
Post #1545324
Posted Wednesday, February 26, 2014 5:47 AM
SSC Veteran

Grant Fritchey (2/26/2014)
The only part of that statement I would disagree with is outputting trace data to a table. I wouldn't recommend that. I would output it to a file (or more accurately, a series of files) and then load that into a table. I know the tool can output directly to a table, but that doesn't make it the most efficient mechanism.


You can also cause performance issues by sending the trace output back to your production server. I would recommend loading and analyzing the data somewhere else if possible. We have used SQL Server Express, which is free, on old hardware, which was also free, as long as the daily purge keeps the data size below the Express edition's database size limit.
Post #1545346
Posted Wednesday, February 26, 2014 5:56 AM


SSC-Forever

A server-side trace to files on a fast local drive is my preference. It has less overhead than using the Profiler GUI, regardless of whether the GUI is sending its output to the screen or to a table elsewhere.

I've crashed servers using the Profiler GUI before.

Files can then be copied elsewhere and imported for analysis. I usually use my local machine for that.
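
A minimal server-side trace along those lines might look like the following (the path, rollover size and event selection are assumptions, not anyone's exact script):

-- Sketch only: trace SQL:BatchCompleted (event 12) to rollover files, capturing
-- ApplicationName (10), LoginName (11), StartTime (14), CPU (18), DatabaseName (35).
DECLARE @traceid INT, @maxfilesize BIGINT = 256, @on BIT = 1;

-- 2 = TRACE_FILE_ROLLOVER: start a new file each time @maxfilesize (MB) is reached
EXEC sp_trace_create @traceid OUTPUT, 2, N'C:\Traces\cpu_by_app', @maxfilesize, NULL;

EXEC sp_trace_setevent @traceid, 12, 10, @on;
EXEC sp_trace_setevent @traceid, 12, 11, @on;
EXEC sp_trace_setevent @traceid, 12, 14, @on;
EXEC sp_trace_setevent @traceid, 12, 18, @on;
EXEC sp_trace_setevent @traceid, 12, 35, @on;

EXEC sp_trace_setstatus @traceid, 1;   -- 1 = start the trace
SELECT @traceid AS traceid;            -- keep this id to stop (0) and close (2) the trace later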



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #1545349
Posted Thursday, February 27, 2014 3:32 PM
SSC-Enthusiastic

Another way to approach this is to use the cached query stats to estimate it. This has a similar caveat to most suggestions on this topic, which is that it's not guaranteed to be completely accurate.

Having said that, if the applications each have their own database, you can get some idea by running a query like the following:

WITH DB_CPU_Stats
AS
(
    -- Sum CPU per database from the plan cache.
    -- Note: total_worker_time is reported in microseconds, so the alias below is
    -- really microseconds rather than ms; the relative percentages are unaffected.
    SELECT
        DatabaseID,
        DB_NAME(DatabaseID) AS [DatabaseName],
        SUM(total_worker_time) AS [CPU_Time_Ms]
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY
    (
        -- Pull the database id out of the plan attributes for each cached plan
        SELECT CONVERT(int, value) AS [DatabaseID]
        FROM sys.dm_exec_plan_attributes(qs.plan_handle)
        WHERE attribute = N'dbid'
    ) AS F_DB
    GROUP BY DatabaseID
)
SELECT
    ROW_NUMBER() OVER (ORDER BY [CPU_Time_Ms] DESC) AS [row_num],
    DatabaseName,
    [CPU_Time_Ms],
    CAST([CPU_Time_Ms] * 1.0 / SUM([CPU_Time_Ms]) OVER () * 100.0 AS DECIMAL(5, 2)) AS [CPUPercent]
FROM DB_CPU_Stats
ORDER BY row_num;

That will show you how much CPU time each database is responsible for, relative to total use on the server, based on queries still in cache.

It should be noted that this is a percentage of SQL Server's total CPU use, so even if SQL Server is only pushing the server's CPU to 10%, these numbers will still add up to 100%.

Combining the output of the above query with overall CPU utilization on the machine should give you a decent rough idea of how much each database is responsible for.
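
One commonly used way to get that machine-level number from inside SQL Server (shown here as a sketch; the ring buffer records a sample roughly once a minute and is an estimate, not a precise accounting) is the scheduler monitor ring buffer:

-- Sketch: recent CPU snapshots. ProcessUtilization = CPU used by this SQL Server
-- instance, SystemIdle = idle CPU; whatever remains belongs to other processes.
DECLARE @ts_now BIGINT = (SELECT ms_ticks FROM sys.dm_os_sys_info);

SELECT TOP (30)
    DATEADD(ms, -1 * (@ts_now - [timestamp]), GETDATE()) AS event_time,
    record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS sql_cpu_pct,
    record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int')         AS system_idle_pct
FROM
(
    SELECT [timestamp], CONVERT(xml, record) AS record
    FROM sys.dm_os_ring_buffers
    WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
      AND record LIKE N'%<SystemHealth>%'
) AS rb
ORDER BY [timestamp] DESC;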

Hope this helps!
Post #1546153