CPU cycles

  • Hi all,

    I just wonder if someone has experience with measuring CPU cycles for users or applications running on SQL Server 2008 R2 and up?

    I'd like to have a clear overview of which applications put a certain load on the CPU...

    Any input will be appreciated.

    Regards,

    Guus

    Do you need a true full accounting of the CPU time per application, or do you just need to know in general which application uses the most CPU?

  • EricEyster,

    Thanks for replying...

    I need an overview of all applications and their load on the CPU... based on this load I'd like to calculate an application/CPU price (based on the total cost of the server(s) in use) to send an invoice to each user (and their application(s) which are running on a particular server).

    Guus

    I have a wait performance script that can be run for short periods of time to estimate which user or database is consuming the most resources, but it is still an estimate and does not indicate how much more the server could handle. If the number of applications is not changing, it could be used to assign a relative weight for back-billing customers (a rough sketch of the idea is below).

    The detailed approach via a trace can be very expensive on active systems, like an OLTP server.
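
    The actual script isn't posted here, but a minimal sketch of the snapshot idea might look like the following: sample cumulative CPU per login and application from sys.dm_exec_sessions, then run it again later and diff the two samples (the approach here is an assumption, not the script referred to above).

    -- Cumulative CPU per login/application for current sessions.
    -- cpu_time is milliseconds accumulated since each session logged in,
    -- so take two samples some time apart and subtract to get an interval.
    SELECT
        s.login_name,
        s.program_name,                    -- often blank; apps are not required to set it
        SUM(s.cpu_time)           AS cpu_time_ms,
        SUM(s.total_elapsed_time) AS elapsed_ms
    FROM sys.dm_exec_sessions AS s
    WHERE s.is_user_process = 1
    GROUP BY s.login_name, s.program_name
    ORDER BY cpu_time_ms DESC;

    Sessions that disconnect between samples are missed entirely, which is one reason this stays an estimate.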

    In general, you're going to have a hard time quantifying this precisely. The issue is that you're relying on one of two pieces of information: the login, or the application name in the connection string. The problem with the login is that so many people and applications share a single login, sometimes even the 'sa' login. That makes it difficult to differentiate queries from, say, the DBA and a given application if both are using the 'sa' login. The application value in the connection string is completely optional. Some will fill it in. Some won't.

    But what you can do is capture the query metrics (I'd probably use Extended Events, but you can use Trace) and then generate reports based on login and application, with the understanding that these are likely to be less than completely accurate.

    You can also toss in filters by database, which is likely to differentiate by application too.
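
    To make the Extended Events option concrete, a minimal sketch of such a session is below. The session name and file path are placeholders, and sql_batch_completed requires SQL Server 2012 or later; on 2008 R2 the available events are more limited, so a server-side trace may be the easier route there.

    -- Capture completed batches with their CPU, tagged by application,
    -- login and database, written to a file target.
    CREATE EVENT SESSION [QueryMetricsByApp] ON SERVER
    ADD EVENT sqlserver.sql_batch_completed
    (
        ACTION
        (
            sqlserver.client_app_name,        -- "Application Name" from the connection string
            sqlserver.server_principal_name,  -- the login
            sqlserver.database_name
        )
    )
    ADD TARGET package0.event_file
    (
        SET filename = N'C:\XE\QueryMetricsByApp.xel'  -- path is a placeholder
    );

    ALTER EVENT SESSION [QueryMetricsByApp] ON SERVER STATE = START;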

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Thanks all ...

    I already know that accuracy is an issue when using CPU cycles, and that capturing them is difficult.

    One of the options is to use a trace (other than the default trace, which does not contain the appropriate info) and stream it into a table using just the right trace parameters.

    Once it is loaded into the table, I will generate a data report periodically, save this data in a report table, and empty the trace table based upon a timestamp (a sketch of that step follows below).

    Using the CPU column (as part of the trace data captured) instead of CPU cycles is another option, and it is pretty accurate regarding CPU use...
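
    As a sketch of that aggregate-and-purge step (the table names and the five-minute settle window below are hypothetical):

    -- Roll staged trace rows up into a report table, then purge the
    -- rows just reported so the staging table stays small.
    DECLARE @cutoff datetime = DATEADD(MINUTE, -5, GETDATE());  -- settle window, an assumption

    INSERT INTO dbo.CpuReport (ReportTime, ApplicationName, LoginName, TotalCPU)
    SELECT GETDATE(), ApplicationName, LoginName, SUM(CPU)
    FROM dbo.TraceStage
    WHERE StartTime < @cutoff
    GROUP BY ApplicationName, LoginName;

    DELETE FROM dbo.TraceStage
    WHERE StartTime < @cutoff;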

    Guus

  • The only part of that statement I would disagree with is outputting trace data to a table. I wouldn't recommend that. I would output it to a file (or more accurately, a series of files) and then load that into a table. I know the tool can output directly to a table, but that doesn't make it the most efficient mechanism.
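
    For the load step, sys.fn_trace_gettable reads a trace file directly into a table, and passing DEFAULT for the file count makes it follow the rollover files too. A minimal sketch (the path and staging table name are placeholders):

    -- Import the trace files into a staging table.
    SELECT ApplicationName, LoginName, DatabaseName, CPU, Duration, StartTime
    INTO dbo.TraceStage
    FROM sys.fn_trace_gettable(N'D:\Traces\CpuTrace.trc', DEFAULT)
    WHERE EventClass = 12;  -- 12 = SQL:BatchCompleted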

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant Fritchey (2/26/2014)


    The only part of that statement I would disagree with is outputting trace data to a table. I wouldn't recommend that. I would output it to a file (or more accurately, a series of files) and then load that into a table. I know the tool can output directly to a table, but that doesn't make it the most efficient mechanism.

    You can also cause performance issues by sending the output back to your production server. I would recommend loading and analyzing your data somewhere else if possible. We have used SQL Express, which is free, on old hardware, which was also free, as long as the daily purge keeps the data size below the edition's limits.

    Server-side trace to files on a fast local drive is my preference. It has less overhead than using the Profiler GUI, no matter whether the GUI is sending its output to the screen or to a table elsewhere.

    I've crashed servers using the Profiler GUI before.

    Files can then be copied elsewhere and imported for analysis. I usually use my local machine for that.
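
    A minimal sketch of setting up such a server-side trace is below; the file path is a placeholder, and the event and column IDs come from the sp_trace_setevent documentation.

    -- Server-side trace to rollover files. Event 12 = SQL:BatchCompleted;
    -- columns 10, 11, 14, 18, 35 = ApplicationName, LoginName, StartTime,
    -- CPU and DatabaseName.
    DECLARE @TraceID int, @on bit = 1, @maxsize bigint = 256;  -- MB per file
    EXEC sp_trace_create @TraceID OUTPUT,
         @options = 2,                         -- 2 = TRACE_FILE_ROLLOVER
         @tracefile = N'D:\Traces\CpuTrace',   -- no .trc extension; it is added automatically
         @maxfilesize = @maxsize;

    EXEC sp_trace_setevent @TraceID, 12, 10, @on;  -- ApplicationName
    EXEC sp_trace_setevent @TraceID, 12, 11, @on;  -- LoginName
    EXEC sp_trace_setevent @TraceID, 12, 14, @on;  -- StartTime
    EXEC sp_trace_setevent @TraceID, 12, 18, @on;  -- CPU
    EXEC sp_trace_setevent @TraceID, 12, 35, @on;  -- DatabaseName
    EXEC sp_trace_setstatus @TraceID, 1;           -- start the trace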

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Another way to approach this is to use the cached query stats to estimate it. This has a similar caveat to most suggestions on this topic, which is that it's not guaranteed to be completely accurate.

    Having said that, if the applications each have their own database, you can get some idea by running a query like the following:

    -- Note: total_worker_time is reported in microseconds, so divide by
    -- 1000 to get milliseconds.
    WITH DB_CPU_Stats
    AS
    (
        SELECT
            DatabaseID,
            DB_NAME(DatabaseID) AS [DatabaseName],
            SUM(total_worker_time) / 1000 AS [CPU_Time_Ms]
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY
        (
            SELECT CONVERT(int, value) AS [DatabaseID]
            FROM sys.dm_exec_plan_attributes(qs.plan_handle)
            WHERE attribute = N'dbid'
        ) AS F_DB
        GROUP BY DatabaseID
    )
    SELECT
        ROW_NUMBER() OVER (ORDER BY [CPU_Time_Ms] DESC) AS [row_num],
        DatabaseName,
        [CPU_Time_Ms],
        CAST([CPU_Time_Ms] * 1.0 / SUM([CPU_Time_Ms]) OVER () * 100.0
             AS DECIMAL(5, 2)) AS [CPUPercent]
    FROM DB_CPU_Stats
    ORDER BY row_num;

    That will show you how much CPU time each database is responsible for, relative to total use on the server, based on queries still in cache.

    It should be noted that this is percentage of total use, so even if SQL Server is only pushing the server's CPU to 10%, these numbers will still add up to 100%.

    Combining the output of the above query with overall CPU utilization on the machine should give you a decent rough idea of how much each database is responsible for.
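
    One way to get that overall utilization from inside SQL Server is the scheduler monitor ring buffer, which records a CPU snapshot roughly once a minute. sys.dm_os_ring_buffers is undocumented, so treat this as a sketch rather than a guaranteed interface:

    -- Recent CPU snapshots: SQL Server's own CPU % and the idle %.
    DECLARE @ts_now bigint = (SELECT ms_ticks FROM sys.dm_os_sys_info);

    SELECT TOP (30)
        DATEADD(ms, -1 * (@ts_now - rb.[timestamp]), GETDATE()) AS EventTime,
        rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/ProcessUtilization)[1]', 'int') AS SqlCpuPercent,
        rb.record.value('(./Record/SchedulerMonitorEvent/SystemHealth/SystemIdle)[1]', 'int') AS SystemIdlePercent
    FROM
    (
        SELECT [timestamp], CONVERT(xml, record) AS record
        FROM sys.dm_os_ring_buffers
        WHERE ring_buffer_type = N'RING_BUFFER_SCHEDULER_MONITOR'
          AND record LIKE N'%<SystemHealth>%'
    ) AS rb
    ORDER BY rb.[timestamp] DESC;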

    Hope this helps!
