CPU is pegged

  • My CPU is maxed out. Is there anything I can do besides add more and bigger processors?

    High CPU would be caused by heavy disk reads and writes, right?

    What would use up my RAM?

    -K

  • Me again.

    I have been running SQL Profiler for 3 days straight.

    How many resources would that use up?

    Could that be hosing my CPU? Is Profiler only a resource hit when activity is causing it to log?

  • Profiler is much more expensive to use in SQL 7 than in SQL 2000. If you're running it on the server it adds to the load more than if you run it from another workstation. Closing it down for a few minutes and checking CPU usage via Task Manager/Perfmon should tell you if it's the culprit.

    Profiling continuously is probably more than you need. I'd recommend running for a 5-10 minute period and looking for specific problems - queries with high CPU or read counts.

    High CPU is often (but not always) a symptom of memory shortages driving disk activity. If you've truly tuned the box and the queries AND the application, and CPU usage is still over 80%, it's time to upgrade.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/
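    A quick way to see which connections are actually burning CPU is to query the sysprocesses system table (a rough sketch for SQL 2000; the cpu and physical_io columns are cumulative per connection, so take two samples a minute apart and compare the deltas):

    ```sql
    -- Sketch: find the busiest connections on SQL Server 2000.
    -- cpu and physical_io are cumulative for the life of each connection,
    -- so sample twice and compare the differences.
    SELECT spid,            -- server process id
           loginame,        -- login that owns the connection
           cpu,             -- cumulative CPU time
           physical_io,     -- cumulative reads + writes
           memusage         -- pages of memory in use
    FROM master..sysprocesses
    WHERE spid > 50         -- skip system spids
    ORDER BY cpu DESC
    ```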

  • It also depends on where Profiler is running and what you are capturing. I would suggest stopping Profiler (close it; hopefully you output to a file or table), then seeing if the CPU drops off and stays down. If not, refer back to your saved Profiler output to look for high-CPU queries. Also, as Andy stated, high CPU usually points to a memory shortage, which, if you were running Profiler on that machine rather than against it from another, may be because the visual interface was so loaded with information.

    "Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)
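    If the trace was saved to a table, a rough filter for the expensive statements might look like this (the column and event class numbers are from the standard SQL 2000 trace schema; dbo.MyTrace is a hypothetical name for your saved trace table):

    ```sql
    -- Sketch: pull the worst offenders out of a Profiler trace that was
    -- saved to a table. dbo.MyTrace is a made-up table name.
    SELECT TOP 20
           TextData,       -- the statement or batch text
           CPU,            -- CPU milliseconds
           Reads,          -- logical reads
           Duration        -- elapsed time
    FROM dbo.MyTrace
    WHERE EventClass IN (10, 12)   -- RPC:Completed, SQL:BatchCompleted
    ORDER BY CPU DESC
    ```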

  • Andy,

    I am not sure what you meant by:

    "High cpu is often (but not always) a symptom of memory shortages driving disk activity. "

    Also, each server has 8 GB of RAM, but I have never been able to get it to use more than 2 GB. I am running SQL Server Enterprise and I have all the correct switches in the boot.ini file.

    I am left to assume that our programs are not memory intensive.

  • Did you also set the AWE switch in SQL?

    From SQL BOL

    awe enabled Option

    In Microsoft® SQL Server™ 2000, you can use the Microsoft Windows® 2000 Address Windowing Extensions (AWE) API to support up to a maximum of 64 gigabytes (GB) of physical memory. The specific amount of memory you can use depends on hardware configuration and operating system support.

    Note This feature is available only in the SQL Server 2000 Enterprise and Developer editions.

    Enabling AWE

    To enable AWE, set awe enabled to 1. SQL Server will reserve almost all available memory, leaving 128 megabytes (MB) or less, unless a value has been specified for max server memory.

    If the option has been successfully enabled, the message "Address Windowing Extension enabled" is printed in the SQL Server error log when the instance of SQL Server 2000 is started.

    awe enabled is an advanced option. If you are using the sp_configure system stored procedure to change the setting, you can change awe enabled only when show advanced options is set to 1. You must restart the instance of SQL Server 2000 for changes to take effect.

    Disabling AWE

    To disable AWE, set awe enabled to 0. This setting is the default. The AWE API is not used. SQL Server 2000 operates in a normal dynamic memory allocation mode and is limited to 3 GB of physical memory.

    Usage Considerations

    Before enabling AWE, consider the following:

    When awe enabled is set to 1, instances of SQL Server 2000 do not dynamically manage the size of the address space. SQL Server will reserve and lock almost all available memory (or the value of max server memory if the option has been set) when the server is started. It is strongly recommended that you set a value for the max server memory option each time you enable AWE. Otherwise other applications or instances of SQL Server 2000 will have less than 128 MB of physical memory in which to run.

    If the total available memory is less than 3 GB, the instance of SQL Server 2000 will be started in non-AWE mode even if awe enabled is set to 1. In this situation, you do not need to manage AWE memory because dynamic memory allocation is used automatically.

    You can determine the amount of memory you can safely allocate to instances of SQL Server 2000 by identifying how much memory is available after all other applications to be used on the computer have been started.

    Use the SQL Server Performance Monitor Total Server Memory (KB) counter to determine how much memory is allocated by the instance of SQL Server running in AWE mode. Configure the max server memory option to leave some additional memory free to allow for the varying needs of other applications and Windows 2000. For more information, see Monitoring Memory Usage.

    Important Using the awe enabled option and the max server memory setting can have a performance impact on other applications or on SQL Server running in a multi-instance or cluster environment. For more information about using AWE memory, see Managing AWE Memory.

    Example

    The following example shows how to enable AWE and configure a limit of 6 GB for max server memory:

    sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    sp_configure 'awe enabled', 1
    RECONFIGURE
    GO
    sp_configure 'max server memory', 6144
    RECONFIGURE
    GO


    You must have AWE enabled in SQL Server for it to take advantage of the extra memory, so your memory may actually be pegged from SQL's standpoint.

    "Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)
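    To check whether the option is already on, you can run sp_configure with no value, which just reports the current setting (a sketch; 'awe enabled' is an advanced option, so advanced options must be visible first):

    ```sql
    -- Sketch: report the current AWE and max server memory settings.
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    EXEC sp_configure 'awe enabled'        -- run_value of 1 means AWE is on
    EXEC sp_configure 'max server memory'
    ```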

  • Are you using stored procedures?

    At the moment, my development team is in the process of eliminating all stored procedures from MSSQL 2K - the reason - pegged CPUs. Our application, a data load operation, can peg an 8 way Compaq with 8 gig of RAM, effectively killing the machine. We found stored procedures to be the cause.

    jcogle, SPs should not be the issue in general - only the code inside an SP can cause a problem like this. SPs offer lots of benefits, and I have heard of no issues at all in regards to using SPs. How did you make this determination, or where were you given this information? It is incorrect.

    "Don't roll your eyes at me. I will tape them in place." (Teacher on Boston Public)

  • I have never heard of someone removing stored procedures because they are causing problems. Usually it is the other way around. Any code running inside a stored procedure will probably run worse when removed. Check the code inside the procs causing problems. Are you using excessive cursors? Are you doing mass updates that could be facing contention issues from other users?

    It takes some work, but you will probably find that one part of the procedure is the problem. A trick I've learned over the years to isolate why a procedure is taking too long is to start by commenting out all the code in the procedure. Then, starting from the top of the procedure, re-enable the code bit by bit. You should be able to isolate the section(s) that are causing the slowdown, and then you can fix the SQL.

    Hope that helps.

    Frank
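    As a sketch of that technique (the procedure and step contents here are made up purely for illustration):

    ```sql
    -- Hypothetical procedure used to illustrate isolating a slow section:
    -- comment everything out, then re-enable one block at a time from the top.
    ALTER PROCEDURE dbo.usp_NightlyLoad
    AS
    BEGIN
        -- Step 1: re-enabled first; timed fine.
        UPDATE dbo.Staging SET Processed = 0

        /* Step 2: still commented out while timing step 1.
        INSERT INTO dbo.Fact (KeyCol, Amount)
        SELECT KeyCol, Amount FROM dbo.Staging
        */

        /* Step 3: suspect cursor; re-enable last.
        DECLARE c CURSOR FOR SELECT KeyCol FROM dbo.Staging
        */
    END
    ```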

  • We have seen SQL Server's performance apparently impacted by SPs a number of times. We run web front ends from one IIS box and SQL Server from a different server.

    The fix for us was to add RAM to the web server running Cold Fusion. There was nothing wrong with the SPs. It took us a while to figure this out the first time it happened!

  • It makes sense that if you move processing that was done on the client (like a cursor-type operation) to a stored proc, you will increase utilization of the server. It's not that the SP is less efficient - in almost all cases a compiled proc should give you a performance boost (how much varies with the plan, complexity, etc.) - it's just that now you have the box doing more work.

    You get savings in two ways with a proc. One is that the plan is precompiled, compared to dynamic SQL sent from the client. If you only run the proc once, you don't see much benefit! The other is that you're usually reducing round trips/encapsulating multiple steps, so the overall operation should run faster. There's no magic.

    I'm not sure how a proc can cause a problem if it's executing the same code that used to be sent by the client as dynamic SQL (when I say dynamic, I mean a Connection.Execute of a SQL string rather than an exec()).

    Andy

    http://www.sqlservercentral.com/columnists/awarren/
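    The round-trip savings just mean bundling steps the client used to send one at a time into a single call (the table and procedure names below are hypothetical):

    ```sql
    -- Sketch: three statements the client previously sent separately,
    -- wrapped in one proc so the whole operation is a single round trip.
    CREATE PROCEDURE dbo.usp_ArchiveOrder
        @OrderID int
    AS
    BEGIN
        INSERT INTO dbo.OrdersArchive
        SELECT * FROM dbo.Orders WHERE OrderID = @OrderID

        DELETE FROM dbo.OrderDetails WHERE OrderID = @OrderID
        DELETE FROM dbo.Orders WHERE OrderID = @OrderID
    END
    ```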
