Slow system performance

  • Dear All

    I am running a process (update/delete/insert) of 2 million records. After some time, this process makes my system very slow.

    How do I find out what the issue is?

    In Task Manager it shows:

    CPU usage: max 15%

    Memory: 4.14 GB

    Regards

    Kirshan1

  • Check wait stats while the operation is running

  • How to check this?

  • I found this article a helpful introduction to what wait statistics are, how to check them, and most importantly how to interpret the results 🙂

    http://www.sqlskills.com/blogs/paul/wait-statistics-or-please-tell-me-where-it-hurts/
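    For a very quick first look, a minimal sketch (unlike the full script in the article, this does not filter out benign system waits):

    SELECT TOP (10) wait_type, wait_time_ms, signal_wait_time_ms, waiting_tasks_count
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;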

  • From this website, download a book called Troubleshooting SQL Server: A Guide for the Accidental DBA by Jonathan Kehayias and Ted Krueger

    Chapter 1.

  • Krishna1 (4/4/2013)


    I am running a process (update/delete/insert) of 2 million records. After some time, this process makes my system very slow. How do I find out what the issue is? ...

    Post the estimated plan for folks to have a look now, and then the actual execution plan when you can.
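    If you're not sure how to capture the plans, a minimal sketch (the comments mark where your own batch goes):

    -- estimated plan: returns the plan XML without executing the statement
    SET SHOWPLAN_XML ON;
    GO
    -- your UPDATE/DELETE/INSERT batch here
    GO
    SET SHOWPLAN_XML OFF;
    GO

    -- actual plan: executes the statement and includes runtime row counts
    SET STATISTICS XML ON;
    GO
    -- your UPDATE/DELETE/INSERT batch here
    GO
    SET STATISTICS XML OFF;
    GO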

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • I have run the following query from the site mentioned:

    WITH [Waits] AS
    (SELECT
        [wait_type],
        [wait_time_ms] / 1000.0 AS [WaitS],
        ([wait_time_ms] - [signal_wait_time_ms]) / 1000.0 AS [ResourceS],
        [signal_wait_time_ms] / 1000.0 AS [SignalS],
        [waiting_tasks_count] AS [WaitCount],
        100.0 * [wait_time_ms] / SUM ([wait_time_ms]) OVER() AS [Percentage],
        ROW_NUMBER() OVER(ORDER BY [wait_time_ms] DESC) AS [RowNum]
    FROM sys.dm_os_wait_stats
    WHERE [wait_type] NOT IN (
        N'CLR_SEMAPHORE', N'LAZYWRITER_SLEEP',
        N'RESOURCE_QUEUE', N'SQLTRACE_BUFFER_FLUSH',
        N'SLEEP_TASK', N'SLEEP_SYSTEMTASK',
        N'WAITFOR', N'HADR_FILESTREAM_IOMGR_IOCOMPLETION',
        N'CHECKPOINT_QUEUE', N'REQUEST_FOR_DEADLOCK_SEARCH',
        N'XE_TIMER_EVENT', N'XE_DISPATCHER_JOIN',
        N'LOGMGR_QUEUE', N'FT_IFTS_SCHEDULER_IDLE_WAIT',
        N'BROKER_TASK_STOP', N'CLR_MANUAL_EVENT',
        N'CLR_AUTO_EVENT', N'DISPATCHER_QUEUE_SEMAPHORE',
        N'TRACEWRITE', N'XE_DISPATCHER_WAIT',
        N'BROKER_TO_FLUSH', N'BROKER_EVENTHANDLER',
        N'FT_IFTSHC_MUTEX', N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP',
        N'DIRTY_PAGE_POLL', N'SP_SERVER_DIAGNOSTICS_SLEEP')
    )
    SELECT
        [W1].[wait_type] AS [WaitType],
        CAST ([W1].[WaitS] AS DECIMAL(14, 2)) AS [Wait_S],
        CAST ([W1].[ResourceS] AS DECIMAL(14, 2)) AS [Resource_S],
        CAST ([W1].[SignalS] AS DECIMAL(14, 2)) AS [Signal_S],
        [W1].[WaitCount] AS [WaitCount],
        CAST ([W1].[Percentage] AS DECIMAL(4, 2)) AS [Percentage],
        CAST (([W1].[WaitS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgWait_S],
        CAST (([W1].[ResourceS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgRes_S],
        CAST (([W1].[SignalS] / [W1].[WaitCount]) AS DECIMAL (14, 4)) AS [AvgSig_S]
    FROM [Waits] AS [W1]
    INNER JOIN [Waits] AS [W2]
        ON [W2].[RowNum] <= [W1].[RowNum]
    GROUP BY [W1].[RowNum], [W1].[wait_type], [W1].[WaitS],
        [W1].[ResourceS], [W1].[SignalS], [W1].[WaitCount], [W1].[Percentage]
    HAVING SUM ([W2].[Percentage]) - [W1].[Percentage] < 95; -- percentage threshold
    GO

    The result is:

    PAGEIOLATCH_SH     48.96%
    SLEEP_BPOOL_FLUSH  30.46%
    PAGEIOLATCH_EX     13.40%
    IO_COMPLETION       4.05%

    While my process is running, it is very difficult to open any other window or Explorer...
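    Those top waits (PAGEIOLATCH_SH/EX and IO_COMPLETION) all measure time spent waiting on disk I/O. A minimal follow-up sketch for per-file read and write latency (no assumptions about your file layout):

    SELECT DB_NAME(vfs.database_id) AS database_name,
           vfs.file_id,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    ORDER BY vfs.io_stall_read_ms DESC;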

  • Krishna1 (4/7/2013)


    ...While my process is running, it is very difficult to open any other window or Explorer...

    Are you running this process on <local>?


  • Yes. The server is installed on my laptop and I am doing some testing.

  • What are your autogrowth settings? If you're not sure about the effect autogrowth can have on performance and how and why to change the settings, this excellent article by Greg Larsen will take you a long way.

    The article includes a script for looking at autogrowth events which have already occurred. Run it and check the results - you may have a quick win here.
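    As a quick check alongside the article's script, a minimal sketch (run in the database in question; the 64MB increment and the database/file names are only examples):

    -- current growth settings; is_percent_growth = 1 means percentage growth
    SELECT name, type_desc, growth, is_percent_growth
    FROM sys.database_files;

    -- example: switch the log file to a fixed growth increment
    -- (YourDatabase and YourDatabase_log are placeholders)
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_log, FILEGROWTH = 64MB);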


  • Thanks for the link, it was really informative. I changed the setting after reading the article.

    My log file is growing a lot: 10 MB every 8-9 seconds. What could be the reason?

    I am reading records from a table of millions of rows and updating one column in the same table, a maximum of 10,000 rows at a time. Each update is wrapped in BEGIN TRAN and COMMIT TRAN, so why should the log file keep growing?

    Note: I have one index which has the updated column in its INCLUDE list.
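    To watch the log while the batches run, a quick check (both commands are standard; no assumptions about your setup):

    DBCC SQLPERF(LOGSPACE);  -- log size (MB) and percent used, per database

    -- why each log cannot currently be truncated/reused
    SELECT name, log_reuse_wait_desc FROM sys.databases;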

  • Your transaction log is currently too small to accommodate logging of the changes you are making to your tables in this series of statements.

    Check which recovery model you are using. If your local instance is for training/testing, you probably don't need log backups, in which case setting your recovery model to Simple is likely to help. For a detailed look at transaction logs and an explanation of why this is the case, try this article by Gail Shaw.
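    For example, to check and change it (the database name is a placeholder):

    -- check the current recovery model of every database
    SELECT name, recovery_model_desc
    FROM sys.databases;

    -- switch a test database to SIMPLE recovery
    ALTER DATABASE YourDatabase SET RECOVERY SIMPLE;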


  • Thanks for the link. Autogrowth was set to 10%, and every few seconds it was growing the log. After I changed it to a fixed size of a few MB, performance has improved.

    I have the following code:

    DECLARE @rc INT;

    WHILE 1 = 1
    BEGIN
        BEGIN TRAN;

        -- table and column names were not given in the post; these are placeholders
        DELETE TOP (1000) FROM dbo.MyTable
        OUTPUT deleted.* INTO dbo.DeleteHistory
        WHERE PrimaryKeyCol = @variable;

        SET @rc = @@ROWCOUNT;  -- capture before COMMIT resets @@ROWCOUNT

        COMMIT TRAN;           -- commit every batch, including the final one

        IF @rc < 1000
            BREAK;             -- fewer rows than the batch size: all done
    END

    The OUTPUT table (DeleteHistory) is also in the same database. I noticed that in this case the tempdb data and log files are now growing too. If I am deleting records and adding the same information to another table, why should the data and log files grow?

    Also, there is a COMMIT TRAN after every 1000 records; if the process breaks, it takes a long time to recover.
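    A minimal sketch to see what is actually consuming tempdb (the version store, used by triggers and row-versioning isolation levels, is a common culprit during large deletes):

    SELECT SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
           SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
           SUM(version_store_reserved_page_count)   * 8 AS version_store_kb
    FROM tempdb.sys.dm_db_file_space_usage;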

  • Which recovery model is your db set to?


  • recovery model - simple
