Slows system performance

  • using sp_spaceused

    I get -

    DB size - 10011.13 MB

    Unallocated space - 288.22 MB

    Data - 3779736 KB

    Index - 5912608 KB

    Unused - 140264 KB

    Reserved - 9832608 KB

    (Note: I am doing performance testing in my DB, where I have only 2 tables.)

  • Can you run this please?

    EXEC sp_spaceused 'mytable'

    - change the table name to the name of the table you are deleting from.

    Also, run this:

    DBCC SQLPERF ( LOGSPACE )

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • Result of EXEC sp_spaceused 'tablename'

    Rows = 12309864

    Reserved = 6976112 KB

    Data = 5038728 KB

    Index = 1795848 KB

    Unused = 141536 KB

    Result of DBCC SQLPERF ( LOGSPACE )

    Log Size (MB) = 7584.117

    Log Space Used (%) = 97.98335
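
    Since the log shows as roughly 98% used, it may also be worth checking what is keeping the log from being reused. A minimal sketch (this query isn't from the thread, it just uses the standard sys.databases catalog view):

    -- What is preventing log space from being reused for the current database?
    SELECT name, recovery_model_desc, log_reuse_wait_desc
    FROM sys.databases
    WHERE name = DB_NAME();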

  • Last of all, can you post the actual batch please? Change the table name if you need to.


  • BEGIN TRY
    -- Variables the original batch referenced without declaring;
    -- @filterDate is assumed to be supplied by the caller (GETDATE() is only a placeholder)
    DECLARE @filterDate DATETIME = GETDATE();
    DECLARE @Id BIGINT = 0;
    DECLARE @DeleteRowcount INT = 1;

    CREATE TABLE #aaa
    (
    Id BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    xyzmasterCode VARCHAR(15) NOT NULL,
    NumberOfRows INT NOT NULL
    );

    INSERT INTO #aaa (xyzmasterCode, NumberOfRows)
    SELECT xyzmasterCode, COUNT(*)
    FROM xyz
    WHERE xyzmasterCode BETWEEN '0' AND 'Z'
    AND filterDate <= @filterDate
    GROUP BY xyzmasterCode;

    WHILE 1 = 1
    BEGIN
    BEGIN TRAN;

    DELETE a
    FROM xyz a
    INNER JOIN #aaa ON a.xyzmasterCode = #aaa.xyzmasterCode
    WHERE #aaa.Id BETWEEN @Id AND @Id + 100
    AND a.filterDate <= @filterDate;

    -- Capture how many rows this batch removed so the loop can terminate
    SET @DeleteRowcount = @@ROWCOUNT;

    DELETE J
    FROM xyzmaster J
    INNER JOIN #aaa ON J.Code = #aaa.xyzmasterCode
    WHERE #aaa.Id BETWEEN @Id AND @Id + 100;

    COMMIT TRANSACTION;
    CHECKPOINT;

    IF @DeleteRowcount <= 0
    BREAK;

    SET @Id = @Id + 100;
    END
    END TRY
    BEGIN CATCH
    IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
    DECLARE @ErrMsg NVARCHAR(4000) = ERROR_MESSAGE();
    RAISERROR(@ErrMsg, 16, 1);
    END CATCH

  • That looks ok to me, other than the joins to the temp table not being optimized. You don't really need the temp table anyway:

    DECLARE @RowsDeleted INT = 1000;
    -- @filterDate is assumed to be declared and set earlier in the batch

    WHILE @RowsDeleted > 0
    BEGIN
    BEGIN TRAN;
    DELETE TOP(1000) s
    FROM [Experimental].[dbo].[SampleData] s
    WHERE xyzmasterCode BETWEEN '0' AND 'Z'
    AND filterDate <= @filterDate;
    SET @RowsDeleted = @@ROWCOUNT;
    COMMIT TRAN;
    CHECKPOINT;

    BEGIN TRAN;
    DELETE TOP(1000) s
    FROM [Experimental].[dbo].[SampleData] s
    WHERE xyzmasterCode BETWEEN '0' AND 'Z';
    SET @RowsDeleted = @RowsDeleted + @@ROWCOUNT;
    COMMIT TRAN;
    CHECKPOINT;
    END

    You could run it as one transaction spanning both deletes or as two, as shown - I'd try both.
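
    For reference, a minimal sketch of the one-transaction-per-iteration variant mentioned above (same assumed table, column and variable names; @filterDate is again assumed to be set earlier):

    DECLARE @RowsDeleted INT = 1000;
    WHILE @RowsDeleted > 0
    BEGIN
    BEGIN TRAN;
    -- First delete: rows at or before the cutoff date
    DELETE TOP(1000) s
    FROM [Experimental].[dbo].[SampleData] s
    WHERE xyzmasterCode BETWEEN '0' AND 'Z'
    AND filterDate <= @filterDate;
    SET @RowsDeleted = @@ROWCOUNT;
    -- Second delete, inside the same transaction
    DELETE TOP(1000) s
    FROM [Experimental].[dbo].[SampleData] s
    WHERE xyzmasterCode BETWEEN '0' AND 'Z';
    SET @RowsDeleted = @RowsDeleted + @@ROWCOUNT;
    COMMIT TRAN;
    CHECKPOINT;
    END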


  • ChrisM@Work (4/15/2013)


    That looks ok to me, other than the joins to the temp table not being optimized. You don't really need the temp table anyway: [batched DELETE TOP loop quoted in full in the post above]

    Add a WAITFOR DELAY statement after the commit. This will help to reduce blocking.
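
    For illustration, a minimal sketch of where the delay could sit in ChrisM's loop above (the one-second duration is an arbitrary assumption; tune it for your system):

    SET @RowsDeleted = @@ROWCOUNT;
    COMMIT TRAN;
    CHECKPOINT;
    -- Pause between batches so blocked sessions get a chance to acquire their locks
    WAITFOR DELAY '00:00:01';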

  • baabhu (4/17/2013)



    Add a WAITFOR DELAY statement after the commit. This will help to reduce blocking.

    Can you explain what you mean by this, baabhu? I can understand why a delay might be useful - removing obsolete log data isn't instantaneous - but I don't understand what this has to do with blocking. Cheers.


    [font="Arial"]Low-hanging fruit picker and defender of the moggies[/font]

    For better assistance in answering your questions, please read this[/url].


    Understanding and using APPLY, (I)[/url] and (II)[/url] Paul White[/url]

    Hidden RBAR: Triangular Joins[/url] / The "Numbers" or "Tally" Table: What it is and how it replaces a loop[/url] Jeff Moden[/url]

  • Hi

    From your results it seems you have a lot of schema locks.

    I had a similar scenario yesterday, without locks but with high CPU. Some update scripts had been running 'forever'. We stopped them, rebuilt the indexes, ran them again and they went through fine. Maybe this will help you.

    Regards

    IgorMi

    Igor Micev, My blog: www.igormicev.com

  • IgorMi (4/21/2013)


    I had a similar scenario yesterday, without locks but with high CPU. Some update scripts had been running 'forever'. We stopped them, rebuilt the indexes, ran them again and they went through fine.

    That was stale statistics resulting in a sub-optimal exec plan (probably estimating one row and getting thousands or millions). Next time just update stats, don't waste time doing a full index rebuild.

    p.s. All queries take shared schema locks when they run; this is normal behaviour. Hence having lots of schema locks is not a problem (unless you have a bunch of schema mod locks too).
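
    A minimal sketch of the statistics refresh suggested above (the table name is borrowed from the batch earlier in the thread; adjust to suit):

    -- Refresh statistics on the table being deleted from, rather than rebuilding its indexes
    UPDATE STATISTICS dbo.xyz WITH FULLSCAN;
    -- Or refresh statistics database-wide
    EXEC sp_updatestats;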

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • That was stale statistics resulting in a sub-optimal exec plan (probably estimating one row and getting thousands or millions). Next time just update stats, don't waste time doing a full index rebuild.

    You're right. We did that because there were no other options at the moment, and we needed it immediately.

    p.s. All queries take shared schema locks when they run; this is normal behaviour. Hence having lots of schema locks is not a problem (unless you have a bunch of schema mod locks too).


  • Below is what I think is a simplified version of what you're attempting to accomplish. It involves a single indexed scan/delete against the transactional and master tables, so I expect it would perform better.

    -- For the purpose of this example, create the xyzmaster table.
    if object_id('dbo.xyzmaster') is not null drop table xyzmaster;
    create table xyzmaster (xyzmasterCode char(1) not null primary key);

    -- For the purpose of this example, create the xyz table.
    if object_id('dbo.xyz') is not null drop table xyz;
    create table xyz (xyzmasterCode char(1) not null, filterDate datetime not null);

    -- Also, it's important to have an index on xyzmasterCode:
    create index ix_xyzmasterCode on xyz (xyzmasterCode);

    -- Create a temp table to hold deleted codes. It's important to have
    -- an index on xyzmasterCode. Also, we're making xyzmasterCode unique,
    -- because that's all we need and it will keep this recordset small.
    -- The ignore_dup_key property ensures xyzmasterCode stays unique.
    if object_id('tempdb..#DeletedCodes') is not null drop table #DeletedCodes;
    create table #DeletedCodes (xyzmasterCode char(1) not null);
    create unique nonclustered index [uk_xyzmasterCode]
    on #DeletedCodes (xyzmasterCode)
    with (IGNORE_DUP_KEY = on);

    -- Supply the date parameter:
    declare @filterDate datetime = getdate();

    -- Delete transactional records from xyz.
    -- Note that a distinct list of deleted codes will be inserted into #DeletedCodes.
    delete
    from xyz
    output deleted.xyzmasterCode
    into #DeletedCodes (xyzmasterCode)
    where (xyzmasterCode between '0' and 'Z')
    and filterDate <= @filterDate;

    -- Next, delete from xyzmaster all inactive codes:
    delete xyzmaster
    from xyzmaster
    join #DeletedCodes
    on #DeletedCodes.xyzmasterCode = xyzmaster.xyzmasterCode;
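
    As a quick, purely illustrative sanity check after running the two deletes (using only the #DeletedCodes table created above):

    -- Which master codes were affected, and how many were there?
    select xyzmasterCode from #DeletedCodes order by xyzmasterCode;
    select count(*) as DeletedCodeCount from #DeletedCodes;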

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
