How to stop Transaction log from growing during Delete

  • cdi_h

    SSC Veteran

    Points: 270

    SQL Server 7.0 database question: to free up some disk space, we are deleting old records (not all records, just some based on a date) from the database which are no longer needed and are not important. I noticed that the transaction log grows when I run the delete statement. As I mentioned, the records being deleted are not important. Is there a way to stop the transaction log from growing while I run the delete statement?

    The deletion of records has to be done on multiple databases on different servers, so I am trying to find a way to do this.

    Please help me!!

    Thanks


  • Antares686

    SSC Guru

    Points: 125444

    No, the transaction log will grow even if you have set truncate on checkpoint. This is to ensure the DELETEs can all be rolled back should an error occur in the batch.

    You can, however, control the growth a bit: use a WHILE loop with SET ROWCOUNT to limit how many rows are deleted in each batch, and run multiple batches.

    DECLARE @x int

    SET ROWCOUNT 1000  -- cap each DELETE at 1000 rows

    SET @x = 1

    WHILE @x > 0

    BEGIN

     DELETE FROM table WHERE conditions

     SET @x = @@ROWCOUNT  -- 0 once nothing is left to delete

    END

    SET ROWCOUNT 0  -- restore the default row limit for the session

    This deletes 1000 records per batch, and @x controls the loop until no more records match the condition. If you have truncate on checkpoint set, this allows a checkpoint to occur between each batch of 1000 records, which keeps the transaction log size somewhat controlled. The deletes are still logged, but the truncation frees the inactive portion of the log so the next batch can write over the same space.

    There can still be cases where the transaction log will not truncate, but this should do the trick.
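    If the database is not set to truncate on checkpoint, a variant of the loop above can truncate the log between batches instead. This is a sketch only; the database, table, column, and cutoff date are placeholders for your own schema.

    ```sql
    -- Batched delete that truncates the inactive log between batches.
    -- 'MyDatabase', 'MyTable', and 'MyDate' are placeholder names.
    DECLARE @x int
    SET ROWCOUNT 1000            -- cap each DELETE at 1000 rows
    SET @x = 1
    WHILE @x > 0
    BEGIN
        DELETE FROM MyTable WHERE MyDate < '19990101'
        SET @x = @@ROWCOUNT      -- 0 once no qualifying rows remain
        -- Discard the inactive portion of the log; note this breaks
        -- the log-backup chain, so take a full backup when finished.
        BACKUP LOG MyDatabase WITH TRUNCATE_ONLY
    END
    SET ROWCOUNT 0               -- restore the default row limit
    ```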

  • Mitch Miller

    Right there with Babe

    Points: 769

    The option above is what we used to a) limit the size of our log, and b) keep from blocking the entire table during large deletes (we were deleting nearly 1/2 of the data in our table). For us, that turned out to be the best solution.


    --Mitch

  • cdi_h

    SSC Veteran

    Points: 270

    Is this the way it should be done?

    DECLARE @x int

    SET ROWCOUNT 1000

    SET @x = 1

    WHILE @x > 0

    BEGIN

       EXEC sp_dboption 'EJ','Trunc. Log On Chkpt','TRUE'

     DELETE FROM table WHERE conditions

     SET @x = @@ROWCOUNT

      EXEC sp_dboption 'EJ','Trunc. Log On Chkpt','FALSE'

    END

    Please let me know

    Thanks

     

  • Antares686

    SSC Guru

    Points: 125444

    Since you want to remain in truncate-on-checkpoint mode the entire time, do it like so.

    Maybe do a full backup to file beforehand for safety, in case you find an error after the fact.

    EXEC sp_dboption 'EJ','Trunc. Log On Chkpt','TRUE'

    DECLARE @x int

    SET ROWCOUNT 1000

    SET @x = 1

    WHILE @x > 0

    BEGIN

     DELETE FROM table WHERE conditions

     SET @x = @@ROWCOUNT

    END

    SET ROWCOUNT 0

    EXEC sp_dboption 'EJ','Trunc. Log On Chkpt','FALSE'

    Then I would perform a full backup immediately to ensure you have a good copy of the backup on hand.
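    The safety backups mentioned above might look like this; the database name matches the example and the disk paths are placeholders.

    ```sql
    -- Full backup to file before the delete, in case of a mistake.
    BACKUP DATABASE EJ TO DISK = 'D:\Backup\EJ_before_delete.bak'

    -- ... run the batched delete here ...

    -- Full backup afterward, since toggling 'Trunc. Log On Chkpt'
    -- invalidates the transaction log backup chain.
    BACKUP DATABASE EJ TO DISK = 'D:\Backup\EJ_after_delete.bak'
    ```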

  • mark worrall

    Grasshopper

    Points: 13

    I had the same problem last week and tried various techniques as described above. I was given a piece of very useful advice by our DBA...

    ...if you can, remove the clustered index from any table records are being deleted from, and put it back when you're finished. When the clustered index is there, each individual delete causes the physical records to be shuffled up on disk to keep them physically contiguous, which is a huge performance overhead on mass deletes.
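    A sketch of the drop-and-recreate described above; the table, index, and key column names are placeholders, and note that rebuilding the clustered index is itself a logged, potentially expensive operation.

    ```sql
    -- Drop the clustered index before the mass delete (placeholder names).
    DROP INDEX MyTable.CIX_MyTable

    -- ... run the batched delete here ...

    -- Recreate the clustered index when the deletes are finished.
    CREATE CLUSTERED INDEX CIX_MyTable ON MyTable (MyDate)
    ```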

    Mark

    Sydney


  • Antares686

    SSC Guru

    Points: 125444

    > I had the same problem last week and tried various techniques as described above. I was given a piece of very useful advice by our DBA...
    >
    > ...if you can, remove the clustered index from any table records are being deleted from, and put it back when you're finished. When the clustered index is there, each individual delete causes the physical records to be shuffled up on disk to keep them physically contiguous, which is a huge performance overhead on mass deletes.

    Mark,

    I disagree here.

    In reality, deletes do not cause the pages to adjust in any way.

    In fact, the gaps are simply left behind, meaning the affected pages have free space in them. If you look with special commands, the data is actually still there; the slot is just marked free, unless the page is completely deallocated.

    For example, if this were the layout:

    Page 1: a, b, c, d, e

    Page 2: f, g, h, i, j

    Page 3: k, l, m, n, o

    Page 4: p, q, r, s, t

    Page 5: u, v, w, x, y

    Page 6: z

    And I issued a DELETE WHERE Alpha IN (b, c, g, i, n, o, z) OR Alpha BETWEEN p AND t, then this is what happens to the pages:

    Page 1: a, d, e (b and c are marked deleted and recoverable)

    Page 2: f, h, j (g and i are marked deleted and recoverable)

    Page 3: k, l, m (n and o are marked deleted and recoverable)

    Page 4: entire page marked recoverable (p, q, r, s, t all marked deleted and recoverable)

    Page 5: u, v, w, x, y (technically becomes Page 4, but pages are pointer-based, so the names really mean nothing)

    Page 6: entire page marked recoverable (z marked deleted and recoverable)

    In the case of indexes, deletes may leave slots and pages in the root and leaf pages marked freed, but they do not move data at all.

    Now afterwards, issue DBCC SHOWCONTIG and you will see fragmentation in the indexes and data pages.

    By issuing DBCC DBREINDEX or DBCC INDEXDEFRAG you can reclaim this lost space.
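    Concretely, the check and rebuild might look like this; the table name is a placeholder (and DBCC INDEXDEFRAG requires SQL Server 2000 or later).

    ```sql
    -- Report fragmentation on the table's data and index pages.
    DBCC SHOWCONTIG ('MyTable')

    -- Rebuild all indexes on the table to reclaim the freed slots and pages.
    DBCC DBREINDEX ('MyTable')
    ```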

    The real issue here is the logging of the transactions and the log file needing to grow periodically during a large number of deletes.

    However, that said, if auto update statistics is on, then at the end of the delete cycle(s) you may experience a slowdown if the number of deletes triggers it. You may want to turn it off during the batched deletes for an improvement, update the statistics yourself with sp_updatestats when done, then turn it back on. Also, after a large number of deletes, running DBCC UPDATEUSAGE can be helpful to performance.
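    The statistics housekeeping described above might be sketched as follows; the database name matches the earlier example and is otherwise a placeholder.

    ```sql
    -- Turn off automatic statistics updates for the duration of the deletes.
    EXEC sp_dboption 'EJ', 'auto update statistics', 'FALSE'

    -- ... run the batched delete here ...

    -- Refresh statistics manually, then re-enable the automatic updates.
    EXEC sp_updatestats
    EXEC sp_dboption 'EJ', 'auto update statistics', 'TRUE'

    -- Correct the page and row counts left inaccurate by the mass delete.
    DBCC UPDATEUSAGE ('EJ')
    ```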

  • HildaJ

    SSCarpal Tunnel

    Points: 4454

    Antares686, I appreciate this article. Many years after this post I found it, and I just applied that solution in our stage environment. Working like a charm.

    Thank you.

     
