Log File Full

Am trying to delete records from a 13 GB table, as it has grown too big. Every time I try to delete a large number of records I get the error "The log file for database 'tempdb' is full....." I have backed up and truncated the transaction log as suggested, and still no go. I just want to delete the records! All suggestions appreciated.

    Denis

If you need to delete everything from the table, then do a truncate table; it is minimally logged, so it shouldn't fill up the log. If you wish to keep some of the records, I would recommend doing select <desired fields> into <new table> from <13gb tablename> where <expression to keep records you want>

    then truncate table <13gb tablename>

then do an insert into <13gb tablename> (<fields>) select <fields> from <new table>

    The insert into will be logged, but if it is a minimal dataset it shouldn't be a problem.

    If it is a big insert, then you may want to bcp your data out, truncate the table and bcp the data back in.
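
    A sketch of that sequence, with made-up names (BigTable standing in for the 13 GB table, KeepRows for the work table, and an InsertDate cutoff standing in for whatever identifies the rows to keep):

    --Copy the rows you want to keep; select into is minimally logged
    --under the simple and bulk-logged recovery models
    select *
    into KeepRows
    from BigTable
    where InsertDate >= dateadd(d, -90, getdate())

    --Empty the big table (fails if other tables reference it via foreign keys)
    truncate table BigTable

    --Put the kept rows back; this step is fully logged
    insert into BigTable
    select * from KeepRows

    drop table KeepRows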

You are indicating that the tempdb log is full. Are you doing something to fill it?

    Issuing a large SELECT to identify the records to delete?

    If so, possibly consider a loop: SELECT TOP 1000 (???), then delete, until rows returned = 0.
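
    A minimal sketch of that loop, with made-up table and column names (the longer script in the next reply builds out the same idea with explicit transactions):

    set rowcount 1000                     --cap each delete at 1000 rows

    declare @rc int
    set @rc = 1000

    while @rc = 1000
    begin
        delete MyBigTable
        where InsertDate < dateadd(d, -90, getdate())   --criteria for rows to purge
        select @rc = @@rowcount
    end

    set rowcount 0                        --restore the default row limit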

KlK, MCSE

  • You need to do your deletes in smaller batches. Here's some information about it and a script....

    First - gotta give credit where credit is due.

This explanation and script are from another tech site (www.tek-tips.com - MS SQL Server-Programming forum FAQ) and written by Terry Broadbent.

It is often necessary to perform mass updates, inserts, or deletes on a table or tables. If the action involves thousands to millions of rows, performance can bog down and the transaction log can grow very large, consuming much available disk space. In fact, updating millions of rows in one query may not even complete because the transaction log fills all available disk space and can no longer grow. Another problem occurs when log growth cannot keep up with the space needed for the update. SQL will often error indicating the log is full when in fact there is plenty of disk space.

    It is important to understand why this happens. SQL Server must keep all transactions in the transaction log until they are committed. This is necessary for rollback to occur in the event of an error. When thousands or millions of rows are updated in one transaction, all the information required for rollback is retained in the transaction log until the transaction commits or rolls back. Storing millions of rows in the log can cause it to grow large.

    One way to overcome this problem and greatly improve performance of mass update queries is to perform the updates, inserts or deletes in smaller batches. Each batch should be enclosed in a transaction and the transaction committed after the specified number of rows have been processed. This is the same technique used by the DTS bulk insert task when the BatchSize property is set. (BTW, I highly recommend setting the BatchSize property when importing very large files into SQL Server tables.)

    Here are samples of updates, deletes and inserts in batch transactions. In each example, the batch size is set at 50K. Another value may be optimal for different queries.

--Set UpdFlag in MyTable to 0 where KeyCol matches
    --primary key in ControlTable and UpdFlag not = 0.
    --Set rowcount to 50000 to limit number of updates
    --performed in each batch to 50K rows.
    Set rowcount 50000

    --Declare variable for row count
    Declare @rc int
    Set @rc=50000

    While @rc=50000
    Begin
        Begin Transaction

        --Use tablockx and holdlock to obtain and hold
        --an immediate exclusive table lock. This usually
        --speeds the update because only one lock is needed.
        Update MyTable With (tablockx, holdlock)
        Set UpdFlag = 0
        From MyTable mt
        Join ControlTable ct
            On mt.KeyCol=ct.PK
        --Add criteria to avoid updating rows that
        --were updated in previous pass
        Where mt.UpdFlag <> 0

        --Get number of rows updated
        --Process will continue until less than 50000
        Select @rc=@@rowcount

        --Commit the transaction
        Commit
    End

    -----------------------------------------------------

--Delete rows older than 90 days
    --Set rowcount to 50000 to limit number of deletes per batch
    Set rowcount 50000

    --Declare variable for row count
    Declare @rc int
    Set @rc=50000

    While @rc=50000
    Begin
        Begin Transaction

        --Use tablockx and holdlock to obtain and hold
        --an immediate exclusive table lock. This usually
        --speeds the delete because only one lock is needed.
        Delete MyTable With (tablockx, holdlock)
        Where InsertDate < dateadd(d,-90,getdate())

        --Get number of rows deleted
        --Process will continue until less than 50000
        Select @rc=@@rowcount

        --Commit the transaction
        Commit
    End

-----------------------------------------------------

    --Insert rows into MyTable from ImportTable
    --where key value doesn't already exist.
    --Set rowcount to 50000 to limit number of inserts per batch
    Set rowcount 50000

    --Declare variable for row count
    Declare @rc int
    Set @rc=50000

    While @rc=50000
    Begin
        Begin Transaction

        --Use tablockx and holdlock to obtain and hold
        --an immediate exclusive table lock. This usually
        --speeds the insert because only one lock is needed.
        Insert MyTable With (tablockx, holdlock) (KeyCol, Col1, Col2, Col3)
        Select i.ID, i.ColA, i.ColB, i.ColC
        From ImportTable i
        Left Join MyTable m
            On i.ID=m.KeyCol
        Where m.KeyCol Is Null

        --Get number of rows inserted
        --Process will continue until less than 50000
        Select @rc=@@rowcount

        --Commit the transaction
        Commit
    End

    --When finished, restore the default row limit
    Set rowcount 0

Breaking the job down into reasonably small chunks is the key, so that each transaction can be committed. When deleting records from our activity log on a monthly basis, I get my start and end markers, put them in a WHILE loop, and delete 250 rows at a time with a 5-second delay at the bottom of the loop before I do it again. Transaction log size is no problem, and I also don't hog the system. If I need to kill the delete or update, then only the last 250 updates/deletes are rolled back, not all of them.
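
    A sketch of that loop, assuming a hypothetical ActivityLog table and made-up marker values:

    declare @startID int, @endID int
    select @startID = 1, @endID = 500000   --start and end markers (made up here)

    set rowcount 250                       --delete at most 250 rows per pass

    declare @rc int
    set @rc = 250

    while @rc = 250
    begin
        delete ActivityLog
        where LogID between @startID and @endID

        select @rc = @@rowcount

        waitfor delay '00:00:05'           --5-second pause so other work gets in
    end

    set rowcount 0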

Use the TRUNCATE TABLE command; it will remove the records with only minimal logging.

Truncate table is a good suggestion. Another thing to think about: if your tempdb is filling up, you might need to raise the increment by which it grows.
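
    For example (a sketch; 'templog' is the default logical name of the tempdb log file, but check yours with sp_helpfile first):

    --List the logical file names and current sizes for tempdb
    use tempdb
    exec sp_helpfile

    --Raise the growth increment of the tempdb log file
    alter database tempdb
    modify file (name = templog, filegrowth = 100MB)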

    HTH

    Ray Higdon MCSE, MCDBA, CCNA


You can try to put the database in simple recovery mode while you are deleting the records. Be sure that there is no other activity while you are doing this.
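
    A sketch of that switch ('MyDB' and the backup path are placeholders):

    --Simple recovery truncates the log on checkpoint, keeping it small
    alter database MyDB set recovery simple

    --...run the deletes (ideally in batches) here...

    --Switch back, then take a full backup to restart the log backup chain
    alter database MyDB set recovery full
    backup database MyDB to disk = 'C:\Backups\MyDB.bak'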
