Faster way to delete?

  • I am deleting 1.2 million rows from a table that contains 2.4 million rows. I used a DELETE statement in QA with a WHERE clause covering 3 particular years. The year, along with the month, makes up the clustered index.

    The delete has been running for almost 3 hours.

    Why is this taking so long and is there any way to speed it up?

    I have the select into/bulk copy option set to true.

    Next time I will break it up into smaller time periods to delete.

    Thanks for any help.

    Randy

  • the short answer is "Logging"

    try to delete in "small" chunks, like 5000 rows at a time (see the sketch below)


    * Noel
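
    A rough sketch of the chunked approach (table and column names here are made up; on SQL 7/2000, SET ROWCOUNT caps how many rows each DELETE touches):

    set rowcount 5000

    while 1 = 1
    begin
        delete from saleshistory where salesyear in (1998, 1999, 2000)

        -- @@rowcount is 0 once nothing is left to delete
        if @@rowcount = 0 break
    end

    -- reset, or every later statement stays capped at 5000 rows
    set rowcount 0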

  • You could also try to move the records you want to keep into a temp table, truncate the real table, and then move the records back from the temp table.

    -- copy the rows you want to keep
    select * into #temp from tablea where cola = 2

    -- truncate is minimally logged, unlike delete
    truncate table tablea

    -- move the kept rows back
    insert into tablea select * from #temp

    drop table #temp

    Brian

  • That works, but you have to remember to script out all the indexes, constraints, etc., then put them back when you're done.

    Andy

    http://www.sqlservercentral.com/columnists/awarren/

  • As noeld says, the key is minimising the size of each transaction (and therefore the amount of data that must be written to your transaction log before the rows can actually be deleted).

    SQL 2000 allows you to use either SET ROWCOUNT or the TOP clause in an associated SELECT statement to restrict the number of rows deleted at one time (see the sketch at the end of this post).

    For more information, see the SET ROWCOUNT and DELETE topics in Books Online.

    Finally, it's probably worth giving a little consideration to the logging mode you are using when deleting large amounts of data - if you're not using simple logging, remember that your transaction log will probably grow to half the size of your main database if you're deleting half the data in the database!

    --

    Si Chan

    Database Administrator
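
    For instance, a TOP-based variant on SQL 2000 (table and key column names are hypothetical; saleskey stands in for a unique key, and you repeat until @@rowcount is 0):

    delete from saleshistory
    where saleskey in
        (select top 5000 saleskey
         from saleshistory
         where salesyear in (1998, 1999, 2000))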

  • It depends on how often your log backups run. If you're in simple mode and you carve the delete up into 5 chunks, you need enough log space for 1 chunk. If you're not in simple mode, you need at least enough space for 1 chunk, and probably more, unless you run a log backup as soon as each chunk completes (see below).

    Andy

    http://www.sqlservercentral.com/columnists/awarren/
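
    A sketch of that pattern (database name and backup path are placeholders):

    -- run after each chunk when you're not in simple mode
    backup log mydb to disk = 'd:\backups\mydb_log.trn'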

  • Thanks for all the responses.

    I'm running SQL 7.

    The delete rolled back after 3 hours, and the net result was that zero rows were deleted.

    Therefore, I deleted the 3 years of data 3 months at a time (about 100,000 rows per batch). Those transactions took about 8 - 11 minutes each, and they grew my transaction log to 7GB.

    I backed up the data and log files and left autoshrink on overnight, and now my .MDF has shrunk but my log is still 7GB. I ran DBCC SHRINKFILE on the log but it hasn't budged yet. I know a lot has been written on shrinking transaction logs in the past, and I will refer back to it when I get a chance (see the sketch below).

    I was really hoping there was an unlogged delete available to a sysadmin that deletes large numbers of rows at warp speed.

    Thanks again.
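
    For reference, a sketch of the usual approach on SQL 7 (assuming a database named mydb whose logical log file name is mydb_log - check sysfiles for the real name):

    -- discard the inactive log records without taking a log backup
    backup log mydb with truncate_only

    -- then shrink the file; the target size is in MB
    dbcc shrinkfile (mydb_log, 500)

    On SQL 7 the shrink may not take effect until the active part of the log wraps back toward the start of the file, which is why SHRINKFILE sometimes appears to do nothing at first.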

  • I have also done this with BCP: BCP out the rows I wish to keep (using a view or a SELECT statement), truncate the table, and then BCP the rows back in.
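
    Roughly, from the command line (server, database, table, and file names are all placeholders; -c is character mode, -T uses a trusted connection):

    bcp "select * from mydb..tablea where salesyear not in (1998, 1999, 2000)" queryout keep.dat -c -S myserver -T

    then truncate table tablea in Query Analyzer, and reload:

    bcp mydb..tablea in keep.dat -c -S myserver -T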

  • I may try the BCP solution, as I have several more test tables that need to be reduced.

    I am also thinking of scripting my indexes and dropping them, deleting the rows all at once, and then shrinking and reindexing - something like the sketch below. The table I deleted from was heavily indexed, and SQL was probably going crazy updating pointers.
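
    With hypothetical index names, that would look like this (keep the clustered index, since the WHERE clause matches it, and drop/recreate the nonclustered ones):

    drop index tablea.ix_tablea_customer
    drop index tablea.ix_tablea_product

    delete from tablea where salesyear in (1998, 1999, 2000)

    create index ix_tablea_customer on tablea (customer_id)
    create index ix_tablea_product on tablea (product_id)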

  • What about the fragmentation in the table you're truncating? If it is very fragmented, will the delete take longer? I think it would. If you have a table that's as large as this one but not fragmented at all, the delete should run faster (see the DBCC SHOWCONTIG sketch below).


    -Isaiah
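
    A minimal way to check fragmentation (table name is a placeholder; DBCC SHOWCONTIG on SQL 7 takes an object ID):

    declare @id int
    set @id = object_id('tablea')

    -- look at scan density and extent fragmentation in the output
    dbcc showcontig (@id)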
