Deleting large number of rows from a table & a heap in a VLDB

  • Great article Nakul.

    Agree with all the comments about using smallish batch sizes to keep row level locks from escalating! Also, if the delete(s) involve hash matching, keeping to small batches minimises the chances of a collision.

    When multiple tables must be deleted from to preserve transaction integrity, I've had some success with cascading deletes but have also had some horrors when the database design isn't sufficiently normalised to make it work well.

    One point it would be worth adding to your article is how the purging gets implemented. While it can be run in dead time (typically overnight), not every database has this option nowadays. And if purging hasn't been done for ages then disk resources may just not be sufficient!

    What we've done with two of our large DBs is to use the Resource Governor to limit the impact of the purge job, and we just have it running all the time. It chugs away in the background when system resources are available and grinds to a halt when they aren't. If you don't have SQL2008 and don't have the Resource Governor then about the only option(?) you have is to keep the batch size small, the queries well optimised, and insert waits of 10 seconds or more to minimise the impact.

    Running purging jobs 24/7 has another benefit in that log files tend to stay roughly the same size whereas big overnight/weekly/monthly purging often pushes the log sizes way beyond normal growth sizes.
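
    For reference, a minimal Resource Governor setup along those lines might look something like the sketch below; the pool, group, and login names are all hypothetical, and the classifier function has to be created in master:

    USE master;
    GO
    CREATE RESOURCE POOL PurgePool WITH (MAX_CPU_PERCENT = 10);
    GO
    CREATE WORKLOAD GROUP PurgeGroup USING PurgePool;
    GO
    -- Route only the purge job's login into the throttled group
    CREATE FUNCTION dbo.fn_PurgeClassifier() RETURNS sysname
    WITH SCHEMABINDING
    AS
    BEGIN
        RETURN CASE WHEN SUSER_SNAME() = N'purge_job_login'
                    THEN N'PurgeGroup'
                    ELSE N'default' END;
    END;
    GO
    ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fn_PurgeClassifier);
    ALTER RESOURCE GOVERNOR RECONFIGURE;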

  • table locks are the killer here...

    Lock escalation happens at 5,000 rows; keep batch sizes below this:

    select 1  -- seed @@rowcount so the loop is entered

    while @@rowcount > 0

        delete top (4999) from ...

    (or something similar)
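
    A more complete, runnable version of that pattern, assuming a hypothetical dbo.AuditLog table purged on an age criterion:

    SELECT 1;  -- seed @@ROWCOUNT so the loop body runs at least once

    WHILE @@ROWCOUNT > 0
    BEGIN
        DELETE TOP (4999)
        FROM dbo.AuditLog
        WHERE LogDate < DATEADD(MONTH, -6, GETDATE());
    END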

    Also, since you're using SQL2008+, read up on "Filtered Indexes".

    Create a filtered index on your main table that contains your PK, and filters on your delete criteria. This index will be used to determine your deletion candidates, and the overhead should be minimal as the delete candidates should only be a small subset of the bulk of your data.

    You can even monitor the size/row count of the index to trigger when to start the purge (or delay the start to an off-peak time), rather than using a scheduled date/time, to keep the performance hit to a minimum.
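
    As a rough illustration of that idea (the table, column, and index names here are hypothetical):

    -- Filtered index covering only the rows that qualify for deletion
    CREATE NONCLUSTERED INDEX IX_Orders_PurgeCandidates
        ON dbo.Orders (OrderID)
        WHERE OrderStatus = 'Archived';

    -- Cheap row count of the filtered index, e.g. to decide when to kick off the purge
    SELECT SUM(p.rows) AS PurgeCandidates
    FROM sys.partitions AS p
    JOIN sys.indexes AS i
        ON i.object_id = p.object_id
       AND i.index_id = p.index_id
    WHERE i.object_id = OBJECT_ID('dbo.Orders')
      AND i.name = 'IX_Orders_PurgeCandidates';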

  • rob.lobbe-964963 (3/24/2011)


    Also, since you're using SQL2008+, read up on "Filtered Indexes".

    Create a filtered index on your main table that contains your PK, and filters on your delete criteria. This index will be used to determine your deletion candidates, and the overhead should be minimal as the delete candidates should only be a small subset of the bulk of your data.

    Good point! Sometimes the filtered index is a better approach than using a table, temporary or otherwise, to store PK values for transactions that are to be deleted.

    BTW ... How many DBAs get "completed" applications from development teams that have absolutely no database purging logic at all!?! I have no idea how they get away with this.

  • belgarion (3/24/2011)


    Great article Nakul.

    Agree with all the comments about using smallish batch sizes to keep row level locks from escalating! Also, if the delete(s) involve hash matching, keeping to small batches minimises the chances of a collision.

    When multiple tables must be deleted from to preserve transaction integrity, I've had some success with cascading deletes but have also had some horrors when the database design isn't sufficiently normalised to make it work well.

    One point it would be worth adding to your article is how the purging gets implemented. While it can be run in dead time (typically overnight), not every database has this option nowadays. And if purging hasn't been done for ages then disk resources may just not be sufficient!

    What we've done with two of our large DBs is to use the Resource Governor to limit the impact of the purge job, and we just have it running all the time. It chugs away in the background when system resources are available and grinds to a halt when they aren't. If you don't have SQL2008 and don't have the Resource Governor then about the only option(?) you have is to keep the batch size small, the queries well optimised, and insert waits of 10 seconds or more to minimise the impact.

    Running purging jobs 24/7 has another benefit in that log files tend to stay roughly the same size whereas big overnight/weekly/monthly purging often pushes the log sizes way beyond normal growth sizes.

    Wow, now this is a good idea. Thanks for sharing!

    -------------------------------------------------------------------------------------------------
    My SQL Server Blog

  • Instead of chasing the purge with frequent large deletes, partitioning the table at design time is a good option. The criteria you use in the delete (the WHERE clause) can be used as the partition key columns (a rough sketch follows).

    See my SQLSaturday presentation.
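
    For illustration only; the table, partition function/scheme, and column names below are hypothetical, and the staging table must match the source table's structure and filegroup:

    -- Partition the big table on the purge criterion (here, an integer date key)
    CREATE PARTITION FUNCTION pfPurgeDate (int)
        AS RANGE RIGHT FOR VALUES (20100101, 20110101, 20120101);

    CREATE PARTITION SCHEME psPurgeDate
        AS PARTITION pfPurgeDate ALL TO ([PRIMARY]);

    -- With dbo.SalesFact created on psPurgeDate(DateKey), an expired partition can be
    -- switched out as a metadata-only operation instead of a huge delete:
    ALTER TABLE dbo.SalesFact
        SWITCH PARTITION 1 TO dbo.SalesFact_Stage;

    TRUNCATE TABLE dbo.SalesFact_Stage;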

  • Hello!

    Thank you all for your interest and valuable feedback.

    Some of the points mentioned in the feedback are really great. I will research them and update the article as necessary.

    However, some of the suggestions (filtered indexes, using partitioned tables, etc.) are great only in hindsight. As mentioned, this is based upon a live example: the schema was already in place and no changes were allowed (at least in the case of the heap). That being said, I agree that if a purging solution is being designed from scratch, then yes, these are some of the features that absolutely should be used.

    Thanks & Regards,
    Nakul Vachhrajani.
    http://nakulvachhrajani.com

    Follow me on
    Twitter: @sqltwins

  • Thanks for the article, and the discussion, there is a lot of good information here.

  • I liked the article. I'm a bit confused by your numbers, though. You said to make note of the original data and log file sizes, and then you show the before and after file sizes once the delete was performed. There's a significant difference between the original file sizes and the before and after file sizes. I'm not sure what's happening there. Can you give me a little more explanation on those?

  • Read It (3/29/2011)


    I liked the article. I'm a bit confused by your numbers, though. You said to make note of the original data and log file sizes, and then you show the before and after file sizes once the delete was performed. There's a significant difference between the original file sizes and the before and after file sizes. I'm not sure what's happening there. Can you give me a little more explanation on those?

    Hello!

    Good to know that you liked reading my article. I can definitely help you out in understanding the difference in the file sizes. Please find the explanation below:

    Case #1 - Deleting Random Data from a table

    Initially, we generated our test data, and noted the file sizes. The data and log files came out to 2694MB and 1705MB respectively.

    Next, we generated the lookup table, and then executed the purge. The "Before" and "After" values are with respect to the Purge operation, and hence contain the space occupied by the lookup table.

    The above also applies to Case #2 - Deleting data from a heap (non-clustered table).

    The basic point I was trying to make is that the file sizes remain constant during the purge operation, and hence I have taken the file size measurements accordingly.

    Do let me know if you still have any doubts, and I will be more than happy to help you out.

    Thanks & Regards,
    Nakul Vachhrajani.
    http://nakulvachhrajani.com

    Follow me on
    Twitter: @sqltwins

  • I thought that might be the case, but didn't want to assume that. Thank you for the help.

  • First of all, great article and very insightful so thanks for taking the time to write it. I do have one question about the approach you explained for deleting large quantities of data from tables with clustered index -

    Assuming I understood correctly, you are saying that on a daily basis (or some periodicity), move the data you would like deleted into a look-up table. Then, once a week, join the look-up table to the table that data needs to be deleted from and perform delete. And subsequently truncate the lookup table or something. So my question is - in your implementation of this, have you encountered any blocks on that big table that the data needs to be deleted from during the join? What if that big table is frequently being used by other processes? I was planning to try this out and I am somewhat of a novice so I thought I'd check before breaking anything 🙂

    Also, out of curiosity, I noticed that the look-up table that you created for deleting data from a clustered-index table itself had no clustered index, simply a non-clustered index on the column "UnixDateToDelete". Was this done for a specific reason?

    Thanks again for the article!

  • APP_SQL (4/3/2011)


    First of all, great article and very insightful so thanks for taking the time to write it. I do have one question about the approach you explained for deleting large quantities of data from tables with clustered index -

    Assuming I understood correctly, you are saying that on a daily basis (or some periodicity), move the data you would like deleted into a look-up table. Then, once a week, join the look-up table to the table that data needs to be deleted from and perform delete. And subsequently truncate the lookup table or something. So my question is - in your implementation of this, have you encountered any blocks on that big table that the data needs to be deleted from during the join? What if that big table is frequently being used by other processes? I was planning to try this out and I am somewhat of a novice so I thought I'd check before breaking anything 🙂

    Also, out of curiosity, I noticed that the look-up table that you created for deleting data from a clustered-index table itself had no clustered index, simply a non-clustered index on the column "UnixDateToDelete". Was this done for a specific reason?

    Thanks again for the article!

    Hello, APP_SQL!

    Thank you for your feedback, and I am happy that you liked reading my article.

    As far as the concept goes, yes, you have understood correctly. Now, about your questions:

    Q1. Whether or not I have encountered any blocks on the tables during the periodic purge cycles?

    A1. Ours is an on-premises system, and hence we execute the purge during the weekly IT maintenance window provided to us by the customer (it's a configurable SQL job that does the purge). The window typically varies from 2 to 4 hours, and hence we have to be in and out of the system in about an hour for IT to do the rest of their maintenance. Because they happen during the maintenance window, all interfaces are down and hence we have not had any blocking issues.

    There have been cases where we had to execute the purge online, and even then we did not face any major blocking issues.

    As an alternative, you may want to partition your table, and set the lock escalation to AUTO. What this will do is ask SQL Server to escalate locks to the partition, and not to the entire table.
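
    For example (the table name here is hypothetical):

    -- With AUTO, SQL Server escalates to the partition rather than the whole table
    -- when the table is partitioned.
    ALTER TABLE dbo.BigTable SET (LOCK_ESCALATION = AUTO);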

    Q2. Why did I use a non-clustered index on the lookup table as opposed to a clustered index?

    A2. No specific reason as such. Generally speaking, you can use a clustered index on the lookup table as well - no harm at all (in fact, your deletes may be even faster). In our case, we did not want to enforce any constraints or establish any relationships with the lookup table, and hence you will see that no PK-FK has been used. If your design is such that you can allow for a relationship to exist, please go ahead and use the clustered index by creating "UnixDateToDelete" as the primary key.
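
    For illustration, a minimal batched delete driven by the lookup table could look like the following; the main table name, its UnixDate column, and the lookup table name are hypothetical, while UnixDateToDelete is the lookup column discussed above:

    SELECT 1;  -- seed @@ROWCOUNT so the loop is entered

    WHILE @@ROWCOUNT > 0
    BEGIN
        DELETE TOP (4999) t
        FROM dbo.TransactionHistory AS t
        INNER JOIN dbo.PurgeLookup AS l
            ON t.UnixDate = l.UnixDateToDelete;
    END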

    I hope that I was able to answer your questions satisfactorily. If there is anything else I can help you out with, do let me know.

    Thanks & Regards,
    Nakul Vachhrajani.
    http://nakulvachhrajani.com

    Follow me on
    Twitter: @sqltwins

  • Thanks Nakul! You have answered all my questions. Thanks for the response and once again, thanks for the article.

  • A great article and constructive discussion!

    I learned a lot of brilliant methods for deleting from very large tables, rather than just writing a WHERE clause and waiting for it to complete.

  • I really liked the lookup-table technique for deleting data from a clustered index.

    Will definitely suggest that to our dev teams going forward.

    Thank you for the article!

    __________________________________________________________________________________
    SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
    Persisting SQL Server Index-Usage Statistics with MERGE
    Turbocharge Your Database Maintenance With Service Broker: Part 2
