Unusually Expensive Deletes

  • Hi all,

    I have a scenario on a few of our servers that I can't quite explain, and I'm hoping someone can help me with it. We have a Dynamics CRM system from which we routinely need to delete single records. The problem is that deleting a single record causes an enormous number of read operations that I can't quite rationalise. For example, in SQL Server Profiler we can see something along the lines of the following come in during the delete:

    DELETE FROM tblCase WHERE CaseId = [guid]
    DELETE FROM tblPermissions WHERE ObjectId = [guid]
    DELETE FROM tblCaseHistory WHERE CaseId = [guid]

    All of these tables can be quite large - anywhere from 1 million to 30 million rows. However, each query is SARGable, the indexes are highly (but not perfectly) selective and not fragmented, and the statistics are of high quality.
    Nevertheless, the query results in many millions of read operations as reported by Profiler.

    Can anyone tell me what could account for the reads in a well-functioning system? The only thing I can think of is that SQL Server is doing some work I'm not aware of to check referential integrity in some table not mentioned directly in the query.

    Thanks in advance for any tips

    Simon

  • When you do a delete (same principle applies to inserts and updates as well), you’re actually doing separate deletes for every index on each of the affected tables.

    If any of the affected tables has a large number of indexes, that could easily account for a higher than expected read count.
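
    A quick way to sanity-check that - a sketch using the table names from the original post - is to count the indexes each delete has to maintain:

    SELECT t.name AS table_name, COUNT(*) AS index_count
    FROM sys.indexes AS i
    JOIN sys.tables AS t ON t.object_id = i.object_id
    WHERE t.name IN ('tblCase', 'tblPermissions', 'tblCaseHistory')
      AND i.index_id > 0 -- index_id 0 is the heap entry, not an index
    GROUP BY t.name;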

    The other thing to consider is foreign key constraints. SQL Server has to read each referencing table to ensure that the delete isn't going to cause a DRI violation (there's a sketch below for finding unindexed FK columns). Depending on the number of FKs, that could also account for the described behavior.

    I would suspect that you’re dealing with a combination of the two...
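
    For the FK side, something like this sketch (simplified - it only checks whether a referencing column leads *some* index, and effectively assumes single-column keys) will list FK columns with no supporting index:

    SELECT fk.name AS foreign_key,
           OBJECT_NAME(fkc.parent_object_id) AS referencing_table,
           c.name AS referencing_column
    FROM sys.foreign_keys AS fk
    JOIN sys.foreign_key_columns AS fkc
         ON fkc.constraint_object_id = fk.object_id
    JOIN sys.columns AS c
         ON c.object_id = fkc.parent_object_id
        AND c.column_id = fkc.parent_column_id
    WHERE NOT EXISTS (
          SELECT 1 FROM sys.index_columns AS ic
          WHERE ic.object_id = fkc.parent_object_id
            AND ic.column_id = fkc.parent_column_id
            AND ic.key_ordinal = 1 -- the FK column should lead an index
    );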

  • Hi Jason

    Yeah - I pretty much convinced myself last night that it can only be because of foreign keys not being indexed. It's the only thing I can think of that makes sense.
    The issue I was having was that Profiler doesn't actually tell you where the read cost was incurred - the trace will show DELETE FROM Cases with 7 million reads, but it won't tell you that essentially all of those reads came via an assortment of tables that Cases is related to.

    Anyway - long story short - once I convinced myself that this was the only remaining possibility, I managed to find a way to identify all the missing indexes on the relevant FK columns. In case this helps anyone else, you can take two approaches:

    In Profiler, enable the Showplan All event so you can see the execution plans of the problematic deletes

    OR - and for the record, I am *not* recommending this on a production system* -

    Use Management Studio to:
    - Generate a fake DELETE command with a gibberish ID that can't delete anything
    - Wrap the whole thing in a BEGIN TRAN/ROLLBACK block
    - Run the query with "Include Actual Execution Plan" enabled
    The plan will show you which tables the cost was incurred in (there's a minimal sketch below). If you're lucky, SQL Server will even tell you which indexes are missing in the related tables, so you can alternate between running the command and creating the indexes to watch the improvement.
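
    For example, a minimal sketch of the idea (the all-zeros GUID is assumed to match nothing in the table):

    BEGIN TRAN;

    DELETE FROM tblCase
    WHERE CaseId = '00000000-0000-0000-0000-000000000000';

    ROLLBACK TRAN;
    -- Even though no row is touched, the actual plan still shows every
    -- related table the delete had to check.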

    Hope that helps
    S
    * Just to reiterate - I am not recommending you run actual deletes on a production system! If you try it - be prepared to live with the consequences!

  • There’s no reason to use Profiler for this.

    Simply press “CTRL+L” in SSMS to get the estimated execution plan, or “CTRL+M” to include the actual plan when the query is executed.

    Alternatively, you can use SET SHOWPLAN_XML ON (for the estimated plan) or SET STATISTICS XML ON (for the actual plan)... if you prefer T-SQL scripting over using SSMS functionality.

    Note: You DO NOT need to actually execute your query to get its estimated plan... This feature really comes in handy when you want to know how SQL Server is going to execute a given query BEFORE you actually execute the query.
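
    For example, a small sketch using the table name from the original post (note that SET SHOWPLAN_XML must be the only statement in its batch):

    SET SHOWPLAN_XML ON;
    GO
    -- Compiled but NOT executed; the estimated plan comes back as XML:
    DELETE FROM tblCase WHERE CaseId = '00000000-0000-0000-0000-000000000000';
    GO
    SET SHOWPLAN_XML OFF;
    GO

    SET STATISTICS XML ON;
    GO
    -- This one IS executed (the GUID is assumed to match nothing), and the
    -- actual plan is returned alongside the results:
    DELETE FROM tblCase WHERE CaseId = '00000000-0000-0000-0000-000000000000';
    GO
    SET STATISTICS XML OFF;
    GO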

    Just in case - prior to a particular version of CRM (2013 or 2015, I think), all foreign keys were defined as NOT FOR REPLICATION and so would not be used in queries. When doing deletes this could be quite bad as well.
    Also, many of the FKs do not have indexes on them, which again is quite bad.
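
    You can check for that in the catalog - a hedged sketch, relying on the fact that NOT FOR REPLICATION constraints are marked untrusted:

    SELECT fk.name,
           OBJECT_NAME(fk.parent_object_id) AS referencing_table,
           fk.is_not_for_replication,
           fk.is_not_trusted,
           fk.is_disabled
    FROM sys.foreign_keys AS fk
    WHERE fk.is_not_for_replication = 1
       OR fk.is_not_trusted = 1
       OR fk.is_disabled = 1;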

    If the records you are deleting are activities then links to/from activitypointer and activityparty will most likely also kill you.

    (1-30 million rows is quite small - some of my MS CRM tables have 500 million rows - a 2.8 TB CRM database)

  • sh856531 - Saturday, November 25, 2017 1:21 PM

    It may also depend on the data types on each side of the equals sign. I say that because we, unfortunately and by necessity, store GUIDs as varchar(36) strings.
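
    A small illustration of the point, using a hypothetical temp table rather than the CRM schema (uniqueidentifier outranks varchar in data type precedence, so the column side gets converted):

    CREATE TABLE #Cases (CaseId varchar(36) PRIMARY KEY);
    DECLARE @id uniqueidentifier = NEWID();

    -- The varchar column is implicitly converted on every row: index scan.
    SELECT CaseId FROM #Cases WHERE CaseId = @id;

    -- Matching the column's declared type keeps the predicate SARGable: index seek.
    SELECT CaseId FROM #Cases WHERE CaseId = CONVERT(varchar(36), @id);

    DROP TABLE #Cases;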

    The excessive reads probably have to do with triggers, indexes, and foreign key constraints. It's also possible that the number of reads is reasonable, but the delete is just running into a lot of lock contention when attempting to write. I've never had any experience with this specific product, but it wouldn't surprise me if there were cursor(s) and additional delete operation(s) embedded inside the trigger(s) of the table you're attempting to delete from.
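
    If you want to rule the triggers in or out, a quick sketch (assuming the dbo schema for the table from the original post):

    SELECT tr.name AS trigger_name,
           OBJECT_DEFINITION(tr.object_id) AS trigger_body
    FROM sys.triggers AS tr
    WHERE tr.parent_id = OBJECT_ID('dbo.tblCase');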

    If there is a stored procedure that normally performs the delete from the application, then I'd recommend executing that procedure instead of attempting ad-hoc deletes, so you know you're following all the required steps in the proper sequence; otherwise you may end up corrupting the logical consistency of the tables. And do it after hours, when there are no other users active.

    As a side note: I've never been impressed by the database design of mainstream ISV applications like PeopleSoft, Microsoft CRM, Master Data Services, or SharePoint. This sort of thing comes with the territory.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho
