Delete statement hangs

  • wzline - Monday, September 10, 2018 4:45 PM

    Jeff Moden - Monday, September 10, 2018 6:06 AM

    wzline - Sunday, September 9, 2018 10:28 PM

    Jeff Moden - Sunday, September 9, 2018 10:54 AM

    wzline - Saturday, September 8, 2018 3:41 PM

    Jeff Moden - Saturday, September 8, 2018 3:15 PM

    wzline - Saturday, September 8, 2018 3:00 PM

    Restored the database and can now see the code used for the delete:

           DELETE dbo.<table_name1> WITH (READPAST)
            FROM dbo.<table_name1> IPA
                JOIN dbo.[table_name2] AU WITH (NOLOCK) ON IPA.AuditID = AU.AuditID
                JOIN @RowsToBeDeleted TI ON AU.AccountID = TI.Item

    Why are you joining to table_name2?  @RowsToBeDeleted is a table variable that contains the AuditIDs to delete, correct?

    Also, you restarted the delete job after most of the rows you wanted to delete had already been deleted. That's left gaping holes in your table_name1 table, and the stats on that table are likely a complete train wreck. If it were me, I'd rebuild the Clustered Index on table_name1 (I'm assuming that AuditID is the Clustered Index on that table), rebuild the content of @RowsToBeDeleted, and then try again without the join to table_name2; I currently see absolutely no reason for that join in any of this.

    Also, since you're in SINGLE USER mode, change the WITH (READPAST) to WITH (TABLOCK).
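
    For illustration, a minimal sketch of that rebuild-then-delete approach. The index name is hypothetical, and it assumes @RowsToBeDeleted has been repopulated with the AuditIDs themselves:

        -- Rebuild the clustered index to close up the holes left by the earlier
        -- deletes and to refresh the stats (the index name is made up).
        ALTER INDEX PK_table_name1 ON dbo.table_name1 REBUILD;

        -- Repopulated with the AuditIDs to delete, per the advice above.
        DECLARE @RowsToBeDeleted TABLE (Item int NOT NULL PRIMARY KEY);

        -- Delete straight against table_name1 with no join to table_name2;
        -- TABLOCK is safe here because nothing else is using the table.
        DELETE IPA
        FROM dbo.table_name1 IPA WITH (TABLOCK)
        JOIN @RowsToBeDeleted TI
          ON IPA.AuditID = TI.Item;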

    Hi Jeff,

    Great point on table_name2. I will carefully go through the code provided by the vendor during the day.

    Do you recommend Single User mode? As the application server will be stopped, I don't think it's necessary, and Single User mode makes troubleshooting harder.

    If the only thing using the table is that one application and that application has been stopped, then the SINGLE USER mode is unnecessary.

    I'll also suggest that a batch size of 20,000 rows is serious underkill.

    Is this the largest table in your database or do you have others of this size?
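
    As a hedged illustration of what a bigger batch might look like (the batch size, table, and column names below are placeholders, not figures from this thread):

        -- Repopulated with the AuditIDs to delete.
        DECLARE @RowsToBeDeleted TABLE (Item int NOT NULL PRIMARY KEY);

        DECLARE @BatchSize int = 100000;  -- much larger than 20,000

        -- Delete in batches until nothing is left to delete.
        WHILE 1 = 1
        BEGIN
            DELETE TOP (@BatchSize) IPA
            FROM dbo.table_name1 IPA
            JOIN @RowsToBeDeleted TI
              ON IPA.AuditID = TI.Item;

            IF @@ROWCOUNT = 0 BREAK;
        END;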

    Thanks Jeff.
    This morning I got time to review the stored procedures line by line. So far, what I've noticed is that the indexes on one of the tables are very fragmented.
    Also, "PRINT" is used in the SP, so the output is buffered and delayed, which is misleading.
    As the code is provided by the vendor, I will rebuild the indexes in the test environment and re-run the SP; I might ask the vendor to change the SP to drop the indexes and rebuild them after the delete (sketched below). I will also mention that a batch size of 20,000 rows is underkill.
    I will keep this post updated.
    Thanks again.
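
    A minimal sketch of the drop-and-rebuild pattern being considered; the index and column names are invented for illustration:

        -- Simplest fix for the fragmentation: rebuild everything on the table.
        ALTER INDEX ALL ON dbo.table_name1 REBUILD;

        -- Or, around the big delete: drop a nonclustered index first...
        DROP INDEX IX_table_name1_SomeColumn ON dbo.table_name1;

        -- ...run the vendor's delete procedure here...

        -- ...then recreate the index afterwards.
        CREATE NONCLUSTERED INDEX IX_table_name1_SomeColumn
            ON dbo.table_name1 (SomeColumn);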

    Just in case you don't know, you can use RAISERROR instead of PRINT.  It'll print every line up to about 500 lines and then it will go into a "wait for 100 lines to accumulate and then print all 100 lines" mode.

    RAISERROR('some message here', 0, 0) WITH NOWAIT;

    The "0,0" thing means "No Severity" and a second "0" just as a place holder).

    You can also use a variable if you need to...

    RAISERROR('%s', 0, 0, @SomeVariable) WITH NOWAIT;

    ... where the "%s" is a "string placeholder" for @SomeVariable.
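
    RAISERROR also supports an integer placeholder, "%d", which saves converting a counter to a string first. A minimal sketch of a per-batch progress message (the variable name is made up):

        DECLARE @RowsDeleted int = 0;

        -- ...inside the delete loop, immediately after each DELETE:
        SET @RowsDeleted = @RowsDeleted + @@ROWCOUNT;
        RAISERROR('Deleted %d rows so far...', 0, 0, @RowsDeleted) WITH NOWAIT;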

    Hi Jeff,

    Thanks for the reminder. Yes, I realised that RAISERROR buffers lines after 500 lines. It is indeed annoying. I will ask the vendor either to reduce the number of lines or to generate them more frequently so I can follow the progress.

    I never realised this until I had this problem. Some people say other tools might not have this limit, like LINQPad. Besides SSMS, do you have any other tool to recommend?

    I don't know for sure, but a command line run via SQLCMD might do the trick.  Of course, writing to a table that you can read from at any time would help and would act as a log as well.
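
    A minimal sketch of that table-based logging idea (the table and column names are hypothetical):

        CREATE TABLE dbo.DeleteProgressLog
        (
            LogID       int          IDENTITY(1,1) PRIMARY KEY,
            LogTime     datetime2(3) NOT NULL DEFAULT SYSDATETIME(),
            RowsDeleted int          NOT NULL
        );

        DECLARE @RowsDeleted int = 0;  -- running counter maintained by the loop

        -- ...inside the delete loop, after each batch:
        INSERT dbo.DeleteProgressLog (RowsDeleted) VALUES (@RowsDeleted);

        -- Progress can then be watched from any other session:
        SELECT * FROM dbo.DeleteProgressLog ORDER BY LogID;

    Note that this works best when each batch is committed separately; if the whole delete runs in a single transaction that rolls back, the log rows roll back with it.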

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden - Monday, September 10, 2018 8:01 PM

    wzline - Monday, September 10, 2018 4:45 PM


    Yes, SQLCMD is another way to avoid the RAISERROR buffer problem.

    Further investigation found that some FKs on the child tables don't have indexes created. I have sent an email to the vendor and am waiting for feedback.
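
    One way to spot those, sketched here for single-column foreign keys only (multi-column FKs would need more care), is to look for FK columns that are not the leading key column of any index on the child table:

        SELECT  fk.name                                              AS foreign_key,
                OBJECT_NAME(fkc.parent_object_id)                    AS child_table,
                COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS fk_column
        FROM    sys.foreign_keys fk
        JOIN    sys.foreign_key_columns fkc
          ON    fkc.constraint_object_id = fk.object_id
        WHERE   NOT EXISTS (SELECT 1
                            FROM   sys.index_columns ic
                            WHERE  ic.object_id   = fkc.parent_object_id
                              AND  ic.column_id   = fkc.parent_column_id
                              AND  ic.key_ordinal = 1);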
