Delete doing Clustered index scan instead of Seek

  • I have a system from which I need/want to purge data on a regular basis. This needs to happen quickly, as it is an active OLTP application and I need to make sure I am not clogging up the system and/or creating deadlocks. The issue is that once my delete statement gets above about 14,000 rows I am getting a clustered index scan instead of a seek, and the execution time goes from under 30 seconds to over 90 seconds. I am wondering if there is a way to get the query optimizer to seek instead of scan. Here is the code:

    Create Table #tests
    (
        test_id Int,
        comment_id Int
    )

    Create Clustered Index IX_tests On #tests (test_id, comment_id)

    If @start_date Is Null
    Begin
        Select @start_date = Min(event_date),
               @end_date = DateAdd(HH, 12, Min(event_date))
        From local_event_facts
    End

    Select suser_sname() as run_by, @start_date as start, @end_date as end_date

    Insert Into #tests (test_id, comment_id)
    Select T.test_id, T.comment_id
    From tests T
    Join variables V On T.var_id = V.var_id
    Join prod_units PU On V.pu_id = PU.pu_id
    Join prod_lines PL On PU.pl_id = PL.pl_id
    Where T.result_on Between @start_date And @end_date
      And PL.pl_desc Like 'Paper Machine%'
    Order By T.test_id, T.comment_id

    Begin Transaction del_test_hist

    --Select H.*
    Delete H
    From test_history H
    Join #tests T On H.test_id = T.test_id

    Select @@ROWCOUNT as test_history

    Commit Transaction del_test_hist

    Begin Transaction del_tests

    --Select T.*
    Delete T
    From tests T
    Join #tests TT On T.test_id = TT.test_id
    --Where T.result_on Between @start_date And @end_date

    Select @@ROWCOUNT as tests

    Commit Transaction del_tests

    Begin Transaction del_comments

    Update C
    Set comment = '',
        comment_text = Null,
        ShouldDelete = 1
    From comments C
    Join #tests T On C.comment_id = T.comment_id
    Where T.comment_id Is Not Null

    Select @@ROWCOUNT as test_comments

    Commit Transaction del_comments

    RETURN

    test_id is the clustered PK on tests and test_history. I am using the temp table so I only have to hit the tests table once. The tests table has about 20,709,950 rows and test_history has about 12,476,259 rows. I am working on a SQL 2000 box in my test environment; my production server is SQL 7.0. Any help to get the processing time down will be much appreciated. I have tried everything I know.
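
    (Editor's note: on SQL 7.0/2000 there is no hint that forces a seek outright; FORCESEEK only arrived in SQL Server 2008. What you can do is pin the index choice with a table hint and check whether the plan improves. A sketch against the delete above; index id 1 is the clustered index, and note this only fixes which index is used, not the access method:)

    -- Hedged sketch: pin the clustered index (index id 1) on test_history.
    -- The optimizer may still scan it if it estimates too many qualifying
    -- rows; the hint controls index choice, not seek vs. scan.
    Delete H
    From test_history H With (Index(1))
    Join #tests T On H.test_id = T.test_id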

  • Although I do not have time to analyze your code in depth, I have used the following technique for purging:

    While Exists (...stuff you want to purge...)
    Begin
        Set Rowcount 100         -- experiment with this value

        Delete ...stuff you want to purge...

        Waitfor Delay '00:00:03' -- 3 seconds, experiment with this value too
    End

    In general you don't want big transactions while purging. They aren't necessary for integrity (if the server fails halfway through, so what, just start again), and big delete transactions hold exclusive locks for their entire duration.

    The above code only deletes 100 rows at a time, and lets other processes do their thing in between batches of 100. Applied to the test_history delete from the question, it might look like the sketch below.
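
    (Editor's note: a minimal sketch of that technique using the question's table names; Set Rowcount caps how many rows each Delete touches on SQL 7.0/2000, and the batch size and delay are starting points to experiment with:)

    -- Purge test_history in small batches so each delete stays cheap and
    -- exclusive locks are released between batches.
    Set Rowcount 5000                -- batch size: experiment with this value

    While 1 = 1
    Begin
        Delete H
        From test_history H
        Join #tests T On H.test_id = T.test_id

        If @@ROWCOUNT = 0 Break      -- nothing left to purge

        Waitfor Delay '00:00:03'     -- let other sessions in between batches
    End

    Set Rowcount 0                   -- restore the default

    Each batch commits on its own (no explicit transaction), so a failure mid-purge just means rerunning the script; rows already deleted stay deleted.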
