best isolation level to use in a long delete insert query

  • hi all, i use a single stored procedure to update many tables in a sql server 2014 database. using the default transaction isolation level we get random performance issues.

    maybe it would be better to use the read uncommitted isolation level?

    what happens, in both cases (read committed, read uncommitted), if the sp is called at the same time passing the same @key parameter?

    this is a sample to show how the real stored procedure works:

    CREATE PROCEDURE SP_Test
        @Key int,
        @Values1 Values1 READONLY,
        @Values2 Values2 READONLY,
        @Values3 Values3 READONLY
    AS
    BEGIN
        BEGIN TRANSACTION;

        DELETE FROM TABLE1 WHERE [KEY] = @Key;
        DELETE FROM TABLE2 WHERE [KEY] = @Key;
        DELETE FROM TABLE3 WHERE [KEY] = @Key;

        INSERT INTO TABLE1 ([KEY], FIELD)
        SELECT @Key, FIELD FROM @Values1;

        INSERT INTO TABLE2 ([KEY], FIELD)
        SELECT @Key, FIELD FROM @Values2;

        INSERT INTO TABLE3 ([KEY], FIELD)
        SELECT @Key, FIELD FROM @Values3;

        COMMIT TRANSACTION;
    END
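    The concurrency part of the question has a concrete answer: with the procedure as written, two simultaneous calls passing the same @key can interleave their deletes and inserts at any isolation level. One way to prevent that is to serialise callers per key with an application lock. A sketch using the sample's names, trimmed to one table for brevity (SP_Test_Locked and the lock-name format are made up for illustration):

    ```sql
    CREATE PROCEDURE SP_Test_Locked
        @Key int,
        @Values1 Values1 READONLY
    AS
    BEGIN
        SET XACT_ABORT ON;  -- any error rolls the whole transaction back
        BEGIN TRANSACTION;

        -- Serialise callers passing the same @Key; different keys run in parallel.
        DECLARE @LockName nvarchar(255) = N'SP_Test_' + CAST(@Key AS nvarchar(12));
        EXEC sp_getapplock @Resource = @LockName,
                           @LockMode = 'Exclusive',
                           @LockOwner = 'Transaction';

        DELETE FROM TABLE1 WHERE [KEY] = @Key;

        INSERT INTO TABLE1 ([KEY], FIELD)
        SELECT @Key, FIELD FROM @Values1;

        COMMIT TRANSACTION;  -- the applock is released with the transaction
    END
    ```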

  • Why do a delete followed by an insert, instead of just doing an update? What is the nature of your random performance issues? Only use READ UNCOMMITTED if you can tolerate wrong data caused by dirty reads and so on. It's equivalent to using the NOLOCK hint - if you do a search on that, you'll find plenty of material that explains the pitfalls.

    John
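    For illustration, a minimal repro of the dirty read John describes (table and values invented for the example): a second session reading with NOLOCK sees data that the first session later rolls back.

    ```sql
    -- Session 1: modify inside a transaction, but don't commit yet
    BEGIN TRANSACTION;
    UPDATE TABLE1 SET FIELD = 'new value' WHERE [KEY] = 1;

    -- Session 2: READ UNCOMMITTED / NOLOCK reads past session 1's exclusive lock
    SELECT FIELD FROM TABLE1 WITH (NOLOCK) WHERE [KEY] = 1;
    -- sees 'new value', even though it was never committed

    -- Session 1: roll back; session 2 has acted on a row that never existed
    ROLLBACK TRANSACTION;
    ```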

  • usually the stored procedure takes from 1 to 5 seconds to execute, but apparently for no reason the execution time sometimes grows to 1 minute.

    delete/insert is 2 operations; i think the other way is more expensive:

    1) insert the rows from the parameter that are not in the table

    2) update the rows that are in both the table and the parameter

    3) delete the rows that are in the table but not in the parameter

    i'll try to set the isolation level to read uncommitted.
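    For comparison, the three steps above could be sketched like this for one table. The sample's KEY/FIELD columns alone don't identify a row within a key, so this assumes a hypothetical ID column in both the table and the table type:

    ```sql
    -- 1) insert rows from the parameter that are not in the table
    INSERT INTO TABLE1 ([KEY], ID, FIELD)
    SELECT @Key, v.ID, v.FIELD
    FROM   @Values1 v
    WHERE  NOT EXISTS (SELECT 1 FROM TABLE1 t
                       WHERE t.[KEY] = @Key AND t.ID = v.ID);

    -- 2) update rows that are in both the table and the parameter
    UPDATE t
    SET    t.FIELD = v.FIELD
    FROM   TABLE1 t
    JOIN   @Values1 v ON v.ID = t.ID
    WHERE  t.[KEY] = @Key
      AND  t.FIELD <> v.FIELD;  -- skip rows that haven't changed

    -- 3) delete rows that are in the table but not in the parameter
    DELETE t
    FROM   TABLE1 t
    WHERE  t.[KEY] = @Key
      AND  NOT EXISTS (SELECT 1 FROM @Values1 v WHERE v.ID = t.ID);
    ```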

  • The isolation level's irrelevant. Isolation level affects what locks read queries take and how long they're held for. Since the only selects are part of the inserts and read from table variables, it'll make no difference whatsoever what isolation level you choose (other than serialisable, which will take additional locks).

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • fabriziodb (9/23/2015)


    delete insert is 2 operation, i think other way is most expensive:

    1) insert the rows from parameter that are not in the table

    2) update the rows that are both in table and parameter

    3) delete the rows that are in table but not in parameter

    I think "it depends".

    Are most of your calls to the Sproc:

    Data exists (i.e. will be both DELETE and INSERT)

    or mostly Data is new (i.e. will be only Insert)

    Your case (3) to delete stale data is not actually represented in your code example above. (But I do understand that that is required to "synchronise" data). How often does scenario 3 happen?

    EDIT: Re-reading your code I can now see that you might be deleting X rows matching @key and inserting back Y rows in their place, perhaps not all of which match on all clustered index or primary key columns.

    If you do UPDATE and the "width" of the data does not change, and some/many of the columns do not change (in particular columns which are present in non-clustered indexes, and assuming that you do have some such indexes) then update will be MUCH faster than DELETE / INSERT.

    You could use MERGE which may well give you the best possible performance.

    i'll try to set the isolation to read uncommitted.

    That would absolutely be my last choice, unless data quality / integrity was not important (I don't think I've ever been in a situation where it wasn't, but that won't be true for every situation)
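    A MERGE version of the same synchronisation, again assuming a hypothetical ID column to match rows on, since the sample's KEY/FIELD columns alone don't identify a row within a key:

    ```sql
    -- HOLDLOCK guards against the well-known MERGE race between concurrent callers
    MERGE TABLE1 WITH (HOLDLOCK) AS t
    USING (SELECT @Key AS [KEY], ID, FIELD FROM @Values1) AS v
       ON t.[KEY] = v.[KEY] AND t.ID = v.ID
    WHEN MATCHED AND t.FIELD <> v.FIELD THEN
        UPDATE SET FIELD = v.FIELD
    WHEN NOT MATCHED BY TARGET THEN
        INSERT ([KEY], ID, FIELD) VALUES (v.[KEY], v.ID, v.FIELD)
    WHEN NOT MATCHED BY SOURCE AND t.[KEY] = @Key THEN
        DELETE;
    ```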

  • is the isolation level irrelevant also in the delete operation? i thought that before deleting it needs a select to identify the records to delete.

  • Yup. It's a delete, it has to take exclusive locks and hold them to the end of the transaction.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
  • I think it depends a bit on how complex your code is (I assume your example is trivial compared to the actual code), but I don't know for sure as we never ever do that and I'm sure Gail will clarify, but if you did:

    DELETE D

    FROM MyTable AS D

    WHERE D.ID IN (SELECT SomeID FROM SomeOtherTable WITH (NOLOCK) WHERE SomeCondition)

    then I assume that READ UNCOMMITTED would apply to SomeOtherTable (whether it explicitly has a NOLOCK hint, or whether the READ UNCOMMITTED isolation level is implicit at that point).

  • Kristen-173977 (9/23/2015)


    fabriziodb (9/23/2015)


    delete insert is 2 operation, i think other way is most expensive:

    1) insert the rows from parameter that are not in the table

    2) update the rows that are both in table and parameter

    3) delete the rows that are in table but not in parameter

    I think "it depends".

    Are most of your calls to the Sproc:

    Data exists (i.e. will be both DELETE and INSERT)

    or mostly Data is new (i.e. will be only Insert)

    Your case (3) to delete stale data is not actually represented in your code example above. (But I do understand that that is required to "synchronise" data). How often does scenario 3 happen?

    EDIT: Re-reading your code I can now see that you might be deleting X rows matching @key and inserting back Y rows in their place, perhaps not all of which match on all clustered index or primary key columns.

    If you do UPDATE and the "width" of the data does not change, and some/many of the columns do not change (in particular columns which are present in non-clustered indexes, and assuming that you do have some such indexes) then update will be MUCH faster than DELETE / INSERT.

    You could use MERGE which may well give you the best possible performance.

    i'll try to set the isolation to read uncommitted.

    That would absolutely be my last choice, unless data quality / integrity was not important (I don't think I've ever been in a situation where it wasn't, but that won't be true for every situation)

    thanks, i think in most cases the new records match the old ones (same keys, different values), so updates would occur in most cases.

    when this application was started some years ago i considered using the merge statement, but i read about many bugs; i don't know if merge is now safe and fully working.
