• Christian Buettner (9/4/2008)


    Nice question, but not "totally" deterministic in my opinion.

    What If I ensure consistency through the transaction isolation level already?

    SET TRANSACTION ISOLATION LEVEL REPEATABLE READ

    BEGIN TRANSACTION

    IF NOT EXISTS... INSERT...

    COMMIT

    In this case, both options seem to be equal (from a performance perspective) if there is
    an index on Name. If no index is defined, the first one performs better in my tests.

    Input on this is highly welcome.

    Hi Chris,

    Setting a higher transaction isolation level will indeed prevent problems from a different connection inserting the row at just the wrong moment. REPEATABLE READ is not quite enough, though: it only locks rows that were actually read, and a row that does not exist yet cannot be locked, so a concurrent insert (a phantom) can still slip in between the check and the INSERT. You need SERIALIZABLE, which takes key-range locks on the gap where the row would go and blocks exactly that.
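    To make that concrete, here is a sketch of Christian's pattern under SERIALIZABLE. The table and column names (dbo.Names, Name) are my invention for illustration; substitute your own:

    -- Hypothetical table: dbo.Names (Name varchar(50) PRIMARY KEY)
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;

    BEGIN TRANSACTION;

    -- Under SERIALIZABLE, the existence check takes a key-range lock on
    -- the range where 'Sergiy' would go, so no other connection can
    -- insert that value before we COMMIT.
    IF NOT EXISTS (SELECT * FROM dbo.Names WHERE Name = 'Sergiy')
        INSERT INTO dbo.Names (Name) VALUES ('Sergiy');

    COMMIT;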

    I still consider the other option the better one, because it allows the optimizer to come up with a more efficient plan. After all, the check for existence can only be done by finding the clustered index page where the row should be, and that same page has to be found for actually inserting it, so there's no need to traverse the B-tree twice. I don't know if any current version of SQL Server really works this way, but even if it doesn't, a future version might implement such an optimization. That alone is sufficient reason for me to prefer the second option.
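    I'm assuming the "other option" here is the single-statement form, which would look roughly like this (again with a hypothetical dbo.Names table; names are mine, not from the original question):

    -- Hypothetical table: dbo.Names (Name varchar(50) PRIMARY KEY)
    -- One statement: the existence check and the insert can, in
    -- principle, share a single seek to the page where the row belongs.
    INSERT INTO dbo.Names (Name)
    SELECT 'Sergiy'
    WHERE NOT EXISTS (SELECT * FROM dbo.Names WHERE Name = 'Sergiy');

    Note that even this form is not automatically race-free under the default READ COMMITTED level; a unique constraint on Name (or a higher isolation level) is still what ultimately guarantees no duplicates.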

    Great question, Sergiy!

    Best, Hugo

    PS: Of course, with SQL Server 2008, neither of these options is the "best" anymore; this is a typical scenario for the new MERGE statement. 🙂
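    For completeness, the MERGE equivalent would look something like this (same hypothetical dbo.Names table as in my examples above is assumed):

    -- SQL Server 2008 MERGE version (hypothetical dbo.Names table)
    MERGE dbo.Names AS tgt
    USING (SELECT 'Sergiy' AS Name) AS src
        ON tgt.Name = src.Name
    WHEN NOT MATCHED THEN
        INSERT (Name) VALUES (src.Name);

    One caveat: MERGE by itself is not immune to the same race either; under concurrency it still needs a HOLDLOCK/SERIALIZABLE hint on the target (or a unique constraint as backstop) to be safe.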


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/