I agree with Jack Corbett's post: you need to look at the locking impact. Many years ago a team I worked with came to the opposite conclusion because of the extra locks we encountered with the update-first, insert-second approach. That was SQL 7, so I'm sure the analysis is no longer valid, but it's easy to check in Profiler (be sure to add the extra Profiler columns ObjectID, ObjectID2, IntegerData2, and Type). Such an analysis might well support the author's claim, but it definitely needs to be considered.
I don't like the READUNCOMMITTED hint. There is no TRY/CATCH used here, so if two connections attempt to upsert the same new row at approximately the same time, the first will succeed and the second will abort on the failed insert. You either need to CATCH that failed insert, or take steps to ensure the second connection is blocked from working on the same row the first connection is working on.
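For the "block the second connection" option, something along these lines should do it (just a sketch; dbo.Widget, @ID, and @Value are made-up names for illustration):

```sql
-- Update-first upsert. UPDLOCK + HOLDLOCK (serializable) on the UPDATE
-- holds a key-range lock, so a second connection trying to upsert the
-- same key blocks until we commit, instead of racing us to the INSERT.
BEGIN TRAN;

UPDATE dbo.Widget WITH (UPDLOCK, HOLDLOCK)
SET    Value = @Value
WHERE  ID = @ID;

IF @@ROWCOUNT = 0
    INSERT dbo.Widget (ID, Value) VALUES (@ID, @Value);

COMMIT TRAN;
```

The second connection either sees the row after the first one commits (and takes the UPDATE path) or fails its insert only if you leave the hints off, which is the whole point of adding them.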
In the old days before TRY/CATCH, a robust solution would have required wrapping the upsert code in a transaction. It seems like a classic case that calls for one, right? Multiple sequential SQL statements that form a single unit of work. Now, with TRY/CATCH, it might be better to attempt the INSERT first, as chrismj's post points out, and I believe that may eliminate the need for the explicit transaction? I haven't fully absorbed the impacts of TRY/CATCH, and old habits die hard... I'll have to play around with this. I also wonder whether there is a cost to falling into the CATCH block that needs to be considered. I've never read about this or how to measure it; maybe a stopwatch would suffice.
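Here's roughly what the insert-first version with TRY/CATCH would look like (same made-up dbo.Widget table as above; the error numbers are the standard duplicate-key errors):

```sql
-- Insert-first upsert: try the INSERT, and treat a duplicate-key
-- failure as "row already exists, so UPDATE it instead".
BEGIN TRY
    INSERT dbo.Widget (ID, Value) VALUES (@ID, @Value);
END TRY
BEGIN CATCH
    -- 2627 = primary key violation; 2601 = duplicate key on a unique index
    IF ERROR_NUMBER() IN (2627, 2601)
        UPDATE dbo.Widget SET Value = @Value WHERE ID = @ID;
    ELSE
    BEGIN
        -- Anything else is unexpected; re-raise it for the caller.
        DECLARE @msg nvarchar(2048);
        SET @msg = ERROR_MESSAGE();
        RAISERROR(@msg, 16, 1);
    END
END CATCH;
```

Note the CATCH path still only needs one statement, which is why the explicit transaction may be unnecessary here; whether falling into CATCH is cheaper than the extra locking in the update-first version is exactly the thing I'd want to measure.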
MERGE should render all this moot; I wish it were already here.
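For what it's worth, the MERGE version coming in SQL 2008 should look roughly like this (same made-up table; note it reportedly still wants HOLDLOCK to be fully race-free):

```sql
-- Single-statement upsert via MERGE; HOLDLOCK closes the same race
-- window the explicit-transaction versions above have to guard against.
MERGE dbo.Widget WITH (HOLDLOCK) AS tgt
USING (SELECT @ID AS ID, @Value AS Value) AS src
    ON tgt.ID = src.ID
WHEN MATCHED THEN
    UPDATE SET Value = src.Value
WHEN NOT MATCHED THEN
    INSERT (ID, Value) VALUES (src.ID, src.Value);
```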