• Alright, so with all the different ways of doing this, I decided to run some tests and see which approach actually performed best.

    I set out to try 1 million (raise pinky finger to lips) rows of data. However, my data generator failed after roughly 127k rows, so I used those rows to test all three of the presented approaches. I had 10,000 groups in 127,971 rows of data. When I ran the statements, a couple of things jumped out at me. First, the execution plan for Nitin's Alter / Update / Select was coming in at a drastically lower estimated cost than the other plans. I made a small change to Nigel's query and ran it against Nitin's.
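    For anyone who wants to reproduce this kind of comparison, the setup looked roughly like the sketch below. The real table and queries aren't shown here, so dbo.TestData, GroupID, and Amount are placeholder names.

        -- Rough sketch of the test setup (placeholder names; the real
        -- table and data generator are not shown here).
        CREATE TABLE dbo.TestData (
            RowID   int IDENTITY(1,1) PRIMARY KEY,
            GroupID int NOT NULL,   -- ~10,000 distinct groups
            Amount  int NOT NULL
        );
        GO

        SET STATISTICS TIME ON;  -- print CPU and elapsed time per statement
        SET STATISTICS IO ON;    -- print logical reads per table

        -- Run the candidate statements in one window with "Include Actual
        -- Execution Plan" enabled; SSMS then reports each query's cost as
        -- a percentage of the whole batch, which is where the 20% / 80%
        -- split below comes from.
        SELECT GroupID, COUNT(*) AS RowsPerGroup
        FROM   dbo.TestData
        GROUP  BY GroupID;

        SET STATISTICS TIME OFF;
        SET STATISTICS IO OFF;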

    Nitin's plan came in at 20% of the total batch cost (12% for the Alter / Update, 8% for the Select), while Nigel's plan accounted for the remaining 80%. I also noticed that Nitin's plan, for 127k records, was finishing in right around 4.5 seconds, while the modified version of Nigel's query was coming in at just over half a second. (Atif's plan was performing close to Nitin's, so I stopped looking there.)
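    For context, the two shapes being compared look roughly like this. This is only an illustrative stand-in (a max-per-group query against the placeholder table above), not the actual code from Nitin or Nigel.

        -- Shape 1: Nitin's Alter / Update / Select pattern (illustrative
        -- stand-in only; the real queries differ).
        ALTER TABLE dbo.TestData ADD MaxAmount int NULL;
        GO  -- batch separator so the new column is visible to the UPDATE

        UPDATE t
        SET    t.MaxAmount = x.MaxAmount
        FROM   dbo.TestData AS t
        JOIN  (SELECT GroupID, MAX(Amount) AS MaxAmount
               FROM   dbo.TestData
               GROUP  BY GroupID) AS x
              ON x.GroupID = t.GroupID;

        SELECT GroupID, Amount, MaxAmount
        FROM   dbo.TestData;
        GO

        -- Shape 2: a single-statement version (roughly the shape of the
        -- modified Nigel query): the same result computed in one SELECT.
        SELECT t.GroupID, t.Amount,
               (SELECT MAX(Amount)
                FROM   dbo.TestData
                WHERE  GroupID = t.GroupID) AS MaxAmount
        FROM   dbo.TestData AS t;

    Both shapes return the same rows; the interesting part is how differently the optimizer's estimated cost and the actual elapsed time can rate them, which is exactly the discrepancy I saw.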

    While I'm no expert on performance, I usually check things like this out when I'm not sure which solution to choose. My question is this: in large sets of data, what matters more, execution time or execution cost?