• Actually, rereading the thread, I see that Peso did post the size of his dataset: 2,000,000 rows.

    It would be interesting to determine the cutoff point at which the ROW_NUMBER trick is most efficient and where the two-bite approach becomes the better choice.
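    For anyone who hasn't seen it, here's a rough sketch of the ROW_NUMBER paging trick being discussed. The table, columns, and filter are made up for illustration (the thread's actual query isn't shown); the COUNT(*) OVER () is the part that lets one pass return both the page and the total row count, so the expensive filtering query doesn't have to run twice:

    ```sql
    -- Illustrative sketch only: dbo.Orders, OrderDate, and OrderID
    -- are placeholder names, not from the original thread.
    DECLARE @PageNumber int = 3, @PageSize int = 50;

    WITH Numbered AS (
        SELECT  o.*,
                ROW_NUMBER() OVER (ORDER BY o.OrderDate, o.OrderID) AS rn,
                COUNT(*)     OVER ()                                AS TotalRows
        FROM    dbo.Orders AS o
        WHERE   o.OrderDate >= '2007-01-01'   -- expensive filter runs once
    )
    SELECT  *
    FROM    Numbered
    WHERE   rn BETWEEN (@PageNumber - 1) * @PageSize + 1
                   AND  @PageNumber      * @PageSize
    ORDER BY rn;
    ```

    The two-bite approach would instead run one query for the total count and a second for the page itself, which is exactly the double execution being avoided here.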

    For my purposes, the biggest cost was the query itself. The underlying tables hold hundreds of millions of records, but the data is well filtered down by the time we get to the record set to page. Avoiding a second execution of that query was a big win.

    I deliberately left time statistics out of my comparisons because they can vary widely from run to run and machine to machine (unless, as a few posters noted, they demonstrate a clear performance difference).

    SQL guy and Houston Magician