• Numerous people have noted that the "holy grail" pattern does not work when there are duplicate values in the sort column: the ascending and descending ROW_NUMBER() calls are free to break ties differently, so the two numbers assigned to a given row no longer sum to the total row count plus one.

    A simple workaround for this is:

    With CTE as
    (
        select *, row_number() OVER (ORDER BY nonunique ASC) seq
        from MyTable
    ),
    Reverse as
    (
        select *, row_number() OVER (ORDER BY seq DESC) tot
        from CTE
    )
    select fields, seq + tot - 1 as totalRecs
    from Reverse

    In my tests, this did not add any noticeable overhead over the standard "Holy Grail" pattern.
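    For completeness, here is a minimal sketch of how paging might sit on top of that workaround; the page size of 25 and the @PageNumber parameter are illustrative assumptions, not part of the original pattern:

    With CTE as
    (
        select *, row_number() OVER (ORDER BY nonunique ASC) seq
        from MyTable
    ),
    Reverse as
    (
        select *, row_number() OVER (ORDER BY seq DESC) tot
        from CTE
    )
    select fields, seq + tot - 1 as totalRecs
    from Reverse
    -- return a single page; @PageNumber and the page size of 25 are assumed values
    where seq between (@PageNumber - 1) * 25 + 1 and @PageNumber * 25
    order by seq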

    I find this works well for small sets, and contrary to another poster's statement, I disagree that all SQL should be optimized for 1 million+ records. You should use the SQL and indexes that are correct for your workload, and you should have a good understanding of that workload: expected growth, access patterns, read/write ratio, etc.

    If I have a system with little data but a high transaction count, wicked-fast processors, and a slow disk subsystem, this pattern is ideal. For large amounts of data, this pattern has been clearly shown to break down and would not necessarily be the correct choice.

    I'd be interested in seeing performance differences on real-world examples of complex queries. Paging is typically used on search results, and search queries rarely, if ever, read from a single table, unlike most of the examples cited both for and against this pattern.
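
    As an illustration of what such a test might exercise, here is a hedged sketch of the same seq/tot workaround applied to a multi-table search query; the Products/Categories tables and the @SearchTerm parameter are hypothetical, not drawn from any benchmark in this thread:

    With Results as
    (
        select p.ProductID, p.Name, c.CategoryName,
               row_number() OVER (ORDER BY p.Name ASC) seq
        from Products p
        join Categories c on c.CategoryID = p.CategoryID
        where p.Name like '%' + @SearchTerm + '%'
    ),
    Reverse as
    (
        select *, row_number() OVER (ORDER BY seq DESC) tot
        from Results
    )
    select ProductID, Name, CategoryName, seq + tot - 1 as totalRecs
    from Reverse
    where seq between 1 and 25   -- first page of 25 rows
    order by seq

    The join and the LIKE filter are where the real cost would show up, which is exactly the kind of workload the single-table examples don't exercise.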