I created a TallyNumbers table with a single column (an ascending clustered primary key) and 2,000,001 records, ranging from 0 to 2,000,000.
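For reference, the setup can be sketched like this (the table and column names match my test; the population method via a cross join of system views is just one convenient option):

```sql
-- Sketch of the test setup: a single-column tally table with an
-- ascending clustered primary key, holding 0 through 2,000,000.
CREATE TABLE dbo.TallyNumbers
(
    number INT NOT NULL,
    CONSTRAINT PK_TallyNumbers PRIMARY KEY CLUSTERED (number ASC)
);

-- Populate with 2,000,001 rows (0 .. 2,000,000); any row source
-- that yields enough rows will do here.
INSERT INTO dbo.TallyNumbers (number)
SELECT TOP (2000001)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1
FROM   sys.all_columns AS a
CROSS JOIN sys.all_columns AS b;
```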
First I tried the suggestion from your article, which returned the correct records:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DECLARE @startRow INT;
SET @startRow = 50;

;WITH cols
AS
(
    SELECT  number,
            ROW_NUMBER() OVER (ORDER BY number) AS seq,
            ROW_NUMBER() OVER (ORDER BY number DESC) AS totrows
    FROM    tallynumbers
)
SELECT  number,
        totrows + seq - 1 AS TotRows
FROM    cols
WHERE   seq BETWEEN @startRow AND @startRow + 49
ORDER BY seq;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
It gave these results:
(50 row(s) affected)
Table 'TallyNumbers'. Scan count 1, logical reads 3225, physical reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0.
SQL Server Execution Times:
CPU time = 6107 ms, elapsed time = 6059 ms.
Then, to isolate the cost of the second ROW_NUMBER(), I set totrows to the constant 1 (this no longer returns the correct total; it only measures the overhead):
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DECLARE @startRow INT;
SET @startRow = 50;

;WITH cols
AS
(
    SELECT  number,
            ROW_NUMBER() OVER (ORDER BY number) AS seq,
            1 AS totrows
    FROM    tallynumbers
)
SELECT  number,
        totrows + seq - 1 AS TotRows
FROM    cols
WHERE   seq BETWEEN @startRow AND @startRow + 49
ORDER BY seq;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
This gave these results:
(50 row(s) affected)
Table 'TallyNumbers'. Scan count 1, logical reads 3, physical reads 0.
SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.
So for a large table, your suggestion has to scan and number all the records anyway: the descending ROW_NUMBER() can only be assigned after every row has been read.
Modifying your code to this:
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

DECLARE @startRow INT;
SET @startRow = 50;

;WITH cols
AS
(
    SELECT  number,
            ROW_NUMBER() OVER (ORDER BY number) AS seq
    FROM    tallynumbers
)
SELECT  number,
        (SELECT COUNT(*) FROM cols) AS TotRows
FROM    cols
WHERE   seq BETWEEN @startRow AND @startRow + 49
ORDER BY seq;

SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
gave these results:
(50 row(s) affected)
Table 'TallyNumbers'. Scan count 4, logical reads 3250, physical reads 0.
SQL Server Execution Times:
CPU time = 360 ms, elapsed time = 191 ms.
So it's not all about reads; it's about "customer satisfaction". My last suggestion runs in less than 0.2 seconds with 3250 reads, while your suggestion runs in 6.1 seconds with 3225 reads, on my single-column table of 2,000,001 records.
The reads only went up by a measly 0.78%, but the time went down by a staggering 96.8%!
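The percentages work out like this (a quick arithmetic check using the measured figures above):

```sql
-- Read increase: (3250 - 3225) / 3225  ->  about 0.78 %
-- Time decrease: (6059 - 191) / 6059   ->  about 96.8 %
SELECT  (3250 - 3225) * 100.0 / 3225 AS ReadIncreasePct,
        (6059 - 191)  * 100.0 / 6059 AS TimeDecreasePct;
```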
Running it all again with @StartRow = 5000 gave almost the same time for your solution, because all the records still have to be numbered, and the same number of reads:
(50 row(s) affected)
Table 'TallyNumbers'. Scan count 1, logical reads 3225, physical reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0.
SQL Server Execution Times:
CPU time = 6017 ms, elapsed time = 5995 ms.
My last suggestion gave these results (for @StartRow = 5000):
(50 row(s) affected)
Table 'TallyNumbers'. Scan count 4, logical reads 3260, physical reads 0.
SQL Server Execution Times:
CPU time = 235 ms, elapsed time = 122 ms.
That is only a minimal increase in reads: about 1.1% more (3260 vs 3225), while the time dropped by 98.0%.
N 56°04'39.16"
E 12°55'05.25"