SQL Server 2005 Paging – The Holy Grail

Posted Tuesday, March 10, 2009 10:33 PM
SSC Veteran

Group: General Forum Members
Last Login: Tuesday, April 1, 2014 11:20 AM
Points: 263, Visits: 275
Comments posted to this topic are about the item SQL Server 2005 Paging – The Holy Grail

SQL guy and Houston Magician
Post #672980
Posted Wednesday, March 11, 2009 12:41 AM
SSC-Addicted

Group: General Forum Members
Last Login: Tuesday, July 15, 2014 1:10 AM
Points: 401, Visits: 546
This is an interesting article, which left me scratching my (SQL 2000) head over the syntax. Once I had that straight and looked up ROW_NUMBER in Books Online, I found almost exactly this example. Notwithstanding, I think this is a good article that showcases the solution well.

Well done bringing this functionality into the spotlight, and thanks for the time taken to do so.
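
For anyone else coming from SQL 2000, the Books Online pattern the article builds on looks roughly like this (table and column names here are invented for illustration, not taken from the article):

DECLARE @startRow INT
SET @startRow = 50

-- Number the rows once, then keep only the page you want
;WITH NumberedRows AS
(
    SELECT OrderID,
           ROW_NUMBER() OVER (ORDER BY OrderID) AS seq
    FROM dbo.Orders   -- hypothetical table
)
SELECT OrderID
FROM NumberedRows
WHERE seq BETWEEN @startRow AND @startRow + 49
ORDER BY seq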
Post #673018
Posted Wednesday, March 11, 2009 2:21 AM
SSCrazy

Group: General Forum Members
Last Login: Wednesday, July 23, 2014 3:35 PM
Points: 2,393, Visits: 3,399
I created a TallyNumbers table with only one column (an ascending clustered primary key) and 2,000,001 records, ranging from 0 to 2,000,000.
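(For reference, such a table can be built along these lines; this is a sketch rather than my exact script, and it assumes a cross join of sys.all_columns yields at least 2,000,001 rows.)

CREATE TABLE TallyNumbers
(
    number INT NOT NULL PRIMARY KEY CLUSTERED
)

INSERT TallyNumbers (number)
SELECT TOP (2000001)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1   -- 0 .. 2,000,000
FROM sys.all_columns a
CROSS JOIN sys.all_columns b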
First I tried your suggestion in the article, which gave me the correct records back

set statistics io on
set statistics time on

DECLARE @startRow INT ; SET @startRow = 50
;WITH cols
AS
(
    SELECT number,
           ROW_NUMBER() OVER(ORDER BY number) AS seq,
           ROW_NUMBER() OVER(ORDER BY number DESC) AS totrows
    FROM TallyNumbers
)
SELECT number, totrows + seq - 1 AS TotRows
FROM cols
WHERE seq BETWEEN @startRow AND @startRow + 49
ORDER BY seq

set statistics time off
set statistics io off

and gave me these results

(50 row(s) affected)
Table 'TallyNumbers'. Scan count 1, logical reads 3225, physical reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0.

SQL Server Execution Times:
CPU time = 6107 ms, elapsed time = 6059 ms.

Setting totrows to a constant 1

set statistics io on
set statistics time on

DECLARE @startRow INT ; SET @startRow = 50
;WITH cols
AS
(
    SELECT number,
           ROW_NUMBER() OVER(ORDER BY number) AS seq,
           1 AS totrows
    FROM TallyNumbers
)
SELECT number, totrows + seq - 1 AS TotRows
FROM cols
WHERE seq BETWEEN @startRow AND @startRow + 49
ORDER BY seq

set statistics time off
set statistics io off

gave me these results

(50 row(s) affected)
Table 'TallyNumbers'. Scan count 1, logical reads 3, physical reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.

So for a large table you will have to scan all records anyway with your suggestion.

Modifying your code to this

set statistics io on
set statistics time on

DECLARE @startRow INT ; SET @startRow = 50
;WITH cols
AS
(
    SELECT number,
           ROW_NUMBER() OVER(ORDER BY number) AS seq
    FROM TallyNumbers
)
SELECT number, (SELECT COUNT(*) FROM cols) AS TotRows
FROM cols
WHERE seq BETWEEN @startRow AND @startRow + 49
ORDER BY seq

set statistics time off
set statistics io off

gave me these results

(50 row(s) affected)
Table 'TallyNumbers'. Scan count 4, logical reads 3250, physical reads 0.

SQL Server Execution Times:
CPU time = 360 ms, elapsed time = 191 ms.

So it's not all about reads; it's about "customer satisfaction". My last suggestion runs in less than 0.2 seconds with 3,250 reads, while your suggestion runs in 6.1 seconds with 3,225 reads, on my single-column 2,000,001-record table.

The reads only went up by a measly 0.78%, but time went down by a staggering 96.8%!

Doing this all over again with @StartRow = 5000 gave almost the same time for your solution, because all records still have to be numbered. Same amount of reads too.

(50 row(s) affected)
Table 'TallyNumbers'. Scan count 1, logical reads 3225, physical reads 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0.

SQL Server Execution Times:
CPU time = 6017 ms, elapsed time = 5995 ms.

My last suggestion gave these results (for @StartRow = 5000)

(50 row(s) affected)
Table 'TallyNumbers'. Scan count 4, logical reads 3260, physical reads 0.

SQL Server Execution Times:
CPU time = 235 ms, elapsed time = 122 ms.

That is only a minimal increase in reads.

This suggestion increased the number of reads by about 1.1% and decreased the time by 98.0%.



N 56°04'39.16"
E 12°55'05.25"
Post #673044
Posted Wednesday, March 11, 2009 3:25 AM
SSCrazy

Group: General Forum Members
Last Login: Wednesday, July 23, 2014 3:35 PM
Points: 2,393, Visits: 3,399
My best attempt so far is to have a trigger on the tallynumbers table that maintains the number of records in a CountingTable.

That gave me a total of only 14 reads and 4 ms in runtime! And this is for @StartRow = 5000

set statistics io on
set statistics time on

DECLARE @startRow INT ; SET @startRow = 5000
DECLARE @items INT

-- Read the maintained row count instead of counting the base table
SELECT @items = number FROM CountingTable WHERE table_name = 'tallynumbers'

SELECT Number,
       @items AS TotRows
FROM (
    -- Keep the last 50 of the first (@startRow + 49) rows...
    SELECT TOP(50)
           Number
    FROM (
        SELECT TOP(@startRow + 49)
               Number
        FROM TallyNumbers
        ORDER BY Number
    ) AS d
    ORDER BY Number DESC
) AS q
-- ...then flip them back into ascending order
ORDER BY Number

set statistics time off
set statistics io off

Results were

(1 row(s) affected)
Table 'CountingTable'. Scan count 1, logical reads 1, physical reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 0 ms.

(50 row(s) affected)
Table 'TallyNumbers'. Scan count 1, logical reads 13, physical reads 0.

SQL Server Execution Times:
CPU time = 0 ms, elapsed time = 4 ms.

4 ms compared to 5,995 ms is a decrease of 99.9%, and 14 reads compared to 3,225 is a decrease of 99.6%.

What can this tell us?

1. Keep the total number of records stored in some management table, maintained by a trigger.
2. If possible, have the "numbering" work done by that same trigger and keep a "sequence" column in the source table if paging is done a lot of the time. Or use the "multiple ORDER BYs" principle, as in the query above. A sketch of such a trigger follows.
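
A minimal sketch of what that trigger could look like (the CountingTable layout matches my query above; the trigger itself is untested and ignores concurrency concerns):

-- Management table: one row per tracked table
CREATE TABLE CountingTable
(
    table_name sysname PRIMARY KEY,
    number     INT NOT NULL
)

-- Seed it once with the current count
INSERT CountingTable (table_name, number)
SELECT 'tallynumbers', COUNT(*) FROM TallyNumbers
GO

-- Keep the count in sync on every insert and delete
CREATE TRIGGER trg_TallyNumbers_Count
ON TallyNumbers
AFTER INSERT, DELETE
AS
BEGIN
    SET NOCOUNT ON

    UPDATE CountingTable
    SET number = number
               + (SELECT COUNT(*) FROM inserted)
               - (SELECT COUNT(*) FROM deleted)
    WHERE table_name = 'tallynumbers'
END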



N 56°04'39.16"
E 12°55'05.25"
Post #673074
Posted Wednesday, March 11, 2009 4:11 AM
SSC Veteran

Group: General Forum Members
Last Login: Saturday, February 2, 2013 8:21 AM
Points: 283, Visits: 268
About the whole row count issue: I know this isn't bulletproof, but SQL Server is already tracking the row counts of every table. You don't need to create any triggers or keep your own "tally" tables. Of course, if you're paging a view you'll still have to get the row count using one of the methods above; for tables, however, you can use the following query:

DECLARE @Object sysname
SET @Object = 'dbo.MyTable' -- Schema.ObjectName

-- Get the row count for the object
-- NOTE: Sometimes multiple rows are returned when a table has many indexes;
-- however, each index carries the same value for the row count
SELECT TOP 1 P.rows
FROM sys.partitions P
INNER JOIN sys.indexes I ON (P.object_id = I.object_id) AND (P.index_id = I.index_id)
INNER JOIN sys.objects O ON (P.object_id = O.object_id)
WHERE (
        -- O.type IN ('S', 'U')
        -- AND
        (I.type IN (0, 1)) -- heap or clustered index only
      )
  AND (
        O.name = PARSENAME(@Object, 1)
        AND O.[schema_id] = ISNULL(SCHEMA_ID(PARSENAME(@Object, 2)), O.[schema_id])
      )
ORDER BY O.name

You can also do the same in SQL 2000 by querying dbo.sysindexes, for example:
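
(Rough SQL 2000 equivalent; indid 0 is the heap and 1 the clustered index, and the count can drift slightly until statistics are updated.)

SELECT rows
FROM dbo.sysindexes
WHERE id = OBJECT_ID('dbo.MyTable')
  AND indid IN (0, 1)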



Post #673080
Posted Wednesday, March 11, 2009 4:21 AM
Forum Newbie

Group: General Forum Members
Last Login: Friday, May 7, 2010 9:36 AM
Points: 5, Visits: 12
Based on my experience with a big table, the best approach is to get the total row count in a separate call:

set statistics io on
set statistics time on

DECLARE @TotRows AS INT
DECLARE @startRow INT ; SET @startRow = 50
;WITH cols
AS
(
    SELECT table_name, column_name,
           ROW_NUMBER() OVER(ORDER BY table_name, column_name) AS seq
    FROM [INFORMATION_SCHEMA].columns
)
--select * from cols
SELECT table_name, column_name
FROM cols
WHERE seq BETWEEN @startRow AND @startRow + 49
ORDER BY seq

-- The total row count comes from a separate, cheap query
SELECT @TotRows = COUNT(*)
FROM [INFORMATION_SCHEMA].columns

--SELECT @TotRows

set statistics time off
set statistics io off

Post #673084
Posted Wednesday, March 11, 2009 5:31 AM
SSCarpal Tunnel

Group: General Forum Members
Last Login: Friday, July 25, 2014 3:19 AM
Points: 4,320, Visits: 6,113
Nice article.

I have been doing similar coding for a few years now with great success. My typical pattern gets just the primary key fields from however many tables are needed to filter and sort the results, then joins those keys back to their original tables to produce the required output, roughly like the sketch below.
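
(Sketch with invented table and column names: page over the narrow key set first, then join back for the wide columns.)

DECLARE @startRow INT
SET @startRow = 50

;WITH Keys AS
(
    -- Step 1: number only the keys that satisfy the filter
    SELECT o.OrderID,
           ROW_NUMBER() OVER (ORDER BY o.OrderDate) AS seq
    FROM dbo.Orders o
    WHERE o.Status = 'Open'              -- stand-in for the real filter
)
-- Step 2: join the 50 keys of the requested page back to the base table
SELECT o.*
FROM Keys k
INNER JOIN dbo.Orders o ON o.OrderID = k.OrderID
WHERE k.seq BETWEEN @startRow AND @startRow + 49
ORDER BY k.seq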



Best,

Kevin G. Boles
SQL Server Consultant
SQL MVP 2007-2012
TheSQLGuru at GMail
Post #673106
Posted Wednesday, March 11, 2009 6:00 AM
SSC Rookie

Group: General Forum Members
Last Login: Tuesday, January 3, 2012 9:35 AM
Points: 46, Visits: 58
I published an FAQ some years ago which, granted, was a bit immature at the time, but I still use the principle today in a more developed and mature form. It could certainly be refined to make use of some of the advances in SQL Server since then.

FAQ

The approach basically uses a mix of the database and DAL / BLL layers to process the paging set.

In the first instance it hits the database to return a list of all the primary keys for the result set. This is a pretty small and efficient dataset if you use an int as your primary key and a hash table or similar.

The DAL/BLL layer then processes this, pulling out the total number of records and the current page we want. It then pulls the required page records by primary key, which is very efficient.

To make this even more efficient, I cache the initial list of primary keys, so when the user requests the next page all I have to do is look up the relevant primary keys in the cached list and pull the records I want by primary key. This means the original complex query is only fired once, regardless of the number of pages, and any future request on the same dataset only pulls records by known primary keys; the per-page database call then looks something like the sketch below.
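
(Key values and column names here are invented; in practice the DAL builds the key list from its cache for the requested page.)

SELECT OrderID, CustomerName, OrderDate
FROM dbo.Orders
WHERE OrderID IN (1012, 1015, 1019, 1023, 1031)   -- the cached keys for this page
ORDER BY OrderID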



Dan Bayley
affordable website design UK
Free Google and Yahoo sitemap generator
Post #673138
Posted Wednesday, March 11, 2009 6:15 AM
Grasshopper

Group: General Forum Members
Last Login: Thursday, October 18, 2012 1:47 AM
Points: 16, Visits: 72
Nice solution, and very nicely written too. Looking forward to reading more from you.
Post #673152
Posted Wednesday, March 11, 2009 6:31 AM
Forum Newbie

Group: General Forum Members
Last Login: Monday, July 14, 2014 5:10 AM
Points: 2, Visits: 81
Well... we use the same approach, but I do not fully agree with it. You say you execute the first complex query only once and then page on it. What if new data appears in the DB? I guess the set would become stale, so this should be coupled with some data-retention policy to make the set of ids expire after a certain time. The second problem is that retrieving the first pages is very cheap, while the cost peaks when you reach the last page: retrieving the last page with a query is almost as expensive as returning the whole set of ids, because the server must skip the first n records. The majority of users don't even bother to go beyond the first few pages, which means the complex query you executed is mostly unused. In our case a query can return tens of thousands of ids, or even millions, but the user is mostly interested in the first page. So when you have large sets of data, getting all the ids is mostly useless.

Another thing to note is that retrieving the count of records for a very complex query costs as much as getting the list of ids. So if the user is not interested in the exact count (which could be 100 or 10 million), one could retrieve only the first thousand ids (depending on the needs), page through them, and report that the query returned more than 1,000 hits; the user is rarely interested in more than a few tens or hundreds of records anyway. This could also be coupled with a retention policy for further improvement. I believe this is the best solution when a query could potentially return a large number of records (more than 5,000, say). The capped fetch could look like the sketch below.
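
(Sketch with invented names: ask for one row more than the cap; if 1,001 rows come back, display "more than 1,000 hits" instead of an exact count.)

SELECT TOP (1001) OrderID
FROM dbo.Orders
WHERE Status = 'Open'   -- stand-in for the complex filter
ORDER BY OrderID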

I have another amendment. Let's say the user, in some special case, wants to go beyond the 50-100 pages that you retrieved. You could tell him that his query retrieved more than 5,000 records and let him navigate freely to the pages beyond the first 50-100, retrieving those records from the DB. This adds some complexity to the BLL, but it could work.
Post #673168