• well... we use the same approach, but I don't fully agree with it. You say you execute the first complex query only once and then page on it. What if new data appears in the DB? The cached set of ids would be stale, so this should be coupled with a data retention policy that makes the set of ids expire after a certain time. The second problem is that retrieving the first pages is very cheap in terms of cost; the price peaks when you reach the last page. Retrieving the last page with a query is almost as expensive as returning the whole set of ids, because the server must skip the first n records. The majority of users don't even bother to go beyond the first few pages, which means the complex query you executed goes mostly unused. In our case a query can return tens of thousands of ids, or even millions, but the user is mostly interested in the first page. So when you have large sets of data, getting all the ids is mostly useless.
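
    A minimal sketch of that idea, assuming an in-memory cache: cap the id set instead of materialising the full result, and expire it after a fixed TTL so it cannot go stale forever. The names run_complex_query, CACHE_TTL and MAX_IDS are hypothetical, not from any particular library.

    ```python
    import time

    CACHE_TTL = 15 * 60   # retention policy: cached id sets expire after 15 minutes
    MAX_IDS = 1000        # cap the id set instead of fetching every matching id

    _id_cache = {}        # query_key -> (timestamp, [ids])

    def get_page(query_key, run_complex_query, page, page_size=25):
        """Return one page of ids, re-running the complex query only when the cache is stale."""
        entry = _id_cache.get(query_key)
        if entry is None or time.time() - entry[0] > CACHE_TTL:
            ids = run_complex_query(limit=MAX_IDS)   # fetch at most MAX_IDS ids
            _id_cache[query_key] = (time.time(), ids)
        else:
            ids = entry[1]
        start = page * page_size
        return ids[start:start + page_size]
    ```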

    Another thing to note is that retrieving the count of records for a very complex query costs as much as getting the list of ids. So if the user is not interested in knowing the exact count (which could be 100 or 10 million), one could retrieve the first thousand ids (depending on the needs), page through them, and say the query resulted in more than 1000 hits. The user is never interested in more than a few tens or hundreds of records anyway. This could also be coupled with a retention policy for further improvement. I believe this is the best solution when a query could potentially return a large number of records (more than 5000, let's say).
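
    One cheap way to get the "more than 1000 hits" message without an expensive COUNT(*) is to ask for one id beyond the cap and treat the extra row as an overflow marker. This is only a sketch; the products table, price column and sqlite3 backend are made up for illustration.

    ```python
    import sqlite3

    ID_CAP = 1000

    def fetch_capped_ids(conn: sqlite3.Connection, min_price: float):
        """Return (ids, has_more): at most ID_CAP ids plus a flag meaning 'more than ID_CAP hits'."""
        cur = conn.execute(
            "SELECT id FROM products WHERE price >= ? ORDER BY id LIMIT ?",
            (min_price, ID_CAP + 1),   # ask for one extra row as the overflow marker
        )
        ids = [row[0] for row in cur.fetchall()]
        has_more = len(ids) > ID_CAP
        return ids[:ID_CAP], has_more
    ```

    The UI can then display "more than 1000 results" instead of an exact figure, and never pays for a full count or a full id list.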

    I have another amendment. Let's say the user, in a very special case, wants to go beyond the 50-100 pages that you retrieved. You could tell them that their query retrieved more than 5000 records and let them navigate freely to the pages beyond the first 50-100, retrieving those records from the DB on demand. This adds some complexity to the BAL, but it could work.
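
    A rough sketch of that fallback, reusing the hypothetical names from the earlier snippets: serve the first pages from the capped id set, and only fall back to an OFFSET/LIMIT query when the requested page lies beyond it.

    ```python
    def get_page_with_fallback(conn, cached_ids, page, page_size, min_price):
        """Serve cached pages cheaply; run a deep OFFSET query only for pages past the cache."""
        start = page * page_size
        if start + page_size <= len(cached_ids):
            # cheap path: the requested page is covered by the cached (capped) id set
            return cached_ids[start:start + page_size]
        # rare path: re-run the complex query with OFFSET/LIMIT for the deep page
        cur = conn.execute(
            "SELECT id FROM products WHERE price >= ? ORDER BY id LIMIT ? OFFSET ?",
            (min_price, page_size, start),
        )
        return [row[0] for row in cur.fetchall()]
    ```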