• hhcosmin, I agree with what you say: the caching policy needs to be defined and suitable. It can be dynamic these days and linked to the data, but a lot of the time, for what I do, I can cache a set of IDs for the duration, so paging becomes very cheap.
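
    As a rough sketch of what I mean by caching the IDs for the duration (Python with SQLite purely for illustration; the table, columns and in-process cache are all hypothetical, not from any particular framework):

```python
import sqlite3

# Illustrative in-process cache keyed by the search term; in practice this
# might live in the user's session, memcached, Redis, etc.
_id_cache = {}

def get_matching_ids(conn, search_term):
    """Run the expensive search once and cache the ordered list of PKs."""
    if search_term not in _id_cache:
        rows = conn.execute(
            "SELECT id FROM products WHERE name LIKE ? ORDER BY name",
            (f"%{search_term}%",),
        ).fetchall()
        _id_cache[search_term] = [r[0] for r in rows]
    return _id_cache[search_term]

# Usage: conn = sqlite3.connect("shop.db"); ids = get_matching_ids(conn, "widget")
```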

    One thing that frustrates me about some sites is when the data does change: paging just becomes messy and isn’t always the best way to present it. For example, page 1 becomes page 2 because the dataset has changed, so when I click next page I get half of what I was just looking at on page 1 plus the first half of what used to be page 2. Or you go back a page and what was page 2 isn’t page 2 anymore.

    I’m not quite sure I understand how the last page becomes very expensive using this approach, as once the initial query has been performed everything else is PK-based. Unless you are talking about having to read from the beginning of, say, 1,000,000 PKs through to the end as opposed to just the current page? Which is true, that would be a nightmare scenario!
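
    To make the “everything else is PK-based” point concrete, here is a minimal sketch using the same hypothetical schema and cached ID list as above: every page, including the last one, is just a small IN (...) lookup by primary key.

```python
def fetch_page(conn, ids, page, page_size=25):
    """Fetch one page of rows by primary key from the cached, ordered ID list."""
    page_ids = ids[page * page_size:(page + 1) * page_size]
    if not page_ids:
        return []
    placeholders = ",".join("?" * len(page_ids))
    rows = conn.execute(
        f"SELECT id, name, price FROM products WHERE id IN ({placeholders})",
        page_ids,
    ).fetchall()
    # Re-apply the cached ordering; IN (...) gives no ordering guarantee.
    by_id = {row[0]: row for row in rows}
    return [by_id[i] for i in page_ids]
```

    The slice is taken from the cached list rather than the database, so the last page costs the same handful of PK lookups as the first.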

    So what you say about very large datasets is true, and I would typically limit the result set to 1,000, or at the extreme 5,000, records in the SQL itself. If the user wants to go beyond that it does get complicated, but I can’t remember the last time I went beyond page three of any search.
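
    The cap itself is just a LIMIT (or TOP on SQL Server) on that initial ID query; fetching one row more than the cap is a cheap way to know when to show a “refine your search” hint. Again only a sketch, with an assumed cap of 1,000:

```python
MAX_RESULTS = 1000  # or 5000 at the extreme

def get_capped_ids(conn, search_term):
    """Fetch at most MAX_RESULTS IDs, plus one extra row to detect overflow."""
    rows = conn.execute(
        "SELECT id FROM products WHERE name LIKE ? ORDER BY name LIMIT ?",
        (f"%{search_term}%", MAX_RESULTS + 1),
    ).fetchall()
    truncated = len(rows) > MAX_RESULTS
    return [r[0] for r in rows[:MAX_RESULTS]], truncated
```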

    Typically, in my experience, if a user gets more than a handful of pages they are likely to want to refine their search, or maybe sort it to get the records they want onto the top pages. Enticing a user down this road is pretty easy: “You got over 1,000 records… try refining your search below!” and then give them a bunch of useful refinements.

    Obviously every solution has its pros and cons, and I wouldn’t present any one of them as the holy grail, since every application has its own requirements.