I implemented a solution with the same goals in mind in the past, although using a slightly different method. The problem this method addresses is avoiding re-running the query every time a page is requested.
I don't have the code available, but I'll describe how it works. Instead of using a cursor, the full select for the query is used to populate a scratch (not temp) table that is created on the fly. You can do this either with Select...Into syntax or by creating a table and then populating it.
Select...Into is nice because you don't have to know the schema ahead of time, though the downside is the potential locking of system schema tables.
When creating the table, you add an identity column and make it the primary key.
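A minimal sketch of that step, assuming hypothetical table and column names (the source query, the Snapshot_... table name, and RowNum are all illustrative):

```sql
-- Materialize the query into a scratch table with an identity column.
-- Note: with SELECT...INTO, identity assignment order relative to an
-- ORDER BY is not strictly guaranteed; this is a sketch of the idea.
SELECT IDENTITY(int, 1, 1) AS RowNum,
       CustomerID, OrderDate, Total
INTO   Snapshot_3F2504E0
FROM   Orders
WHERE  Region = 'West';

-- Make the identity column the primary key for fast range lookups.
ALTER TABLE Snapshot_3F2504E0
    ADD CONSTRAINT PK_Snapshot_3F2504E0 PRIMARY KEY (RowNum);
```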
An additional component of this method is a master catalog table, say Reports, that knows about each of the 'snapshot' tables that has been populated and has an expiration date associated with each. The key on the catalog table is the report id combined with a hash of the where clause and order by.
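The catalog might look something like this; the column names and types are assumptions, not the original schema:

```sql
-- Hypothetical shape of the master catalog table.
CREATE TABLE Reports (
    ReportId     int      NOT NULL,              -- which report definition
    QueryHash    int      NOT NULL,              -- hash of WHERE + ORDER BY
    TableName    sysname  NOT NULL,              -- GUID-derived snapshot name
    LastAccessed datetime NOT NULL DEFAULT GETDATE(),
    ExpiresAt    datetime NOT NULL,              -- when the snapshot goes stale
    CONSTRAINT PK_Reports PRIMARY KEY (ReportId, QueryHash)
);
```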
The stored procedure takes a SQL statement as a parameter (I believe it was ntext), along with the report id and the hash code. It then checks the master catalog to see whether the snapshot exists and whether it has expired.
If it has expired (or does not exist yet), the sp reruns the query to populate a fresh table (the tables are named with a GUID) and returns the table name. If the table exists and has not expired, it just returns the table name.
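A rough sketch of that procedure, under the assumptions above; every name here is hypothetical, and the catalog bookkeeping for a newly created snapshot is omitted:

```sql
-- Look up a live snapshot, or materialize a new one from the passed-in SQL.
CREATE PROCEDURE GetSnapshotTable
    @Sql       ntext,
    @ReportId  int,
    @QueryHash int,
    @TableName sysname OUTPUT
AS
BEGIN
    -- Is there an unexpired snapshot for this report + where/order hash?
    SELECT @TableName = TableName
    FROM   Reports
    WHERE  ReportId  = @ReportId
      AND  QueryHash = @QueryHash
      AND  ExpiresAt > GETDATE();

    IF @TableName IS NULL
    BEGIN
        -- No live snapshot: build a GUID-based name and populate it.
        SET @TableName = 'Snapshot_' +
            REPLACE(CONVERT(varchar(36), NEWID()), '-', '');
        EXEC ('SELECT IDENTITY(int, 1, 1) AS RowNum, src.* INTO ' +
              @TableName + ' FROM (' + @Sql + ') AS src');
        -- (Inserting/updating the corresponding Reports row is omitted.)
    END

    -- Record the access so the cleanup job knows the table is in use.
    UPDATE Reports
    SET    LastAccessed = GETDATE()
    WHERE  ReportId = @ReportId AND QueryHash = @QueryHash;
END
```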
Once the table name has been returned, an ad-hoc query can be issued against that table, using the identity column to filter down to just the intended page of records, without re-running the original query.
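For example, fetching page 5 at 50 rows per page becomes a trivial range scan on the primary key (again using the illustrative table and column names from above):

```sql
-- Rows (5-1)*50+1 through 5*50 of the snapshot.
SELECT *
FROM   Snapshot_3F2504E0
WHERE  RowNum BETWEEN 201 AND 250;
```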
One last component of this model is a background SQL Server job that runs every 8 hours or so, looks at the master catalog, and drops tables that have not been accessed within the last hour. This prevents the db from bloating, as a lot of data is constantly pouring into the tables.
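The cleanup job's work could be sketched like this, again against the assumed Reports schema (the dynamic DROP is needed because the table names are only known at runtime):

```sql
-- Drop snapshot tables idle for over an hour, then clear their catalog rows.
DECLARE @TableName sysname;

DECLARE stale CURSOR FOR
    SELECT TableName
    FROM   Reports
    WHERE  LastAccessed < DATEADD(hour, -1, GETDATE());

OPEN stale;
FETCH NEXT FROM stale INTO @TableName;
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC ('DROP TABLE ' + @TableName);
    FETCH NEXT FROM stale INTO @TableName;
END
CLOSE stale;
DEALLOCATE stale;

DELETE FROM Reports
WHERE LastAccessed < DATEADD(hour, -1, GETDATE());
```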
The downside is that the data is a static snapshot at a point in time, but the expiration times can be tweaked for more time-critical data, and the architecture allows forcing a refresh if necessary. Another downside is that for a large query there is an initial hit as the data is populated. We planned to implement a multi-threaded mechanism to handle this, but never got around to it.
The upside is that once the table is populated, successive calls return almost instantaneously. You can jump through millions of records via the primary key to get straight to the results for the current page. The data is also stored in the requested sort order, which further speeds delivery.