It seems incredible to me that anyone would request a report containing 40,000 or 70,000 rows to begin with. No one will be reading that. The only use I can see is that someone wants that much data to dump into a spreadsheet or some-such to do additional analysis.
My recommendation would be to find out what that analysis is and aggregate the report(s) a whole lot more before sending them to the client. Considering the run time of the current reports, some thoughtful performance tuning would seem to be in order.
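As a hypothetical sketch of what that pre-aggregation might look like (the table and column names here are invented for illustration), the idea is to summarize on the server rather than shipping every detail row to the client:

```sql
-- Instead of returning every detail row to the client...
-- SELECT CustomerID, OrderDate, LineAmount FROM dbo.OrderDetail;

-- ...aggregate on the server so the result set is a few hundred
-- summary rows instead of tens of thousands of detail rows.
SELECT   CustomerID,
         DATEFROMPARTS(YEAR(OrderDate), MONTH(OrderDate), 1) AS OrderMonth,
         SUM(LineAmount) AS TotalAmount,
         COUNT(*)        AS DetailRowCount
FROM     dbo.OrderDetail
GROUP BY CustomerID,
         DATEFROMPARTS(YEAR(OrderDate), MONTH(OrderDate), 1);
```

A rollup like this also gives the optimizer a fighting chance: a GROUP BY over an indexed column set is usually far cheaper to produce, transmit, and render than tens of thousands of raw rows.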
To summarize, the problem is likely not in the pipeline nor at the client. The problem is in the code that aggregates the data for the report.
--Jeff Moden
Change is inevitable... Change for the better is not.