Hi Lynn,

    First of all, thank you so much for taking the time to help out!

    The rewritten queries perform approximately the same as my original ones. As soon as the subquery is present to pull the filter values from the second table, execution takes considerably longer than with the other method. Something else I have to consider is that this forms part of a JIT-loading (paged) implementation, so I can't just use TOP 1000 to get the results. I need to be able to specify arbitrary start/end rows, hence my use of the ROW_NUMBER() call to produce a unique index from a sort column that's passed into the proc.
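    For reference, the paging pattern I'm describing looks roughly like this. All table, column, and parameter names here are made up for illustration only (the real proc takes the sort column in dynamically):

    ```sql
    -- Hypothetical schema: dbo.Orders with an OrderDate sort column.
    -- @StartRow/@EndRow are the arbitrary page bounds passed into the proc.
    ;WITH Numbered AS (
        SELECT o.*,
               ROW_NUMBER() OVER (ORDER BY o.OrderDate) AS RowNum
        FROM dbo.Orders AS o
    )
    SELECT *
    FROM Numbered
    WHERE RowNum BETWEEN @StartRow AND @EndRow;
    ```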

    I've been working on this for about a week now, and it's starting to look like there's no really viable solution that will work within both the software architecture and the database structure I'm unfortunately stuck with for the time being. Something that *does* work is to use two CTEs, with the first one performing a coarse paring-down of the starting set so that the second CTE and the remaining SELECT don't have such a volume of data to deal with. It's not the preferred solution, but it's the one I'm going to have to run with until I'm able to restructure the woeful database schema I've been presented with on this one!
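    A rough sketch of the two-CTE approach, with made-up table, column, and parameter names purely for illustration:

    ```sql
    ;WITH Pared AS (
        -- First CTE: cheap, coarse filter to shrink the working set early
        SELECT o.OrderID, o.CustomerID, o.OrderDate
        FROM dbo.Orders AS o
        WHERE o.OrderDate >= @FromDate
    ),
    Numbered AS (
        -- Second CTE: number and join only the pared-down rows
        SELECT p.*,
               ROW_NUMBER() OVER (ORDER BY p.OrderDate) AS RowNum
        FROM Pared AS p
        JOIN dbo.Customers AS c
            ON c.CustomerID = p.CustomerID
    )
    SELECT *
    FROM Numbered
    WHERE RowNum BETWEEN @StartRow AND @EndRow;
    ```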

    For the sake of completeness, I'll post the queries I end up with when this is finalised.