• Grant Fritchey (7/8/2014)


    So you're going to an underpowered machine (8 GB is half the memory my laptop has), you're running a very large aggregation query, and you're returning 1.7 million rows. The machine is expected to both process the query and then process and display the result set, and you're unsure why that's causing the system to slow down? I'm pretty sure I can tell you why. Think about it: the result set isn't just stored once. It's being processed by the server instance and then processed again by the client, both on the same machine. Plus, I'll bet you don't have max memory set inside of SQL Server, so the server instance is chewing up as much of the memory resources as it possibly can to answer your query. Then the OS has to either reclaim a bunch of that memory from SQL Server in order to hold the results within the client application, SSMS (which isn't using the same memory space as your SQL Server instance; they are two completely separate executables), or it's swapping out to disk like mad in order to consume all that data.
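    If max memory really is left at the default, capping it is the usual fix, so the instance leaves room for the OS and any client tools running on the same box. A minimal T-SQL sketch using `sp_configure`; the 200 GB (204800 MB) value is just an example and should be sized for the machine in question:

    ```sql
    -- max server memory is an advanced option, so expose it first
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Cap the buffer pool; value is in MB (204800 MB = 200 GB, example only)
    EXEC sp_configure 'max server memory (MB)', 204800;
    RECONFIGURE;

    -- Verify the configured and running values
    EXEC sp_configure 'max server memory (MB)';
    ```

    The right cap depends on total RAM and what else runs on the server; the point is simply that the default (effectively unlimited) lets the instance squeeze out everything else, including SSMS.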

    The server spec isn't bad, and according to this the max memory is set to 200 GB:

    Thank you for the response. I am sorry for not giving out the SQL box specifications. It has 256 GB RAM, of which 200 GB is dedicated to the SQL Server instance. Hard drives are 1 TB each, with a total of 5 drives. Data files, log files, and tempdb have dedicated drives (D, E, and F). Hope this information helps.

    I agree with Grant here: the overhead of spawning a desktop session and everything that comes with it is most likely the main cause, although these are respectable specs for a server. The desktop session is just putting pressure in the wrong place, if you like. My suggestion is to try both remote SSMS and local (RDP) SSMS with the option to discard the result set enabled (Query Options -> Results -> Discard results after execution). If there is little difference between the two, this is definitely the culprit.
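    Another way to separate server-side work from client rendering, without touching SSMS options, is to assign the columns into variables so the rows never leave the server. A sketch; `dbo.BigTable` and `SomeColumn` are hypothetical stand-ins for the actual query:

    ```sql
    -- Discard the result set server-side: the SELECT still reads and
    -- aggregates every row, but assigning into a variable means nothing
    -- is streamed to, buffered by, or rendered in the client.
    SET STATISTICS TIME ON;

    DECLARE @v sql_variant;
    SELECT @v = SomeColumn      -- hypothetical column
    FROM dbo.BigTable;          -- hypothetical table

    SET STATISTICS TIME OFF;
    ```

    Comparing the elapsed time of this against the same query returning its full 1.7 million rows shows how much of the total is pure client/display overhead versus actual query processing.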

    😎