• Well, what's the major issue? You're asking about very low-level behavior that isn't immediately applicable to most tuning situations. Yes, there is a specific area called workspace memory, used for temporary allocation of objects within a query, that is pulled from the buffer pool. But how is that useful to you if you have a query creating a work table? Fix the query, add the index, modify the structure... do whatever's necessary to get by. Does it really matter exactly where the allocation goes?

    If you need to look at allocations, start with sys.dm_os_memory_clerks. That shows you how memory is distributed across the different memory clerks. You can look at the buffer pool through sys.dm_os_buffer_descriptors. Between the two, you can see where the allocations are going.
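
    Just as a rough sketch of the kind of queries I mean (this assumes SQL Server 2012 or later, where the clerks expose pages_kb):

        -- Top memory clerks by pages allocated
        SELECT TOP (10)
               [type],
               name,
               pages_kb
        FROM sys.dm_os_memory_clerks
        ORDER BY pages_kb DESC;

        -- Buffer pool pages per database (8KB pages converted to MB)
        SELECT DB_NAME(database_id) AS database_name,
               COUNT(*) * 8 / 1024 AS buffer_pool_mb
        FROM sys.dm_os_buffer_descriptors
        GROUP BY database_id
        ORDER BY buffer_pool_mb DESC;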

    Memory allocation for a query is shown in the actual execution plan as the memory grant. You can also look at sys.dm_exec_query_memory_grants for currently executing queries. To see where the work tables and spills actually land, take a look at sys.dm_db_session_space_usage; that shows the tempdb pages each session has allocated.
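
    Again, only a sketch, but something along these lines:

        -- Memory grants for currently executing queries
        SELECT mg.session_id,
               mg.requested_memory_kb,
               mg.granted_memory_kb,
               mg.used_memory_kb,
               t.[text] AS query_text
        FROM sys.dm_exec_query_memory_grants AS mg
        CROSS APPLY sys.dm_exec_sql_text(mg.sql_handle) AS t;

        -- tempdb pages allocated by each session
        -- (internal objects = work tables, sorts, hash spills)
        SELECT session_id,
               user_objects_alloc_page_count,
               internal_objects_alloc_page_count
        FROM sys.dm_db_session_space_usage
        WHERE internal_objects_alloc_page_count > 0;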

    As to precisely when the memory/disk spill occurs, I'm not sure. It's largely a question of how much memory you have. Once the memory grant isn't big enough for the operation, it spills to disk in tempdb. You can kind of figure out whether you're hitting disk or not by looking at the allocations available through the DMOs I listed above, as in the sketch below.
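
    For example, as a very rough heuristic (a sketch, not something I'd treat as definitive): a session that currently holds a memory grant and is also allocating internal-object pages in tempdb is probably spilling a sort, hash, or work table to disk.

        SELECT mg.session_id,
               mg.granted_memory_kb,
               mg.used_memory_kb,
               SUM(ts.internal_objects_alloc_page_count) AS internal_pages_allocated
        FROM sys.dm_exec_query_memory_grants AS mg
        JOIN sys.dm_db_task_space_usage AS ts
            ON mg.session_id = ts.session_id
        GROUP BY mg.session_id, mg.granted_memory_kb, mg.used_memory_kb
        HAVING SUM(ts.internal_objects_alloc_page_count) > 0;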

    But I'm still unclear on where this gets you, other than a more complete understanding of why a hash, sort, or work table actually causes things to slow down.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning