• While I haven't done any experiments, I would caution against thinking in terms of discrete possibilities ("if there is enough RAM, then it lives only in RAM"). For anyone investigating, I would suggest working from a more open-ended model:

    What SQL Server considers "enough" may vary, and its estimate may be anywhere from 99% accurate to 1% accurate (and may undershoot or overshoot).

    For each element (temp table, table variable, work table, SNAPSHOT ISOLATION data, and so on):

    Some percentage of the structure (0% to 100%) is stored in RAM.

    Some percentage of the data (0% to 100%) is stored in RAM.

    Some percentage of other elements (0% to 100%) is stored in RAM.

    Some percentage of the structure (0% to 100%) is stored on disk.

    Some percentage of the data (0% to 100%) is stored on disk.

    Some percentage of other elements (0% to 100%) is stored on disk.

    Note that for each of these, the percentages may add up to more than 100% - i.e. the same pages can be entirely in RAM and also entirely on disk at the same time (one way to observe this for a single object is sketched just below).
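    As a purely illustrative sketch (not a definitive test), the two queries below ask "how many pages are allocated to a given temp table, and how many of those pages are currently in the buffer pool, clean or dirty?" The temp table name #t is a placeholder; the DMVs and catalog views used (sys.dm_os_buffer_descriptors, sys.allocation_units, sys.partitions) are real, but the join shown only covers in-row and row-overflow allocation units, and querying the buffer descriptors requires VIEW SERVER STATE permission.

        USE tempdb;   -- catalog views are per-database; temp table metadata lives in tempdb

        -- Total pages allocated to the (hypothetical) temp table #t,
        -- regardless of whether those pages are in RAM, on disk, or both.
        SELECT allocated_pages = SUM(au.total_pages)
        FROM sys.partitions AS p
        JOIN sys.allocation_units AS au
            ON au.container_id = p.hobt_id
           AND au.type IN (1, 3)     -- in-row / row-overflow; LOB data needs a different join
        WHERE p.object_id = OBJECT_ID('tempdb..#t');

        -- Of those pages, how many are currently in the buffer pool (RAM),
        -- and how many of those are dirty (changed in RAM, not yet flushed to disk)?
        SELECT pages_in_ram = COUNT(*),
               dirty_pages  = SUM(CASE WHEN bd.is_modified = 1 THEN 1 ELSE 0 END)
        FROM sys.dm_os_buffer_descriptors AS bd
        JOIN sys.allocation_units AS au
            ON au.allocation_unit_id = bd.allocation_unit_id
        JOIN sys.partitions AS p
            ON p.hobt_id = au.container_id
           AND au.type IN (1, 3)
        WHERE bd.database_id = DB_ID('tempdb')
          AND p.object_id    = OBJECT_ID('tempdb..#t');

    A clean page that sits in the buffer pool typically also exists on disk, which is exactly the "more than 100%" situation described above.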

    This, I suspect, is part of the cause of the confusion: reality is likely much more complex than many of the "rules of thumb". Gail clearly showed that at least some of the data is written to disk at least some of the time, which is extremely valuable and completely debunks the "all in RAM" idea. That said, it is not sufficient to show how much is written to disk for which types of objects, but it is a good template for doing that testing; a rough sketch of such a before/after measurement follows below.
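    Along those lines, here is a minimal sketch of that kind of before/after test, assuming a quiet test instance (other sessions also write to tempdb, so the numbers are only indicative). The #work table and the row counts are made-up placeholders; sys.dm_io_virtual_file_stats is the real DMV being sampled.

        -- Snapshot tempdb file-level write counters before the workload.
        DECLARE @before TABLE (file_id int, num_of_writes bigint, num_of_bytes_written bigint);
        INSERT @before
        SELECT file_id, num_of_writes, num_of_bytes_written
        FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL);

        -- === Workload under test: replace with whatever object/pattern you are investigating ===
        CREATE TABLE #work (id int NOT NULL, filler char(4000) NOT NULL);
        INSERT #work (id, filler)
        SELECT TOP (50000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
        FROM sys.all_columns AS a CROSS JOIN sys.all_columns AS b;   -- roughly 200 MB of rows
        -- =======================================================================================

        -- Compare the counters: did the workload cause physical writes to tempdb's files?
        SELECT f.file_id,
               writes_delta = f.num_of_writes - b.num_of_writes,
               mb_written   = (f.num_of_bytes_written - b.num_of_bytes_written) / 1048576.0
        FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) AS f
        JOIN @before AS b
            ON b.file_id = f.file_id;

        DROP TABLE #work;

    Note that a small table may show few or no writes precisely because it never left RAM during the test - which is itself a useful data point, not proof that temp tables never hit disk.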

    Many cases may also turn out differently depending on SQL Server's estimates vs. actuals, bugs, coding artifacts, and so on.