• It sounds like you have to be very careful of how you rebuild those large tables - note that index rebuilds can take tempdb space anywhere from 0% to 232% of the size of the index, depending on online vs. offline, clustered vs. nonclustered, drop-and-create vs. rebuild, and the SORT_IN_TEMPDB option.

    http://msdn.microsoft.com/en-us/library/cc966402.aspx
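
    To make that concrete, here's a rough sketch of the rebuild options that drive those numbers (the table and index names are made up):

        -- Offline rebuild, sorting in tempdb: needs sort space in tempdb
        -- roughly the size of the index being built.
        ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales
            REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = OFF);

        -- Online rebuild adds version store usage in tempdb on top of
        -- any sort space, which is where the high-end estimates come from.
        ALTER INDEX IX_FactSales_DateKey ON dbo.FactSales
            REBUILD WITH (SORT_IN_TEMPDB = ON, ONLINE = ON);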

    In general, for a data warehouse, I'd expect large historical queries looking at weeks, months, or years of data - larger datasets need more tempdb space, and 40GB is something I'd consider small for even a 500GB database used for large historical queries... or for any database where any single index is more than 40GB/1.07 or so.
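
    If you want to check whether any single index is over that line, something like this run in the warehouse database will do it - the 37 in the HAVING clause is just 40GB/1.07 rounded down, so adjust it to whatever your tempdb size actually is:

        SELECT OBJECT_NAME(ps.object_id)              AS table_name,
               i.name                                 AS index_name,
               SUM(ps.used_page_count) * 8.0 / 1024 / 1024 AS used_gb
        FROM sys.dm_db_partition_stats AS ps
        JOIN sys.indexes AS i
          ON i.object_id = ps.object_id
         AND i.index_id  = ps.index_id
        GROUP BY ps.object_id, i.name
        HAVING SUM(ps.used_page_count) * 8.0 / 1024 / 1024 > 37  -- ~40GB / 1.07
        ORDER BY used_gb DESC;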

    I agree with the above comments regarding parameter sniffing and stats-sensitive queries, though I'll note that unless you can rewrite the queries/application to prevent it completely, it's unpredictable when you'll hit it - maybe some busy day your stats updates can't keep up, or some customer hits several parameter sniffing cases based on their requirements. You'd still need a larger tempdb for those queries to be able to operate at all.
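
    If you can touch the code, one common mitigation (a trade-off, not a cure - you pay compile cost on every execution) looks like this; the procedure and table here are hypothetical:

        CREATE PROCEDURE dbo.GetSalesByRange
            @StartDate datetime,
            @EndDate   datetime
        AS
        BEGIN
            -- RECOMPILE builds a fresh plan for the actual parameter
            -- values each time, sidestepping the sniffed plan.
            SELECT SaleID, Amount
            FROM dbo.FactSales
            WHERE SaleDate >= @StartDate
              AND SaleDate <  @EndDate
            OPTION (RECOMPILE);
        END;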

    I have to take issue with your 10GB tempdb limit. Why 10GB? Why not 8GB or 12GB? Is it 10GB on a 5GB database as well as a 1500GB database? Is it 10GB for OLTP as well as small historical queries as well as large historical queries? There's too much "it depends" in some of these situations for a set rule of thumb like that to sit well with me.

    More importantly, why so stingy with your tempdb? For an OLTP database doing offline index work, that may be perfectly fine, but I'd consider a tempdb data size that's 2.6% of the total data and 11.8% of the largest table's clustered index to be very small for data warehouse work.
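
    To see where a server actually stands on that ratio, a quick sketch like this gives both sides of the percentage (sizes come back in GB; the database_id filter just skips the system databases):

        -- tempdb data files only (excludes the log)
        SELECT SUM(size) * 8.0 / 1024 / 1024 AS tempdb_data_gb
        FROM tempdb.sys.database_files
        WHERE type_desc = 'ROWS';

        -- total user data across the instance
        SELECT SUM(size) * 8.0 / 1024 / 1024 AS total_data_gb
        FROM sys.master_files
        WHERE type_desc = 'ROWS'
          AND database_id > 4;  -- skip master, tempdb, model, msdb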

    P.S. Local SSDs in a RAID config (either SAS or PCIe) can easily top your 8Gb FC channels, and in a throughput-limited situation they can do so while leaving the FC free to carry other traffic at its full rate, allowing for a still higher aggregate throughput.

    P.P.S. Gail, I don't work with the OP, but I have worked with servers that have more than enough RAM (100GB+) to allocate 35GB to a single query without batting an eye; they're built primarily for large historical queries, of which we usually have fewer than two running at once.
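
    If anyone wants to see what grants like that look like in flight, this DMV (available since 2005) lists them per running query:

        SELECT session_id,
               requested_memory_kb / 1024 AS requested_mb,
               granted_memory_kb   / 1024 AS granted_mb
        FROM sys.dm_exec_query_memory_grants
        ORDER BY granted_memory_kb DESC;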