Honestly, if you want commentary on the two links, you might ask Gail directly. She (not he) is better positioned to explain her posts than I am.
I'm not a Microsoft engineer, so I don't have "insider access" to the code that handles disk-spillover. (Neither is Gail, for that matter.) So what I have (and what she has) is data from what we can research online, plus our own experimentation.
That's why I said I'd be interested in seeing your numbers comparing what you get from a RAM disk vs the same load on a regular (HDD or SSD) drive. After all, direct observation is generally superior to reading about something anyway.
I'm giving you the data I've got. That's all I can do in this kind of situation.
Personally, in 12 years as a DBA, I've never yet found that optimization at that level mattered to me. I've found that 99% or more of the performance issues on the database servers I've dealt with are in the way the data was architected (normalization and so on), and in the way the DAL is built (procs, inline code, and so on).
But that's my limited, anecdotal experience. I'd be willing to bet the guys who run EVE Online (last I heard, it runs on SQL Server) are MUCH more concerned about squeezing every microsecond of performance out of their databases than I ever will be. They probably have to work at those levels of optimization.
But I'm usually dealing with a few thousand transactions per second, mostly just feeding dynamic content to websites. Not that those don't need tuning, but not at the kind of level you're looking at with RAM disk vs SSD for tempdb.
The kind of performance issues I have to deal with are things like:
When we got a new engine for our websites, built by a third-party company, we found while preparing to go live that the home page was going to take an average of 6 minutes to load, because of really, really, really poor database design. I spent a day re-architecting their database, plus a couple of days working with the web devs on regression testing to make sure I hadn't broken anything, and the query time went from 6 minutes to about 8 milliseconds, for the exact same data. That page gets a few hundred hits per second, from all over the world. 8 milliseconds is good enough for what we need. Could it be 7 instead? Probably, with the right amount of work done at the server level.
We had an ETL process that, besides losing data, was also taking up to 12 hours to run through a few thousand records per day. I inherited this from my predecessor (who left this job to manage databases for a bank - which is a scary thing from my perspective). I fixed it, refactored it, and took the average time for the export process down to about 10 minutes. Again, I'm sure it could be refined down to 8 minutes if I spent the necessary hours to fine-tune the server environment.
At a prior employer, I was asked to find out why a daily process was "failing" so often. On day one of the job, I found the "daily" process was taking anywhere up to 70 hours to run. Part of the problem was that it used a VB.NET construct to read XML files line by line, then string functions to parse the data out of the XML, then staged each value into a Name-Value-Pair construct, then reconstituted the data into properly normalized tables. (Yes, that was only "part of the problem". It had other issues on top of that.) I converted that part to a bulk import of the XML files, then XQuery to shred the XML directly into the normalized tables. Run time went down to something like 20 minutes from that one refactor alone, and it also got rid of some data-loss issues. It took me about 2 or 3 hours to find the problem and fix it, with no server-level work needed at all. Could I have taken it down to 15 minutes instead of 20 by spending a few days optimizing the I/O channels for the text files? Probably.
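To show why that refactor paid off so dramatically: the losing approach treats XML as plain text and scrapes values out with string functions, one line at a time; the winning approach parses the whole document once and queries it structurally (in SQL Server, that's a bulk load plus XQuery; the actual code isn't shown here). Here's a minimal sketch of the same contrast in Python, with invented element names purely for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample file; element names are made up for illustration.
xml_doc = """<orders>
  <order id="1"><customer>Acme</customer><total>19.99</total></order>
  <order id="2"><customer>Globex</customer><total>42.50</total></order>
</orders>"""

def parse_by_string_hacking(text):
    """The fragile pattern: scan line by line, slicing values out with
    string functions. Breaks the moment the formatting shifts."""
    rows = []
    for line in text.splitlines():
        if "<customer>" in line:
            cust = line.split("<customer>")[1].split("</customer>")[0]
            total = line.split("<total>")[1].split("</total>")[0]
            rows.append((cust, float(total)))
    return rows

def parse_with_xml_parser(text):
    """The robust pattern: parse the document once, then query it
    structurally -- analogous to bulk import plus XQuery shredding."""
    root = ET.fromstring(text)
    return [(o.findtext("customer"), float(o.findtext("total")))
            for o in root.findall("order")]

print(parse_with_xml_parser(xml_doc))
```

The structural parse is immune to whitespace and line-break changes, and (in the database case) lets the engine do one set-based insert into the normalized tables instead of thousands of row-by-row operations, which is where most of those 70 hours went.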
In all of these cases, and thousands more like them, I've sped up data processes by huge margins without having to worry about details at the level you're playing with. I'm interested in what you find, because I'm curious about that kind of thing, but I'm dubious about its value except possibly in some extreme edge cases (like, possibly, EVE).
The extra time, probably measured in days or weeks, it would take to fine-tune each server in the manner you're experimenting with, just isn't worth the potential ROI for someone like me. That same time can be spent re-architecting, re-coding, etc., much more profitably.
So, all I can do is give you what I've found on the subject. I think you'll need to experiment with it on your own to really find out what, if any, benefit you get from it. I am interested in what results you get, but mainly out of curiosity, not need.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon