• Vladan (7/17/2006)


    I am a bit surprised by some things in this article... for example:

    "Generally, the rule of thumb is to have as much as RAM as your data file is."

    You are right to be "surprised" (I was too). There are so many peculiar things about that statement:

    • What happens when you have a 500GB database?
    • What about when I have 200 50MB databases on my system?
    • What about databases with multiple data files?
    • How does FILESTREAM affect this recommendation?

    I understand trying to simplify the planning process, and maybe that's what he was going for? It certainly can't hurt to have that much RAM, but I've never seen that recommendation from anyone else before.

    What we care about is maintaining enough RAM to cover connections, plans, and the buffer cache (generally speaking). Saying "the rule of thumb is to have as much as RAM as your data file is" suggests that SQL Server will load your entire database into memory and serve it from RAM - which it won't (at least not without you querying/using the actual data).
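
    You can actually see this for yourself on SQL Server 2005 or later. As a rough sketch (sys.dm_os_buffer_descriptors exists from 2005 onward, each row is one 8KB page, and you need VIEW SERVER STATE permission), this shows how much of each database is really sitting in the buffer cache right now:

        -- How much of each database is currently cached in the buffer pool?
        -- Each row in sys.dm_os_buffer_descriptors is one 8KB page in memory.
        SELECT DB_NAME(bd.database_id) AS database_name,
               COUNT(*) * 8 / 1024     AS cached_mb
        FROM sys.dm_os_buffer_descriptors AS bd
        GROUP BY bd.database_id
        ORDER BY cached_mb DESC;

    Run that against a freshly restarted instance and the totals will be nowhere near the data file sizes - pages only show up here after something reads them.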

    RAM recommendations need to be done per app+db - OLAP, OLTP, or a combination. IMO it's hard to guesstimate what an unknown database's memory requirements will be. If you don't tell me how big it is, what it is used for, how it is loaded, how many users there are at launch vs. one year later, etc. - I don't know anyone who can accurately estimate that.
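
    If you want a workload-based signal instead of a guess, here's a minimal sketch (again assuming 2005+; the counter and "Buffer Manager" object are standard, but on a named instance the object_name carries a different prefix, hence the LIKE):

        -- Page Life Expectancy: how long a page survives in the buffer pool,
        -- in seconds. Sustained low values under real load suggest the
        -- instance wants more RAM for *this* workload.
        SELECT RTRIM(object_name) AS counter_object,
               cntr_value         AS page_life_expectancy_sec
        FROM sys.dm_os_performance_counters
        WHERE counter_name LIKE 'Page life expectancy%'
          AND object_name LIKE '%Buffer Manager%';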

    I'll give you an example - I asked earlier, "What happens when you have a 500GB database?" Can you estimate the memory requirements for that database? I certainly can't. There are massively different needs depending on how it is used, etc. Maybe you do need 500GB of RAM - I don't have a clue.
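
    You can at least measure instead of guessing. A rough sketch, assuming 2005+ and treating "hot" as roughly "currently cached" (a simplification), compares each database's on-disk data size with what is actually in the buffer pool:

        -- Data file size on disk vs. pages actually cached in memory.
        -- Both sys.master_files.size and the buffer descriptors count 8KB pages.
        WITH db_cached AS (
            SELECT database_id, COUNT(*) * 8 / 1024 AS cached_mb
            FROM sys.dm_os_buffer_descriptors
            GROUP BY database_id
        ), db_files AS (
            SELECT database_id, SUM(size) * 8 / 1024 AS data_file_mb
            FROM sys.master_files
            WHERE type_desc = 'ROWS'   -- data files only, not logs
            GROUP BY database_id
        )
        SELECT DB_NAME(f.database_id)  AS database_name,
               f.data_file_mb,
               ISNULL(c.cached_mb, 0) AS cached_mb
        FROM db_files AS f
        LEFT JOIN db_cached AS c ON c.database_id = f.database_id
        ORDER BY f.data_file_mb DESC;

    If that 500GB database only ever holds, say, 20GB of pages in cache under peak load, the "RAM = data file size" rule would have you buy 25x the memory the workload actually uses.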

    ========================================================

    I have about 1,000 video tutorials on SQL Server 2008, 2005, and 2000 over at http://www.learnitfirst.com/Database-Professionals.aspx