Analyzing Memory Requirements for SQL Server

  • Arindam Banerjee

    Say Hey Kid

    Points: 705

    Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/aBanerjee/analyzingmemoryrequirementsforsqlserver.asp

  • Vladan

    SSC-Insane

    Points: 21958

    I am a bit surprised by some things in this article... for example:

    "Generally, the rule of thumb is to have as much as RAM as your data file is."

    Is it really so? Isn't there some limit from where this rule does not apply? Our data file has over 200 GB, but I have never seen a server with that much memory. Obviously, it would be terribly expensive... I don't think databases around this size are rare, so what is your opinion about their memory requirements? We are running this DB on a box with 8GB RAM; I suppose it would work better with more memory, but the performance (with 20-40 concurrent users) is quite good.
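    (For reference, the total data-file size per database can be checked with a query like the following — a sketch against the SQL Server 2005+ catalog view sys.master_files, which stores sizes in 8 KB pages; adjust for older versions, which used sysaltfiles.)

```sql
-- Total data-file size per database, in MB.
-- sys.master_files stores size as a count of 8 KB pages.
SELECT DB_NAME(database_id)  AS database_name,
       SUM(size) * 8 / 1024  AS data_mb
FROM   sys.master_files
WHERE  type_desc = 'ROWS'    -- data files only; log files have type_desc = 'LOG'
GROUP BY database_id
ORDER BY data_mb DESC;
```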

  • Ravi Prashanth Lobo-275382

    Right there with Babe

    Points: 792

    Good article! I learned some new things.

  • Arindam Banerjee

    Say Hey Kid

    Points: 705

    Hi Vladan,

    The general idea is to get the data into cache whenever the need arises. It certainly does not mean that if I have a TB of data file I must have that much RAM, but when I have a database with frequent performance problems and I want to troubleshoot the issue, I would certainly look to the rule of thumb. If I have an 8GB data file and 8GB of memory, I would assume that memory is OK and move on to the other aspects, like CPU, network or disk, for troubleshooting.

    On the other hand, I have given some perfmon counters which will help you identify whether you really require additional memory. The rule of thumb is just a hint of what to look for when you do not have adequate time for investigation.
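    (On SQL Server 2005 and later, the same counters perfmon exposes can also be read from T-SQL — a sketch using sys.dm_os_performance_counters; sustained low Page life expectancy values are a common sign the buffer pool is undersized.)

```sql
-- Quick memory-pressure check without opening perfmon.
-- object_name is a fixed-width nchar, so LIKE is used to ignore trailing padding.
SELECT counter_name, cntr_value
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE '%Buffer Manager%'
  AND  counter_name IN ('Page life expectancy', 'Buffer cache hit ratio');
```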

    Hope this clarifies the issue.

    Thanks for taking time to read and give your valuable feedback.

    ~Arindam.

  • Arindam Banerjee

    Say Hey Kid

    Points: 705

    Hi Ravi,

    Thanks for your time. Glad to know that it added to your knowledge. I hope to write more on capacity management; please check those articles too.

    Thanks,

    ~Arindam.

  • Ravi Prashanth Lobo-275382

    Right there with Babe

    Points: 792

    I have read both your articles. The second one is more useful for me. I will be waiting for future articles from you.

    Thank you.

  • sergio furtado-343184

    Valued Member

    Points: 51

    Ravi, thanks for the time spent writing this. However, to be honest, I would need some more data for the article to really hit the spot. For instance, how much memory does the SQL Server kernel actually take (just the core functionality, disregarding any user DB)? And how much do instances take? I know it's wrong to assume that two instances with the exact same data and load will take 2x the memory. And would you be able to quantify how much improvement (if any) can be expected from running SQL Server as a background process in an OLTP scenario? I really don't see a correlation here; you might be assuming something I am not.

    Totally agree with you on the collecting data for baselines,

    thanks for posting article,

    you're a braver man than me,

    SNMF

  • Ravi Prashanth Lobo-275382

    Right there with Babe

    Points: 792

    >>Ravi, thanks for the time spent writing this. However I would

    Sergio, I think you are talking to the wrong guy! It should be ...Arindam....

  • Arindam Banerjee

    Say Hey Kid

    Points: 705

    Hi Sergio,

    Thanks for reading this article. The first question you ask is really a tricky one. I may not be correct in commenting on it because I did not find any dependable resource, but on my server the kernel takes approximately 80-150 MB to run. However, you can adjust MIN SERVER MEMORY and MAX SERVER MEMORY and see how much memory allocation is optimal for your requirements. Once again, this comment is based solely on my observation and may differ from the actual figures.
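    (Adjusting those two options looks roughly like this — a sketch using sp_configure; the 1024/4096 MB values are placeholders, not recommendations, so pick values that fit your server.)

```sql
-- Cap the buffer pool so the OS and other applications keep some headroom.
-- 'min/max server memory (MB)' are advanced options, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'min server memory (MB)', 1024;  -- placeholder value
EXEC sp_configure 'max server memory (MB)', 4096;  -- placeholder value
RECONFIGURE;
```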

    For the second question, every connection takes 64 bytes of memory. It is not wrong to assume that two instances connected to the same server and database will consume 2x the memory! Actually, if you have n connections open to any server, the server will reserve 64n bytes of memory. This brings us to the important consideration of connection memory utilised, and we have a counter for that too in our favourite perfmon! 🙂
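    (That counter can also be read from T-SQL on SQL Server 2005+ — a sketch pulling the Memory Manager counters from sys.dm_os_performance_counters.)

```sql
-- Connection memory as reported by the Memory Manager counters, in KB.
SELECT counter_name, cntr_value AS kb
FROM   sys.dm_os_performance_counters
WHERE  object_name LIKE '%Memory Manager%'
  AND  counter_name IN ('Connection Memory (KB)',
                        'Total Server Memory (KB)',
                        'Target Server Memory (KB)');
```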

    The answer to the third question is to leverage your operating system architecture. In Windows, we can run an application as a foreground process or a background process. If an application is not accessed in the foreground, it is convenient for both the OS and the app to run it in the background. If you want to run any foreground process on that server, other than SQL apps, the OS will manage memory more efficiently. Once again, I am not in a position to comment on the actual benefit, but I can certainly say it would be apt for the process, and you would get the benefits in certain scenarios as discussed above.

    Hope this clarifies some part of your doubts. Please do write if you have more questions.

    Sincerely,

    ~Arindam (not Ravi) 🙂 

  • Ken Shapley

    SSC-Addicted

    Points: 458

    "Generally, the rule of thumb is to have as much as RAM as your data file is."

    Did you come up with this rule yourself? How many databases have you worked with in your career?

  • Arindam Banerjee

    Say Hey Kid

    Points: 705

    Hi Ken,

    I did not come to this conclusion by myself. It came out of a discussion with SQL Server MVP Vinod Kumar at a recent Tech-Ed session. I have explained it in the previous post.

    Thanks,

    ~Arindam. 

  • Ravi Prashanth Lobo-275382

    Right there with Babe

    Points: 792

    Hey, even I have heard about this rule.

    This is NOT a hard and fast rule. What it means is: if you already have more RAM than your DB size, then increasing the RAM may not help!

    Do you also want to know how many DB's I have worked on? 😉

  • Marios Philippopoulos

    SSC Guru

    Points: 57030

    Does the "Process\Working Set" counter include AWE memory utilization?

    I don't think it does.

    On my machine it reaches 700,000,000 (700 MB), which is way below the 13.2 GB allocated using AWE.
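    (AWE allocations can be seen per memory clerk rather than in the process working set — a sketch using sys.dm_os_memory_clerks; note the awe_allocated_kb column exists in SQL Server 2005/2008 but was removed in SQL Server 2012, where the clerk columns were consolidated into pages_kb.)

```sql
-- AWE memory is tracked per memory clerk, not in Process\Working Set.
SELECT type,
       SUM(awe_allocated_kb) / 1024 AS awe_mb
FROM   sys.dm_os_memory_clerks
GROUP BY type
HAVING SUM(awe_allocated_kb) > 0
ORDER BY awe_mb DESC;
```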

    __________________________________________________________________________________
    SQL Server 2016 Columnstore Index Enhancements - System Views for Disk-Based Tables
    Persisting SQL Server Index-Usage Statistics with MERGE
    Turbocharge Your Database Maintenance With Service Broker: Part 2

  • Marios Philippopoulos

    SSC Guru

    Points: 57030

    Ken Shapley (7/17/2006)


    "Generally, the rule of thumb is to have as much as RAM as your data file is."

    Did you come up with this rule yourself? How many databases have you worked with in your career?

    This does make sense. It may not be feasible with today's technology and the current cost of RAM, but it will likely be reality in a few years' time. The 64-bit platform was an exotic curiosity a few years back, but not any more.


  • Scott Whigham

    Grasshopper

    Points: 21

    Vladan (7/17/2006)


    I am a bit surprised by some things in this article... for example:

    "Generally, the rule of thumb is to have as much as RAM as your data file is."

    You are right to be "surprised" (I was too). There are so many things peculiar about that statement:

    • What happens when you have a 500GB database?
    • What about when I have 200 50MB databases on my system?
    • What about databases with multiple data files?
    • How does FILESTREAM affect this recommendation?

    I understand trying to simplify the planning process, and maybe that's what he was going for? It certainly can't hurt to have that much RAM, but I've never seen that as a recommendation from anyone before.

    What we care about is maintaining enough RAM to cover connections, plans, and the buffer cache (generally speaking). Saying that "the rule of thumb is to have as much as RAM as your data file is" suggests that SQL Server will load your entire database into memory and serve it from RAM - which it won't (at least not without you querying/using the actual data).

    RAM recommendations need to be made per app+db - OLAP, OLTP, or a combo. IMO it's hard to guesstimate an unknown database's memory requirements. If you don't tell me how big it is, what it is used for, how it is loaded, how many users there are at launch vs. one year later, etc., I don't know anyone who can accurately provide that info.

    I'll give you an example - I asked earlier, "What happens when you have a 500GB database?" Can you estimate the memory requirements for that database? I certainly can't. There are massively different needs depending on how it is used, etc. Maybe you do need 500GB of RAM - I don't have a clue.
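    (Rather than sizing RAM to the data file, you can measure how much of each database is actually being served from cache — a sketch using the SQL Server 2005+ DMV sys.dm_os_buffer_descriptors, where each buffer page is 8 KB.)

```sql
-- How much of each database currently lives in the buffer pool, in MB.
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS buffer_mb
FROM   sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY buffer_mb DESC;
```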

    ========================================================

    I have about 1,000 video tutorials on SQL Server 2008, 2005, and 2000 over at http://www.learnitfirst.com/Database-Professionals.aspx
