RAM on the SQL Instance

  • Hi everybody,

    I have a 500 GB, highly transactional database on a SQL Server 2019 Enterprise instance, running on a Windows Server 2019 box with two Intel Xeon 2.20 GHz processors. All the other databases on the instance are tiny and consume negligible resources.

    This is a SQL Server box; nothing else is running on it.

    How much RAM do I need to allocate on the server? I do not need an exact number, just an approximate range that would make sense to you.

    Your advice on this question will be greatly appreciated.

    Thank you.

    Alex

     

    There is no direct connection between database size and the amount of memory required. It depends only on the workload. If the database mainly writes, reads very little, the reads are small, and the queries are efficient, you could perform optimally with 16 GB.

    I do try to ensure there is at least 8 GB of RAM per core in a box, preferably 16. There were/are some oddities with how soft-NUMA manages memory that can reduce your memory grants below what you would expect if you have too little memory per core and enough cores for soft-NUMA to divide it into small pools. You can see how your box has been carved up with the queries below.
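    A minimal sketch of how to check that (assuming SQL Server 2012 or later, where these DMVs exist) - it shows the scheduler nodes soft-NUMA has created and the memory nodes behind them:

      -- Nodes SQL Server sees, with how many schedulers each one got.
      SELECT node_id, node_state_desc, memory_node_id, online_scheduler_count
      FROM sys.dm_os_nodes;

      -- The memory nodes themselves, with reserved virtual address space.
      SELECT memory_node_id, virtual_address_space_reserved_kb
      FROM sys.dm_os_memory_nodes;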

     

    Another thing to add is that each SQL Server instance needs SOME RAM. If you have max server memory set too low on those instances, you can run into issues.

    I do agree with CreateIndexNonclustered that the size of the RAM depends on the workload though.

    What I would start with is looking at max server memory for each of your SQL instances, and then use that to determine how much you are currently using. If these are set to the default (2,147,483,647 MB - about 2 PB - if I remember right), you are going to have problems. Make 100% sure you have that set to a sensible number and that you are not using all of the RAM; see the check below.
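    A quick way to check (sys.configurations exists on every supported version):

      -- The default of 2147483647 MB effectively means "unlimited".
      SELECT name, value_in_use
      FROM sys.configurations
      WHERE name = N'max server memory (MB)';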

    If you have things set at the default, I would start by looking at how much memory each instance is using and watch the value over a period of time to get a baseline. Once you have the baseline, add some extra for growth and use that number as max server memory on the instance. Once you have good max memory numbers for each instance, you can calculate how much RAM you need overall and ask for that, plus some overhead. One way to take those readings is sketched below.
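    One hedged way to take those readings - these two performance counters are exposed by every instance, and sampling them over time gives you the baseline:

      -- 'Total Server Memory' is what the instance is using right now;
      -- 'Target Server Memory' is what it would like to have.
      SELECT counter_name, cntr_value / 1024 AS value_mb
      FROM sys.dm_os_performance_counters
      WHERE counter_name IN (N'Total Server Memory (KB)',
                             N'Target Server Memory (KB)');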

    For overhead, I would add 4 GB for the OS and 2-4 GB per possible simultaneous user session. By "possible user session" I mean: if you ONLY ever have a single DBA logging into the system, then I'd add 2-4 GB for user overhead on top of the OS overhead. If you have 10 DBAs at your company and there are scenarios where 5 of them log into the server at a time, I would add 10-20 GB for those user sessions.

    You would also want to add extra for anything else running on the system: antivirus, SSIS, SSRS, etc. All of those need some memory, and you don't want any of them crashing or failing due to out-of-memory errors.

    The above is all just my opinion on what you should do.
    As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • Hi guys,

    With respect to both replies, I asked a question because I needed a straight answer.

    To clarify: I have a Windows Server 2019 box with two 2.20 GHz processors and eight cores, running only one SQL Server 2019 Enterprise instance with a single substantial, highly transactional user database. This database has about 400 GB of used space and millions of reads and writes per transaction.

    This is a SQL box; no other programs are running on it.

    What would your recommendation be for RAM on this server?

    Again, I do not need an exact number, just an approximate range that would make sense to you.

    Thank you for all your help and cooperation. This is really appreciated.

    Alex

     

    There is no "straight answer" because it depends heavily on your usage. If the system is currently running, look at the max server memory value and at how often the instance sits at full memory, waiting for memory to be released before a query can run - one way to spot that is sketched below. If it is not hitting the current max memory value, then you don't need "more". Asking for a straight answer is like asking "how much money do I need to buy a house?" - there are a ton of factors, so nobody can give you one.
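    One hedged way to spot that waiting (this counter exists on every instance; sustained nonzero values are the warning sign):

      -- Queries currently stalled waiting for a workspace memory grant.
      -- Zero most of the time means RAM is probably not your bottleneck.
      SELECT cntr_value AS memory_grants_pending
      FROM sys.dm_os_performance_counters
      WHERE counter_name = N'Memory Grants Pending';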

    There are tons of factors that can impact how much memory to ask for. One such factor is in-memory (memory-optimized) tables. Do you have any? Do you have plans to use them? If so, the query below shows what they cost.
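    If you do use In-Memory OLTP, this DMV (available since SQL Server 2014) shows what the memory-optimized tables in the current database cost you:

      -- Memory allocated vs. actually used, per memory-optimized table.
      SELECT OBJECT_NAME(object_id) AS table_name,
             memory_allocated_for_table_kb,
             memory_used_by_table_kb
      FROM sys.dm_db_xtp_table_memory_stats;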

    Plus, we have no idea how "busy" your highly transactional database is, or how much headroom you want for growth. Is that 400 GB database going to double in size per year, or is it going to remain pretty stable at 400 GB? And is that 400 GB the data file size or the data size? If it is the file size, how much data is actually in there? The query below separates the two.
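    A small sketch for separating the two - run it in the database in question (FILEPROPERTY and sys.database_files are standard):

      -- size is in 8 KB pages, so * 8 / 1024 converts to MB.
      SELECT name,
             size * 8 / 1024                            AS file_size_mb,
             FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS space_used_mb
      FROM sys.database_files
      WHERE type_desc = 'ROWS';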

    My recommendation is to get as much as you can within reason. If you want a hard number, I would say 804 GB. It is a semi-made-up number: it lets you hold the entire database in memory twice (so you can do a ton of joins that duplicate the data in memory) plus 4 GB for the OS - 2 x 400 + 4 = 804. Is this reasonable? Probably not; 804 is a very strange number, and 1024 GB would make things nicer, especially when trying to buy the RAM (unless this is a VM, in which case you can allocate whatever you feel like). My opinion, though: 400 GB is PROBABLY overkill for the database, but without being on site at your workplace, doing analysis on how much memory the system is using, and then estimating how much to add for growth, I'm going to give you an unrealistically large number.

    If you want a realistic number, I'd talk to your on-site DBA; if you don't have one, I'd hire one. The size of the database has little correlation with how much memory the SQL instance needs.

    I had a 2 TB database with a mixed read/write load that got by pretty happily with 25 GB of RAM allocated to it, without much end-user complaint. The reason: the selects were on a small, well-indexed table (under 1 million rows, just a 3-column table of INT, VARCHAR(31), and VARCHAR(255)), and the inserts were mostly for historical purposes - nobody looked at that data. The result sets coming back were nicely filtered, only a few rows (I think 6 was the most), so there was not much need for a ton of RAM on that instance. It MAY have benefited from more, but we could run for a while without SQL hitting the 25 GB max memory we had set.

    We did analysis on the database and watched memory usage to get a baseline, then set max server memory to match the usage pattern. The only time we really saw spikes in memory usage was during index maintenance, and it wasn't a big deal if that ran slower, as we had downtime. So after careful analysis and estimating room for growth, we set max memory to 25 GB and tried the software out on the test system to make sure there were no major performance impacts. When things seemed good, we replicated the change on live and watched it for a bit until we were comfortable, then left the monitoring tools to watch it instead of proactive monitoring. That system is still running today; we have trimmed some of the fat, but it is still at 25 GB max memory with no complaints.

    Another system with a much smaller database (400 GB) has a TON of read/write activity across a lot of tables, and it is a 3rd-party tool, so there is no hope of doing much tuning if we want to keep support. That beast is running at 100 GB, and it would happily eat up more if we had it.

    If you want someone with no visibility into your system to give you advice on how to set it up, they are going to give you large numbers. Working with my IT department, I find it is much easier to have too much and scale back if it goes unused than to not have enough and ask for more. There are hardware limits on max memory; I've never heard of a "min memory" limit on hardware.

    The above is all just my opinion on what you should do.
    As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • @aer:

    SQL Server has resource-hungry processes; add RAM and it will take it... and wait for more.

    Every environment varies; as others have suggested, there is no straight answer.

    Are you experiencing any issues? How much RAM do you have now - can you elaborate?


  • AER wrote:

    Hi guys, With respect to both replies, I asked a question because I needed a straight answer.

    To clarify: I have a Windows Server 2019 box with two 2.20 GHz processors and eight cores, running only one SQL Server 2019 Enterprise instance with a single substantial, highly transactional user database. This database has about 400 GB of used space and millions of reads and writes per transaction.

    This is a SQL box; no other programs are running on it.

    What would the recommendation be for RAM on the server? Again, I do not need an exact number, just an approximate range that would make sense to you.

    Thank you for all your help and cooperation. This is really appreciated. Alex

    as mentioned by others, it depends - but let's look at those 400 GB: is this size used by all queries? e.g. do all rows in all tables get processed daily, multiple times per day, by all the different processes?

    or do you have a few "historical" tables, taking up for example 300 GB out of those 400 GB, which are only used once a month? or even daily, but only in a nightly batch when no one else is using the system? (the query below gives a rough view of which tables are actually "hot")
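    a hedged sketch using standard DMVs (run it in your database; it can be slow on a large buffer pool) that shows which tables currently occupy the buffer pool - a rough way to tell "hot" tables from historical ones:

      -- Buffer pool pages per table in the current database, largest first.
      SELECT OBJECT_NAME(p.object_id) AS table_name,
             COUNT(*) * 8 / 1024      AS buffer_mb
      FROM sys.dm_os_buffer_descriptors AS b
      JOIN sys.allocation_units AS au
        ON b.allocation_unit_id = au.allocation_unit_id
      JOIN sys.partitions AS p
        ON (au.type IN (1, 3) AND au.container_id = p.hobt_id)
        OR (au.type = 2       AND au.container_id = p.partition_id)
      WHERE b.database_id = DB_ID()
      GROUP BY p.object_id
      ORDER BY buffer_mb DESC;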

    with regards to your "queries" - are they all single-row access, or do your queries normally require range scans and operations on blocks of rows (e.g. inserts/updates/deletes)?

    lots of things should factor into your decision - most important of all is how your system behaves: how often data gets ejected from the cache and has to be loaded from disk to satisfy the queries, and so on. for that you need to monitor your system and use some of the available DMVs and performance counters, for example the one below.
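    one such counter (it exists on every instance; treat any threshold as a rule of thumb, not gospel) is Page Life Expectancy - how long a page survives in the buffer pool. sustained low or falling values mean data is being pushed out of cache and re-read from disk:

      -- Page Life Expectancy in seconds, from the Buffer Manager object.
      SELECT object_name, counter_name, cntr_value AS seconds
      FROM sys.dm_os_performance_counters
      WHERE counter_name = N'Page life expectancy'
        AND object_name LIKE N'%Buffer Manager%';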

    but on a server with 2 processors and a SQL Server Enterprise license, I would likely have 200 GB or more - how much more depends on the above.

  • AER wrote:

    Hi guys, With respect to both replies, I asked a question because I needed a straight answer.

    To clarify: I have a Windows Server 2019 box with two 2.20 GHz processors and eight cores, running only one SQL Server 2019 Enterprise instance with a single substantial, highly transactional user database. This database has about 400 GB of used space and millions of reads and writes per transaction.

    This is a SQL box; no other programs are running on it.

    What would the recommendation be for RAM on the server? Again, I do not need an exact number, just an approximate range that would make sense to you.

    Thank you for all your help and cooperation. This is really appreciated. Alex

    You were given the straight answer. None of the background information you provided is even remotely useful in determining memory requirements. Even if you did give us some of the necessary metrics, it would still not be enough to provide more than a WAG at memory requirements. Only someone with hands on the keyboard for a few days could determine that amount.

    One user and a single instance on the box are irrelevant. A single user could consume terabytes of RAM against your 500 GB DB, or it could run entirely optimally with less memory than is in your laptop.

    I gave you the generalization: a minimum of 8-16 GB per core as a starting point (for example, if your two processors give you 16 cores total, that is roughly 128-256 GB). After that, it is time to be a DBA or sysadmin and do the work to determine what the system actually needs.
