How much RAM is required for a server?

  • How much RAM is required for a server having 15-20 million records, on which insert and update operations are executed?

    How do you calculate RAM/hardware for such scenarios?

  • Vinayak15580 - Tuesday, August 14, 2018 1:40 AM

    RAM is one of the cheapest things you can buy for your server; it's vastly cheaper than high-performance SSDs and CPUs (and the latter also come with licensing requirements).

    Unfortunately, you've given us next to no details on your usage. 15-20 million records doesn't tell us a lot. What is a "record"? A row, a column, a single data item within a single row and column? What we really need to know is the size of your database(s) (how much storage space they use). A database with 20 million rows(?) could be tiny compared to one with only 500,000, if the data being stored in the larger one is small.

    Here are a few questions for you to consider; however, please expand on these (think about what you need the SQL Server to do):

    • What version and edition of SQL Server?
    • How large are your databases?
    • What is the intensity of DML statements?
    • Will those statements be mainly returning large or small datasets?
    • Are the vast majority complex or simple statements?
    • Will you be running any other services on the server? For example:
      • SSRS
      • SSIS
      • SSAS
      • Redgate tools
      • Other services
    • Have you already determined your CPU specs?
    • Is the server going to be on a VM (where RAM can be more easily changed, based on the host)?

    As I said, those are just a few questions to answer. You know, better than anyone here, what you need from your SQL Server.

    That said, if you're using SQL Server Standard Edition, then you don't need to go anywhere near as huge. If you're just using the database engine, that's "capped" at 128GB (it can use a little more than that, due to how things like tables/databases are loaded into RAM in more recent versions), so you don't need to get a silly amount. Thus, unless you're doing anything else "chunky" on the server, you would never need more than, say, about 150GB (which gives the OS more than enough, alongside the DBMS using that "little" extra).
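
    If you aren't sure how large your databases actually are, a quick sketch like this will total the file sizes per database (sizes in sys.master_files are stored as 8 KB pages):

        -- Data and log sizes per database, in GB.
        SELECT DB_NAME(mf.database_id) AS database_name,
               mf.type_desc,                                    -- ROWS (data) or LOG
               SUM(CAST(mf.size AS bigint)) * 8 / 1024 / 1024 AS size_gb
        FROM sys.master_files AS mf
        GROUP BY mf.database_id, mf.type_desc
        ORDER BY size_gb DESC;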

    Thom~

    Excuse my typos and sometimes awful grammar. My fingers work faster than my brain does.
    Larnu.uk

  • Thom A - Tuesday, August 14, 2018 2:00 AM

    Thank you, sir, for your reply.
    Actually, I am facing a high CPU and memory utilization issue.
    There are no other services installed or running on the same server. The setup is a cluster.
    The size of the database is more than 300 GB, with 15 million records.
    It also seems that memory utilization is always shown as more than 86% for the SQL Server service.
    The version is SQL Server 2016 Standard Edition.

  • Vinayak15580 - Tuesday, August 14, 2018 3:32 AM

    High CPU usage could have causes other than a lack of memory. Have you checked for missing indexes? These can make a massive difference to CPU usage by stopping table scans. Also, have you identified which SQL is causing the high CPU usage? How much memory do you currently have installed? If you already have a reasonable amount, like 32 GB, it's worth looking at other performance causes before upgrading.
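
    If you haven't identified the offending SQL yet, a minimal sketch like this (using the plan-cache DMVs, so it only covers what's still cached) lists the statements that have consumed the most CPU:

        -- Top CPU consumers since their plans were cached.
        SELECT TOP (10)
               qs.total_worker_time / 1000 AS total_cpu_ms,
               qs.execution_count,
               qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
               SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                         ((CASE qs.statement_end_offset
                                WHEN -1 THEN DATALENGTH(st.text)
                                ELSE qs.statement_end_offset
                           END - qs.statement_start_offset) / 2) + 1) AS statement_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
        ORDER BY qs.total_worker_time DESC;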

  • So how much RAM do you currently have and how many CPUs are running at 86%?

    Also, a 300GB database with 15-20 million rows (you still haven't stated what the nature or size of those rows is) probably is NOT going to be fixed by the addition of RAM.  You need to find out what the code is doing, because even a great machine can be crippled by things like non-SARGable predicates, bad or inadequate indexing, poorly maintained statistics, datatype mismatches (especially for criteria in JOINs and WHERE clauses, because they become non-SARGable), connection strings (IF YOU HAVE CONNECTION STRINGS THAT HAVE M.A.R.S. ENABLED, CHANGE THEM ALL TO TURN IT OFF NOW!!!!  MARS DAMNED NEAR KILLED US!!!), bad code in the form of scalar and mTVF functions, cascading functions and views, long recompiles for high-hit-ratio code that always recompiles because it's not properly parameterized, and a ton of other junk.
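
    To illustrate just one of those items, here's the classic non-SARGable pattern and its rewrite (a quick sketch with hypothetical table and column names):

        -- Non-SARGable: the function wrapped around the column means an index
        -- on OrderDate can't be seeked; every row gets evaluated.
        SELECT OrderID FROM dbo.Orders WHERE YEAR(OrderDate) = 2018;

        -- SARGable rewrite: the column is left bare, so an index seek is possible.
        SELECT OrderID
        FROM dbo.Orders
        WHERE OrderDate >= '20180101'
          AND OrderDate <  '20190101';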

    Remember, "Performance is in the code".  Start there, first.  If you don't know how to do that, I recommend that you find a good SQL monster consultant because these types of problems can be difficult to find but, once found, are usually fairly easy to fix.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
        Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Piling on.

    Focus on gathering metrics right now. What are your wait statistics like? Is the server waiting on memory? Which queries are waiting? What is making those queries wait?

    Identify the causes of slow performance first, then start working on addressing those causes. In the majority of cases (around 80%), it's either bad code or missing/incorrect indexing. Yes, it can also be hardware issues, and you may need to address that, but so far you have a small database (300 GB is nothing these days) and some number of rows (15 million is a lot if we have to put all of them into memory for every query, but the problem there is not the number of rows, it's the queries). We don't yet know how many transactions are running, performing what actions, with what wait statistics and what kind of performance metrics, and then, finally, what kind of behaviors are evidenced in the execution plans.
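
    As a starting point for those wait statistics, a minimal sketch like this shows where the server has spent its time waiting since the last restart (the excluded wait types are a short, not exhaustive, list of benign system waits):

        SELECT TOP (10)
               wait_type,
               waiting_tasks_count,
               wait_time_ms,
               wait_time_ms - signal_wait_time_ms AS resource_wait_ms
        FROM sys.dm_os_wait_stats
        WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'CHECKPOINT_QUEUE',
                                'XE_TIMER_EVENT', 'BROKER_TO_FLUSH',
                                'REQUEST_FOR_DEADLOCK_SEARCH')
        ORDER BY wait_time_ms DESC;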

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • I'd say that you need to tell us how much RAM and how many CPUs you have first.  If you currently have, for example, 128GB of RAM and 16 CPUs, then you can usually look at things like wait stats all day and the only thing it will tell you is that your code is really taking too long and using too many resources.

    So, how much RAM and how many CPUs do you currently have?  Is the system predominantly OLTP or batch processing, or a mix of both?  You said you wanted to know because of insert and update scenarios... what is the machine actually doing, OLTP- or batch-wise?

    --Jeff Moden

  • Well, for initial sizing, we typically deploy some standard based on requirements and what we have seen in other deployments. We then test and tweak in non-production. Predictions don't always go as planned, so always do extensive testing on non-prod.

    How did you come to the conclusion that you have a "memory insufficiency"? Did you see this by checking Performance Monitor and seeing SQL Server as the biggest memory user? If so, this doesn't necessarily indicate that SQL itself is starved for memory. You probably have the instance "max memory" setting left at its default, which will use as much memory as the OS will allow. I've seen this bring servers to their knees. Is this server in production?

    Can you make sure your max and min memory settings are configured? You typically want to leave 2-4 GB available to the OS, depending on what else you may be doing with the box. Is this server dedicated specifically to SQL Server? SQL Server should typically take up most of the physical memory on a dedicated box.
    https://docs.microsoft.com/en-us/sql/database-engine/configure-windows/server-memory-server-configuration-options?view=sql-server-2017
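
    For example, a sketch like this would cap the instance on the poster's 32 GB box, leaving roughly 4 GB for the OS (the exact figures here are an assumption; adjust them for your own box):

        EXEC sys.sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sys.sp_configure 'min server memory (MB)', 4096;    -- 4 GB floor
        EXEC sys.sp_configure 'max server memory (MB)', 28672;   -- 28 GB cap
        RECONFIGURE;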

    High physical memory usage does not by itself indicate that SQL is starved for memory. It is merely an indication that the OS has reserved that memory for SQL. If you are running SQL Server on Windows, you can check for logical memory bottlenecking using various Performance Monitor metrics. This is a great article that explains everything and how to set up the perfmon trace (sorry if there is a SQL Server Central one, lol):

    https://www.sqlshack.com/sql-server-memory-performance-metrics-part-1-memory-pagessec-memory-page-faultssec/
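
    One of the counters that article covers, Page Life Expectancy, is also visible from inside SQL Server; a persistently low or steadily falling value is a common sign of buffer pool pressure (there's no universal "good" threshold, so treat this as a rough check):

        SELECT [object_name], counter_name, cntr_value AS ple_seconds
        FROM sys.dm_os_performance_counters
        WHERE counter_name = 'Page life expectancy'
          AND [object_name] LIKE '%Buffer Manager%';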

    But really, you should start at the query level.

    1. Do you have a list of inserts, updates, deletes that will frequently be run by the application?
    2. Do you have a list of selects and aggregations that will typically be run by the application?

    If you do, you should get that list and make sure those statements are properly indexed. Also review the table structure and make sure it's optimal. Also, are tables fragmented, statistics updated? Heaps or clustered indexes? Problems at the DB level can easily inflate memory and CPU requirements. Also make sure your queries aren't doing anything crazy. I personally hate exclusive locking with subqueries, or anything that is difficult to predict performance-wise, but sometimes it's unavoidable. Throwing more resources at these kinds of issues should never be the first, knee-jerk reaction. I don't want to make this a huge reply, but there are a lot of readables out there that can help you with optimizing SQL logically.
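
    For the fragmentation/heap questions above, a cheap first pass looks something like this sketch (LIMITED mode keeps the scan light; run it in the database you're investigating):

        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.index_type_desc,             -- HEAP = no clustered index
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id
         AND i.index_id  = ips.index_id
        WHERE ips.page_count > 1000              -- ignore tiny objects
        ORDER BY ips.avg_fragmentation_in_percent DESC;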

    I would certainly start at the query level. You can run a SQL Profiler trace with a 2-second duration filter. Optionally, filter on CPU at 500, but make sure to check "do not include rows with no data" for that one. Your thresholds will vary depending on your baseline. If you run this and do some testing, you can find the queries that are taking a while and tune them individually. If it's a production box, be wary that Profiler traces can themselves cause performance issues. Extended Events is a much lighter tracing solution, but it requires a bit more configuration and know-how.
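
    A minimal Extended Events sketch roughly equivalent to that 2-second Profiler filter would look something like this (the session and file names are made up; duration for this event is in microseconds):

        CREATE EVENT SESSION LongRunningQueries ON SERVER
        ADD EVENT sqlserver.sql_statement_completed
        (
            ACTION (sqlserver.sql_text, sqlserver.database_name)
            WHERE (duration > 2000000)           -- longer than 2 seconds
        )
        ADD TARGET package0.event_file (SET filename = N'LongRunningQueries');

        ALTER EVENT SESSION LongRunningQueries ON SERVER STATE = START;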

    In conclusion, make sure there are no logical bottlenecks in your DB structures and queries first. Make sure everything is efficient there, THEN check system resources using the SQL Server perfmon metrics.

    I hope this helped.

  • eshults5 - Sunday, August 19, 2018 10:18 AM

    Heh... the OP won't even tell us what the current amount of RAM or number of CPUs is.  It also won't be "2 second" queries that are the problem.  It's usually things like queries that don't have reusable execution plans, that end up compiling every time they run and then take 2 to 30 seconds to recompile, and it'll also be the "death by a thousand cuts" caused by poor programming on the front end and the bad code that ORMs generate, with a ton of things wrong like non-SARGable predicates, illogical WHERE criteria, and the propensity to join 30 tables and then use DISTINCT to "eliminate the duplicates". 😀
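
    One rough check for that "compile every time" pattern is to count the single-use ad hoc plans sitting in the cache; a big number here usually points at unparameterized code (just a sketch, not a full diagnosis):

        SELECT COUNT(*) AS single_use_plans,
               SUM(CAST(size_in_bytes AS bigint)) / 1048576 AS cache_mb
        FROM sys.dm_exec_cached_plans
        WHERE usecounts = 1
          AND objtype = 'Adhoc';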

    --Jeff Moden

  • Vinayak15580 - Tuesday, August 14, 2018 1:40 AM

    Hi there,

    You say 15-20 million records (15-20 x 10^6 records), but you say nothing about the record structure. It's not the same thing if each record has only 1 field or, say, 999 fields. It's also important to know in advance which data types are used (is everything INTs?).
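
    To make that concrete with some back-of-the-envelope arithmetic (purely illustrative numbers): 15 million rows of a single 4-byte INT is only about 60 MB of raw data, while 15 million rows of ~2,000 bytes each is about 30 GB, a 500x difference for the same "record count". Something like DATALENGTH can tell you what your values actually weigh (hypothetical table and column names):

        SELECT AVG(CAST(DATALENGTH(SomeVarbinaryColumn) AS bigint)) AS avg_bytes
        FROM dbo.SomeTable;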

    Cheers,

  • Jeff Moden - Sunday, August 19, 2018 1:24 PM

    You're bringing back terrible memories for me. Who knew parameter sniffing could be life or death for some applications? Lmao. But hey, that **** show is how I learned SQL in the first place!

    And yeah... just last week I ran into an issue with a SQL Server's CPU spiking to 100%, and the main offender was an implicit conversion that had apparently been an issue for some time before we were asked to look into it. Yikes.
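
    For anyone curious, that trap looks something like this (hypothetical names): comparing a varchar column against an N'...' (nvarchar) literal can force a CONVERT_IMPLICIT on the column for every row, which typically turns an index seek into a scan and burns CPU:

        -- AccountNumber is varchar: the N'' literal forces an implicit conversion.
        SELECT CustomerID FROM dbo.Customers WHERE AccountNumber = N'ABC123';

        -- Matching the column's type keeps the predicate seekable.
        SELECT CustomerID FROM dbo.Customers WHERE AccountNumber = 'ABC123';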

  • Dear all, thank you for your inputs. They're really helpful to me.
    I have 8 CPUs and 32 GB of RAM set on my machine. I run insert and update transactions on the tables. The major data type is varbinary; the rest of the columns have varchar and int data types. There are 40 columns in total (5, including the varbinary ones, plus 35).
    The operations on these tables are only inserts and updates. Indexes are already created on the required columns.
    The clustered index is created on an integer column.

  • Vinayak15580 - Wednesday, August 22, 2018 4:48 AM

    Basically, that means less than 4GB per CPU, especially since you need to hold back some of that memory to keep from starving the operating system.  Even with good code, that would seem to be a seriously underpowered machine memory-wise.  Your code needs to be as tight as a drum if you're going to run with that little RAM for the data sizes you've claimed.

    To get out of your current jam, my recommendation would be to upgrade to (at least) 128GB of RAM and hold 16GB of that back for the operating system.  Then, start working on the code that still has problems even after such a hardware upgrade... and, yeah... there's going to be more of it than you would expect.

    --Jeff Moden
