[Discussion] Database performance and server capacity

  • Dear DBA fellows:

    In my organization we have 100+ databases hosted on 20+ servers. Some servers have one database, some have more. As DBAs, we are often asked how many more databases we can put onto one SQL Server without affecting the performance of the databases already on that server. Or, sometimes we are asked: if this SQL Server hosts more databases, how can we ensure the performance of the databases already on it will not be affected? We also get blamed when database performance goes bad because a new database was moved to the same server.

    I have an answer myself: if extra databases/load are put onto a SQL Server, the performance of the existing databases will certainly be affected to some degree. But this should not stop us from putting more databases on the server. Each database should have a performance threshold. As long as performance is still at an acceptable level and the server still has spare capacity (measured by CPU, memory, disk, and network counters), there should not be a problem loading more databases onto the server.
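
    For the spare-capacity check, a quick sketch from inside SQL Server itself is to spot-check a few headroom counters in `sys.dm_os_performance_counters` (this complements, not replaces, OS-level Perfmon counters; the counters picked here are just illustrative):

    ```sql
    -- Spot-check a few capacity/headroom counters from inside SQL Server.
    -- Note: Batch Requests/sec is cumulative since startup; sample it
    -- twice and take the delta over the interval to get a true rate.
    SELECT object_name, counter_name, instance_name, cntr_value
    FROM sys.dm_os_performance_counters
    WHERE (counter_name = 'Page life expectancy'
           AND object_name LIKE '%Buffer Manager%')   -- memory pressure
       OR counter_name = 'Batch Requests/sec'          -- overall load
       OR counter_name = 'User Connections';           -- connection load
    ```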

    This is my thought. How will you answer the questions?

    I would like to hear your opinions or ideas! Thanks for sharing! :-)

  • I usually answer that ultimately it's a guess, and we might need to experiment because it's hard to measure workload. I can get avg CPU needs, space needs, memory footprint, but I must guess to what extent I can share resources across two workloads.

    Space is easy, but I try to compare avg CPU and target memory for instances if the workload hasn't fully utilized the system. Then I guess, but I hedge by having a plan to "un-consolidate" if we have issues.

    There's a reason we do this in stages because it's hard to tell how many applications can be supported on each server, at a level acceptable to users.

  • Well, my experience involves memory issues more than anything. Everything runs smoothly, then you deploy a new application and you start seeing memory errors, usually down to buffer or non-buffer memory. I have noticed third-party applications coming in that use .dll files, and if there is no non-buffer memory left, they fail (you ask the app team for the command they are running, and when you run it in SSMS it turns out fine, no issues, because you are running it directly). Or the new application issues a BEGIN TRANSACTION and never commits, which fills up tempdb and the log file, which in turn fills up the disk, causing issues for all the other databases on that instance 😉 and it goes on...

    It's still a day-to-day challenge for me; we need to understand the behavior of a new application before just deploying its database.
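
    A quick way to catch the "BEGIN with no COMMIT" case described above is the classic `DBCC OPENTRAN` check, or the transaction DMVs for a view across all sessions (a sketch; run against the instance in question):

    ```sql
    -- Oldest active transaction in tempdb (the runaway-log scenario):
    DBCC OPENTRAN('tempdb');

    -- Or list open transactions across all sessions, oldest first:
    SELECT st.session_id,
           at.transaction_id,
           at.name,
           at.transaction_begin_time
    FROM sys.dm_tran_active_transactions AS at
    JOIN sys.dm_tran_session_transactions AS st
      ON at.transaction_id = st.transaction_id
    ORDER BY at.transaction_begin_time;
    ```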

  • Steve Jones - SSC Editor (9/22/2014)


    I usually answer that ultimately it's a guess, and we might need to experiment because it's hard to measure workload. I can get avg CPU needs, space needs, memory footprint, but I must guess to what extent I can share resources across two workloads.

    Space is easy, but I try to compare avg CPU and target memory for instances if the workload hasn't fully utilized the system. Then I guess, but I hedge by having a plan to "un-consolidate" if we have issues.

    There's a reason we do this in stages because it's hard to tell how many applications can be supported on each server, at a level acceptable to users.

    Thanks, Steve.

    But do you agree that "if extra databases/load are put onto a SQL Server, the performance of the existing databases will certainly be affected to some degree"?

    Will TempDB become a bottleneck in these situations? How do you monitor it? How do you know the limits of your TempDB?

    Thanks!

  • Blue Mancunian (9/22/2014)


    Thanks, Steve.

    But do you agree that "if extra databases/load are put onto a SQL Server, the performance of the existing databases will certainly be affected to some degree"?

    Will TempDB become a bottleneck in these situations? How do you monitor it? How do you know the limits of your TempDB?

    Thanks!

    Any change can affect instance performance. A growth in workload on one app/database can affect others, depending on how close you are to the limits of performance for your system.

    Tempdb is a bottleneck when its usage exceeds its capacity. It isn't necessarily related to adding another app. Changing code or queries, or a user running new ad hoc queries from Excel or Access can cause issues in tempdb. Tempdb is the place where all sorts of temporary system work occurs.

    You can monitor latency and waits associated with tempdb, but really I'd worry more about how well my queries were performing. It's the queries that ask for too much data, are inefficiently written, or are poorly indexed that cause spills to tempdb (usually) and then overload the instance.
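
    To put the latency-and-waits monitoring in concrete terms, a sketch using two standard DMVs (tempdb is always `database_id = 2`; acceptable latency thresholds depend on your storage, so none are hard-coded here):

    ```sql
    -- Per-file average I/O latency for tempdb:
    SELECT DB_NAME(vfs.database_id) AS database_name,
           vfs.file_id,
           vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
           vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
    FROM sys.dm_io_virtual_file_stats(2, NULL) AS vfs;

    -- Tempdb allocation contention usually surfaces as PAGELATCH waits
    -- (on PFS/GAM/SGAM pages such as 2:1:1, 2:1:2, 2:1:3):
    SELECT wait_type, waiting_tasks_count, wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type LIKE 'PAGELATCH%'
    ORDER BY wait_time_ms DESC;
    ```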
