How many databases is too many

  • I have an SQL instance we call a "shared" instance. We use it to host a variety of projects around the corporation. Typically, we look at the sizing and connection requirements of a project to decide whether it should be hosted in the shared instance or have its own dedicated instance.

    The shared instance is growing quickly with a bunch of unrelated databases. Without running a series of tests, is there a good rule of thumb I can use to decide when an instance should be considered full? What would be nice is a rough number of databases, connections, or total disk usage that I could use as a discriminator. Or, what tests should be run to determine when the instance is full?

    Thanks

    JC

  • Drive space usage is obviously a limiting factor. I can't give a percentage-full rule of thumb because that depends on the size of the disks; the server is full whenever any one disk can only just support its current space requirements.

    As for load on the server, run perfmon for an extended period (I run it for 24 hours) to judge current load and how much overhead you have. Monitor the usual suspects such as CPU, I/O and memory; some of the SQL-side counters can also be pulled from a DMV, as sketched after the list below.

    Some starters for ten:

    Available MBytes
    Page Faults/sec
    Pages Input/sec
    Avg. Disk Read Queue Length
    Avg. Disk Write Queue Length
    Avg. Disk sec/Read
    Avg. Disk sec/Write
    Disk Reads/sec
    Disk Writes/sec
    % Processor Time
    Buffer cache hit ratio (avg)
    Page reads/sec (avg and max)
    Page writes/sec (avg and max)
    User Connections
    Lock Wait Time (ms)
    Total Server Memory
    Batch Requests/sec (avg and max)
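    If you want a quick look at the SQL Server-side counters without waiting on a perfmon log, most of them are also exposed through sys.dm_os_performance_counters. A rough sketch (the cumulative "/sec" counters need two samples taken a known interval apart to turn into a rate, and Buffer cache hit ratio needs its base counter to become a true percentage):

    -- SQL-side counters from the DMV; the OS counters (disk, CPU, available memory)
    -- still have to come from perfmon.
    SELECT  RTRIM(object_name)  AS object_name,
            RTRIM(counter_name) AS counter_name,
            cntr_value
    FROM    sys.dm_os_performance_counters
    WHERE   counter_name IN ('Buffer cache hit ratio',
                             'Page reads/sec',
                             'Page writes/sec',
                             'User Connections',
                             'Lock Wait Time (ms)',
                             'Total Server Memory (KB)',
                             'Batch Requests/sec');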

    Also worth using sys.dm_io_virtual_file_stats to see which are the busiest databases I/O-wise, because in a shared environment, it's going to be I/O that gets you first.


  • Thanks for the input. Performance testing would tell me where I am today, and continued monitoring would eventually tell me when I've hit the wall on a given metric, but I'm hoping there's some rule of thumb I can use to draw a line in the sand and say "this instance is full".

    Disk space is really not a problem, as more disks can be added. Since all the databases are relatively small (~1 GB or less), I'm not too concerned with disk usage. What I am concerned about is how many unrelated databases would be considered a high number; I assume there is a lot more overhead because there is no relationship between the databases. Also, since each project has its own connections, what would be considered a high number of connections?
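    For reference, I can at least see where the instance stands today with a couple of quick DMV queries (a sketch; the database_id column on sys.dm_exec_sessions assumes SQL Server 2012 or later):

    -- Current counts of user databases and user sessions on the instance.
    SELECT (SELECT COUNT(*) FROM sys.databases WHERE database_id > 4)            AS user_databases,
           (SELECT COUNT(*) FROM sys.dm_exec_sessions WHERE is_user_process = 1) AS user_sessions;

    -- Sessions per database, to see which projects are the busiest connection-wise.
    SELECT  DB_NAME(database_id) AS database_name,
            COUNT(*)             AS sessions
    FROM    sys.dm_exec_sessions
    WHERE   is_user_process = 1
    GROUP BY database_id
    ORDER BY sessions DESC;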

    I realize the best answer was given above (test, monitor, implement), but that's not an option at this time.

    Would, say, 400 unrelated databases be considered a lot? 500 different connections?

    Thanks

    JC

  • There really isn't a magic formula. Two databases can max out an instance, or you could put 200 databases on an instance and still have room for more. It just depends on how well memory, CPU, disk controllers and tempdb are being shared. Monitoring and baselining are the key.

    Plus, unless every database that you add is identical, there's no way to know whether adding Database X is going to be the one that breaks the server.
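    One cheap way to start that baselining is to snapshot the aggregate wait stats periodically and compare the deltas; a minimal sketch (the excluded wait types are just a few of the usual benign ones, not an exhaustive list):

    -- Top waits accumulated since the last restart (or since the stats were cleared).
    SELECT TOP (10)
            wait_type,
            wait_time_ms,
            signal_wait_time_ms,
            waiting_tasks_count
    FROM    sys.dm_os_wait_stats
    WHERE   wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP',
                              'SQLTRACE_BUFFER_FLUSH', 'CLR_AUTO_EVENT', 'WAITFOR')
    ORDER BY wait_time_ms DESC;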

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
