Analyzing Disk Capacity for SQL Server

By Arindam Banerjee

Setting up a new SQL box: Analyzing Disk storage capacity

The important areas to examine when assessing the capacity requirements for a SQL Server vary from scenario to scenario. This article will look into some of the areas that most commonly affect the server when it comes to capacity analysis.

While setting up a new server for SQL, we are specifically interested in what kind of load it will take and whether there is enough bandwidth to support that load. From this, we conclude that we need to look both at the server we are going to build and at the network that will support it. A server consists of hard disks, CPU(s) and memory, so those are the areas we will examine; we will call this internal capacity analysis. Network bandwidth and other network-related issues we will call external capacity analysis.

  • Internal Capacity Analysis
      • Analyze disk space requirement
      • Analyze CPU requirement
      • Analyze memory requirement
  • External Capacity Analysis
      • Analyze network requirement

Analyzing the disk space requirement

In this article, we will look into the disk space analysis.

The hard disk plays an important role in a server. If your server does not have adequate disk space, chances are that your tempdb may fill up. If the server also backs a web server (which is a common case), the entire web site may go down. So it is very important to understand how much disk space you really need. To determine the requirement, here are some factors that you can examine for yourself:

  • Disk space capacity
  • Disk throughput capacity
  • Locations and roles of database servers
  • Data Storage Time
  • Security of data
  • Growth of database

Disk space capacity

This requires an analysis of how much data SQL Server is going to accommodate. The data files are crucial, but they are not the only consideration: transaction log files, tempdb growth, index space, physical file fragmentation and so on must also be taken into account. If you already have an existing server, you can run DBCC SQLPERF (LOGSPACE) to see the log size (MB) and log space used (%) for each database and determine how much space is consumed by the transaction logs.
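As a starting point, you can run the command directly; the sketch below also captures its output into a temporary table so log growth can be tracked over time (the #LogSpace table and its column names are illustrative, not part of the original article):

```sql
-- Report log size (MB) and log space used (%) for every database
DBCC SQLPERF (LOGSPACE);

-- Optionally capture the output so it can be queried and kept over time
CREATE TABLE #LogSpace
(
    DatabaseName    sysname,
    LogSizeMB       float,
    LogSpaceUsedPct float,
    Status          int
);

INSERT INTO #LogSpace
EXEC ('DBCC SQLPERF (LOGSPACE)');

SELECT DatabaseName, LogSizeMB, LogSpaceUsedPct
FROM #LogSpace
ORDER BY LogSizeMB DESC;
```

Persisting these rows to a permanent table on a schedule gives you a history from which to estimate log growth.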

In short, you need to consider every object that consumes disk space and plan accordingly. Also, if you have a huge database that is growing rapidly, factor in the growth rate as well. I cover growth rates at the end of this article; please check the formulae there and determine which one suits your project best.

Disk throughput capacity

Use System Monitor (perfmon) to determine the disk throughput capacity with these counters from the PhysicalDisk object, which show the I/O activity for each disk:

  • Disk Read Bytes/sec
  • Disk Write Bytes/sec

These counters show whether the disk receives more reads or writes. If either counter shows very high and frequent spikes on the perfmon graph, you should consider a disk with faster I/O. After determining whether the disk handles more reads or writes, check Avg. Disk Read Queue Length or Avg. Disk Write Queue Length; for optimum performance, these counters should be low.
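If you prefer to measure I/O from inside SQL Server rather than through perfmon, the sys.dm_io_virtual_file_stats DMV (available from SQL Server 2005 onwards) exposes cumulative read/write volumes and I/O stall times per database file; a minimal sketch:

```sql
-- Cumulative read/write volume and I/O stalls per database file,
-- busiest files first
SELECT  DB_NAME(vfs.database_id)  AS database_name,
        mf.physical_name,
        vfs.num_of_bytes_read,
        vfs.num_of_bytes_written,
        vfs.io_stall_read_ms,
        vfs.io_stall_write_ms
FROM    sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN    sys.master_files AS mf
        ON  mf.database_id = vfs.database_id
        AND mf.file_id     = vfs.file_id
ORDER BY vfs.num_of_bytes_read + vfs.num_of_bytes_written DESC;
```

Note that these figures are cumulative since the instance started, so compare two snapshots taken some time apart to get a rate.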

Locations and roles of database servers

If you are working on a distributed system, chances are that the servers will not perform the same role in different locations. The load on your central office database may not be the same as on the databases in your remote offices. Please note that the space calculation depends on the role the server plays. For example, if you are planning a central server, it should ideally be fast, contain adequate space and have enough bandwidth allocated to serve the remote servers. On the other hand, if you are planning a backup server, it might not need to be as fast, should have enough space to store historical data and might get by with limited bandwidth. Remember, the capacity planning of the servers greatly depends on the business need and the service level agreement set for the project.

Data storage time

This is, simply, how long your company wants to store historical data. For example, insurance companies or banks may need to preserve data for a very long time, whereas an online store might not be interested in keeping backups for long. Data storage time needs to be identified before estimating a storage solution.

Security of Data

This is a very important aspect to consider. If your company maintains secure or encrypted data, you must ensure that there is ample disk space to hold the encrypted form. You may encrypt the columns of a database using any suitable algorithm. Every time you fetch the data, SQL Server decrypts it before returning it to you. This means additional overhead on each data item, which in turn means extra disk cost as well as significant CPU usage to encrypt and decrypt the data.
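To see the storage overhead for yourself, you can compare the stored size of a value before and after encryption; a minimal sketch using ENCRYPTBYPASSPHRASE (the passphrase and sample value are purely illustrative):

```sql
-- Compare the stored size of a value before and after encryption.
-- The encrypted form is larger than the plaintext because it carries
-- an initialization vector, padding and key metadata alongside the
-- ciphertext.
SELECT  DATALENGTH(N'Sensitive value') AS plain_bytes,
        DATALENGTH(
            ENCRYPTBYPASSPHRASE(N'Str0ngPassphrase!',
                                N'Sensitive value')
        )                              AS encrypted_bytes;
```

Multiply that per-value overhead by your row count when sizing disks for encrypted columns.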

Growth of database

Database growth can be of three types:

  1. linear growth (grows at a constant amount over time)
  2. compound growth (grows at a constant rate for a specific period of time)
  3. geometric growth (grows periodically by some increment, which is itself growing at a constant rate)

Below are the formulae for the different types of growth:

Linear: Current disk space + (growth per period * No of periods)

Compound: Current disk space * (1 + Growth %) ^ No of periods

Geometric: Current disk space + [Initial increment * {1 - Growth rate ^ (No of periods + 1)}] / (1 - Growth rate)

To illustrate, let us take an example. Say you have a 100 GB database. If it grows linearly by 10 GB (10% of its initial size) per year, then after 2 years it will be 100 + (10 * 2) = 120 GB (linear growth).

If instead the database grows at a compound rate of 5% per month for 2 years, then it will be

100 * (1 + 0.05) ^ 24 = 322.5 GB (compound growth).

Take another case: you already have a 100 GB database, and its growth starts at 10 GB in the first month, with the increment itself increasing by 5% each month. After 24 months the database will be 100 + [10 * {1 - 1.05 ^ (24 + 1)}] / (1 - 1.05) = 100 + 477 = 577 GB (geometric growth).
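The three worked examples can be checked with a single query; float literals (1.05e0) are used so that POWER does not truncate the result to the precision of a decimal argument:

```sql
-- Verify the three growth projections from the examples above
SELECT  100 + (10 * 2)                                     AS linear_gb,    -- 120
        100 * POWER(1.05e0, 24)                            AS compound_gb,  -- ~322.5
        100 + 10 * (1 - POWER(1.05e0, 25)) / (1 - 1.05e0)  AS geometric_gb; -- ~577.3
```

Substituting your own current size, increment and growth rate into the same expressions gives a quick projection for any planning horizon.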

Conclusion

The points above are some of the best practices you should consider when setting up new storage space; however, you might face an entirely new scenario where you have to derive your own best practice! Nevertheless, be prepared to investigate the business requirements and ask the business plenty of questions before you make any decision on storage capacity planning.
