• Hi Yelena,

    That is a very good approach to proactively identifying the risk areas, but if the component in question is a disk, we do not need much testing before an upgrade. If I were buying CPU or memory, I would certainly ask the vendor for a lab test machine, bind it to the cluster, fail over to that machine for a couple of business days, and check the Profiler logs. I would not let anyone do ad hoc testing against a production box's disk, though.

    As for the backup plan, it depends on how much data I accumulate in a full business day. Based on that volume and the maintenance plan, I decide how much extra space I might need. In a normal scenario, if I keep the default 15-minute continuous backup schedule, I do not need much space to hold the data because log shipping is set up.
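
    To make that sizing concrete, here is a rough back-of-the-envelope sketch in Python. All the figures (daily log volume, local retention, buffer) are purely hypothetical placeholders, not numbers from any real environment; plug in your own values.

        # Rough estimate of local disk space needed to stage transaction log
        # backups before log shipping copies them off the box.
        # All figures below are hypothetical assumptions for illustration.

        daily_log_mb        = 20 * 1024   # log generated in a full business day (assumed 20 GB)
        backup_interval_min = 15          # log backup every 15 minutes
        local_retention_hrs = 4           # how long backups sit locally before cleanup (assumed)
        safety_buffer       = 1.5         # 50% headroom for unusually busy days (assumed)

        # Average size of one 15-minute log backup, assuming activity is spread
        # over a 24-hour day (a busy 8-hour window would make peaks ~3x larger).
        per_backup_mb = daily_log_mb / (24 * 60 / backup_interval_min)

        backups_retained = local_retention_hrs * 60 / backup_interval_min
        space_needed_mb  = per_backup_mb * backups_retained * safety_buffer

        print(f"~{per_backup_mb:.0f} MB per log backup, "
              f"~{space_needed_mb / 1024:.1f} GB of local staging space")

    With those assumed numbers it comes out to only a few GB of staging space, which is why the 15-minute schedule plus log shipping does not demand much extra disk.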

    We also have to remember that deciding on a backup plan requires checking the network bandwidth and the business requirements.

    This article talks about general disk capacity management, and some of the formulae given help identify database growth. It is up to individual requirements to decide how much buffer to keep for safety.
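
    As one illustration of the kind of projection such formulae produce, here is a generic compound-growth estimate. This is only a sketch with assumed numbers (current size, growth rate, horizon, buffer), not necessarily the exact formula from the article.

        # Generic compound-growth projection for database size.
        # Every figure below is an assumption for illustration only.

        current_size_gb = 500     # current database size (assumed)
        monthly_growth  = 0.05    # 5% growth per month (assumed)
        planning_months = 12      # capacity planning horizon (assumed)
        safety_buffer   = 1.25    # 25% extra headroom, per individual requirement

        projected_gb = current_size_gb * (1 + monthly_growth) ** planning_months
        disk_to_provision_gb = projected_gb * safety_buffer

        print(f"Projected size in {planning_months} months: ~{projected_gb:.0f} GB")
        print(f"Disk to provision with buffer: ~{disk_to_provision_gb:.0f} GB")

    The safety buffer is exactly the "how much buffer to keep" decision mentioned above; each shop has to pick it based on its own risk tolerance.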

    Nevertheless, that is a very good thought from you. It should also be taken care of when planning disk capacity. Thanks.

    ~Arindam.