• It is not easy (or perhaps even possible) to mathematically or statistically guarantee the numbers you speak of. That level of redundancy certainly gives you a good chance of meeting the SLA, but it is no guarantee that you will. To calculate the probability properly, you would have to account for the MTBF of every individual component in your redundant architecture: disk drives, HBA cards/iSCSI interfaces, memory, CPUs, motherboards, network switches and what have you, which is very difficult to do for the complex architecture that is any corporate computer/network system today. Patching, system upgrades and application software upgrades also need to be factored into the picture (as if the hardware weren't enough :-))
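
    If you did want a rough feel for the arithmetic, the usual shortcut is steady-state availability per component, A = MTBF / (MTBF + MTTR), with availabilities multiplied for components in series and unavailabilities multiplied for redundant (parallel) components. The sketch below (Python, with entirely made-up MTBF/MTTR figures, illustrative function names and a drastically simplified topology) only shows the idea; it is not a model of any real system.

        # Steady-state availability: A = MTBF / (MTBF + MTTR).
        # Series: every component must be up, so multiply availabilities.
        # Parallel (redundant): the group fails only if all members fail.

        def availability(mtbf_hours, mttr_hours):
            return mtbf_hours / (mtbf_hours + mttr_hours)

        def series(*avails):
            result = 1.0
            for a in avails:
                result *= a
            return result

        def parallel(*avails):
            unavailability = 1.0
            for a in avails:
                unavailability *= (1.0 - a)
            return 1.0 - unavailability

        # Hypothetical figures (hours): mirrored disks behind one HBA and one switch.
        disk   = availability(500_000, 8)
        hba    = availability(250_000, 4)
        switch = availability(200_000, 2)

        system = series(parallel(disk, disk), hba, switch)
        print(f"Estimated availability: {system:.6%}")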

    Instead, the SLA is what you strive to achieve by being proactive: managing the redundant hardware, architecture and software so that potential failures are headed off before they can affect your uptime. Ultimately, you can only measure what you have achieved by the actual results, i.e. your uptime/downtime over a given period. 99.99 percent uptime means downtime of no more than about 4 minutes 19 seconds in a 30-day month, or roughly 52.6 minutes over a year. Several SLA/uptime calculators are available on the internet, or the arithmetic is simple enough to do yourself (see the sketch below).

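    A minimal sketch of that calculation, assuming a 30-day month and a 365-day year (the helper name is just for illustration):

        # Allowed downtime for a given SLA percentage over a period.
        def allowed_downtime_minutes(sla_percent, period_minutes):
            return period_minutes * (1.0 - sla_percent / 100.0)

        MONTH = 30 * 24 * 60    # 43,200 minutes
        YEAR = 365 * 24 * 60    # 525,600 minutes

        for sla in (99.9, 99.95, 99.99, 99.999):
            print(f"{sla}%: {allowed_downtime_minutes(sla, MONTH):.2f} min/month, "
                  f"{allowed_downtime_minutes(sla, YEAR):.2f} min/year")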