When I started working in operations departments, we were always concerned about downtime. In a simpler world, often inside an organization, this meant whether a particular machine was working or not. We did have networking issues at times, but often we were measured by how often a server was unreachable from clients.
These days, with many machines often backing a single application, "downtime" can be a matter of debate, but we usually have particular vantage points from which we can test whether an application is down. Some services, like Slack, report status for multiple parts of the application, which I like. Ultimately, though, any of these can use a simple status (up/down) or a more complex one (up, down, degraded, maintenance, etc.).
I was listening to a DevOps talk recently from an operations group about how they prioritize and triage work. There are times when the amount of work during an incident overwhelms resources, so they need to decide what to work on first, or who needs to work on what.
This group used the concept of impact, which was essentially the product of two values: downtime and blast radius. Blast radius was essentially the number of people affected, though sometimes this was weighted; impact on finance or sales staff might count for more than impact on the average employee. They would run the calculation and decide where to focus their time.
If one part of a website with, say, four parts was down, the impact could be lower than if the database were down. And if the database is down but only 10 people are affected, that could be less important than a network issue affecting 100 people.
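The impact calculation described above might be sketched like this. The role weights and incident numbers here are hypothetical examples I made up for illustration, not values from the talk:

```python
# Sketch of impact = downtime x blast radius, where blast radius
# is a weighted count of affected people. Weights are hypothetical.
ROLE_WEIGHTS = {"finance": 3.0, "sales": 2.0, "default": 1.0}

def blast_radius(affected_by_role):
    """Weighted number of people affected, keyed by role."""
    return sum(
        count * ROLE_WEIGHTS.get(role, ROLE_WEIGHTS["default"])
        for role, count in affected_by_role.items()
    )

def impact(downtime_minutes, affected_by_role):
    """Impact score: downtime multiplied by weighted blast radius."""
    return downtime_minutes * blast_radius(affected_by_role)

# Two competing incidents, both 30 minutes old: a database outage
# affecting 10 people vs. a network issue affecting 100.
db_outage = impact(30, {"default": 10})    # 30 * 10  = 300
net_issue = impact(30, {"default": 100})   # 30 * 100 = 3000
# The network issue scores higher, so it gets worked first.
```

With weighting, the ranking can flip: 10 affected finance people (weight 3.0) score the same as 30 average employees, which is the point of weighting the blast radius at all.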
I think I've often intuited the number of people affected, but I've rarely thought about it this directly. To me, this is a good calculation to have handy, along with an awareness of how heavily the various systems are used. While most of us aren't supporting something as widely used as Slack, we often support both big and small systems, and having a way to rank their relative importance is handy in a crisis.