When talking about DevOps, the goal is to produce better software over time: both better quality and a smoother process of getting bits to your clients. There are a number of metrics typically used to measure how well a software team is performing, and one of them is change fail percentage. This is the percentage of deployments that cause a failure in production, meaning a hotfix or rollback is needed. Essentially, we need to fail forward or roll back to get things working.
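If you want to see the arithmetic, here's a rough sketch (the deployment records and the "failed" flag are invented; real teams would pull this from their CI/CD or ticketing system) of how change fail percentage falls out of a deployment log:

```python
# A minimal sketch of the change fail percentage calculation.
# The deployment records below are hypothetical.
deployments = [
    {"id": 101, "failed": False},  # deployed cleanly
    {"id": 102, "failed": True},   # needed a hotfix
    {"id": 103, "failed": False},
    {"id": 104, "failed": True},   # rolled back
]

failed = sum(1 for d in deployments if d["failed"])
change_fail_percentage = 100.0 * failed / len(deployments)
print(f"Change fail percentage: {change_fail_percentage:.0f}%")  # 50%
```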
For most people, a failed deployment means downtime: I've caused a service (or a page, or an app) to be down because of a code change I made. This includes the database, as a schema change could cause the application to fail. Maybe we've renamed something (always a bad idea) and the app hasn't been updated. Maybe we added a new column to a table and some other code has an INSERT statement without a column list that won't run. Any number of database changes might require a hotfix or rollback and could be considered a failure.
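To make that last failure mode concrete, here's a small sketch using an in-memory SQLite database (the table and columns are made up): the positional INSERT that worked before the deployment stops running once a column is added.

```python
import sqlite3

# Hypothetical table to illustrate the "insert without a column list" failure.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Customers (CustomerID INTEGER, Name TEXT)")

# This positional insert works today because it supplies one value per column.
conn.execute("INSERT INTO Customers VALUES (1, 'Ann')")

# A deployment adds a new column...
conn.execute("ALTER TABLE Customers ADD COLUMN Email TEXT")

# ...and the same statement now fails: 2 values for a 3-column table.
try:
    conn.execute("INSERT INTO Customers VALUES (2, 'Bob')")
except sqlite3.OperationalError as e:
    print("Deployment broke this code:", e)
```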
However, some people use an expanded definition. If a service is degraded (slower), is that a failure? Some people think so. If we change code in a database (or its indexes) and performance slows down, is that a failed deployment? Customers would think so. Developers might not like this idea, at least not without some sort of SLA that allows for some things to be a little slower. After all, slow is still working, right?
What if I don't notice a problem? Imagine I add a new table or column, and the app starts accepting data and storing it. What if we are supposed to use this data downstream, and we don't notice that a process is aggregating it incorrectly until many days later? Perhaps we've performed some manipulation or calculation on our data and the result isn't what we wanted. It might not be incorrect, but maybe it's ignoring NULLs when we want NULLs treated as 0s.
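Here's a quick sketch of that NULL scenario, again with made-up data in SQLite. Both queries run without error; they just answer different questions, which is exactly why nobody notices for days.

```python
import sqlite3

# Hypothetical sales data with some NULL amounts.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Sales (Amount REAL)")
conn.executemany("INSERT INTO Sales VALUES (?)", [(100,), (None,), (50,), (None,)])

# AVG() silently skips NULL rows, which may not be what the business wanted.
ignoring_nulls = conn.execute("SELECT AVG(Amount) FROM Sales").fetchone()[0]
nulls_as_zero = conn.execute(
    "SELECT AVG(COALESCE(Amount, 0)) FROM Sales"
).fetchone()[0]

print(ignoring_nulls)  # 75.0  -- NULLs ignored, no error raised
print(nulls_as_zero)   # 37.5  -- NULLs treated as 0
```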
Is that a failure? If I deploy today and Bob or Sue notices next week that the data isn't correct, that's a failure. I don't know that I'd count downtime from today until next week, but once Bob or Sue files a ticket, the clock starts on calculating the MTTR (mean time to recovery).
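If you wanted to put numbers on that, a rough sketch of the bookkeeping (invented timestamps) might look like this, with the clock starting at the ticket rather than at the deployment:

```python
from datetime import datetime

# Hypothetical incidents: the clock starts when the ticket is filed,
# not when the faulty deployment actually happened.
incidents = [
    # (ticket filed, fix deployed)
    (datetime(2023, 5, 8, 9, 0), datetime(2023, 5, 8, 13, 0)),    # 4 hours
    (datetime(2023, 5, 15, 10, 0), datetime(2023, 5, 16, 10, 0)), # 24 hours
]

recovery_hours = [
    (fixed - filed).total_seconds() / 3600 for filed, fixed in incidents
]
mttr = sum(recovery_hours) / len(recovery_hours)
print(f"MTTR: {mttr:.1f} hours")  # 14.0 hours
```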
I don't often see database deployments failing from the "will it compile on the production server" standpoint. Most code gets tested on at least one other system, and with any sort of process, we catch those simple errors. More often than not, we find performance slowdowns or misunderstood requirements/specifications. In those cases, some of you might consider this a failure and some may not. I suppose it depends on whether these issues get triaged as important enough to fix.
While I might have a wide definition of deployment failures for most coding problems, I don't for a performance slowdown. Far too few people really pay attention to code performance and are happy to let bad code live in their production systems for years.