If your organization is like mine, with TB-scale tables, dozens of developers writing SQL, and hundreds of end users, then adding a new index should definitely be promoted through the formal change control process. An index build can run for hours, hog resources, and block user activity. Worse, a new unique index based on invalid assumptions about the key columns can break the application to the point of a dead stop. That said, when indexes are added post-production it's typically in response to a performance issue, so it isn't like a new feature that folks mull over for weeks while it sits on a back burner waiting for multiple approvals.
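On that "invalid assumptions" point: before proposing a unique index, it's cheap to verify that the candidate key really is unique in production-like data. A minimal T-SQL sketch, using a hypothetical table `dbo.Orders` and candidate key column `OrderNumber` (both names are assumptions for illustration):

```sql
-- Any rows returned here mean the proposed unique index would either
-- fail to build or, worse, that the uniqueness assumption baked into
-- the application is wrong.
SELECT OrderNumber, COUNT(*) AS DupCount
FROM dbo.Orders
GROUP BY OrderNumber
HAVING COUNT(*) > 1;
```

If this returns zero rows in UAT against production-scale data, that result itself becomes part of the evidence you attach to the change request.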
When I create a new index, it's built first in development to confirm it's usable and doesn't break anything. Next, it's deployed to UAT, which has data volumes similar to production, where I run additional tests to document that metrics like query duration, I/O, and CPU have improved; those same tests also set expectations for how much time and how many resources the index build itself will require during the production deployment. The results of that evidence-based testing in development and UAT then become the basis for getting sign-off from management for the production deployment. Any process where you're creating indexes and deploying them to production in under 24 hours is probably too risky, even if the fix is considered critical. There needs to be a methodology in place.
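The before/after metrics described above can be captured with T-SQL session statistics. A sketch of the UAT measurement step, again using the hypothetical `dbo.Orders` table and a made-up workload query; note that `ONLINE = ON` requires an edition that supports online index operations:

```sql
-- Turn on per-query metrics so logical reads and CPU/elapsed time
-- appear in the Messages output; run once before and once after the
-- index exists, and compare.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Hypothetical workload query being tuned.
SELECT OrderNumber, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42;

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

-- The proposed index, built online to limit blocking during the
-- production deployment. Timing this build in UAT gives the duration
-- estimate for the change request.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderNumber, OrderDate, TotalDue)
    WITH (ONLINE = ON);
```

The logical-read and CPU deltas from the two runs, plus the measured build time, are exactly the numbers management needs to see on the sign-off.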
"The universe is complicated and for the most part beyond your control, but your life is only as complicated as you choose it to be."