For me it's a case of "with great power comes great responsibility".
I have been having a similar debate on a related subject. It boils down to "how can we build something to stop people doing stupid stuff?" If you try to build an idiot-proof system then, apart from the problem of the universe instantly upgrading the idiot, you will end up building an immense behemoth of code to guard against every eventuality. The sheer size of what you build will introduce bugs of its own, and those may be worse than the thing you were trying to prevent. It also means that simple changes which should take 10 minutes will now take six months to deploy.
At some point you have to list your fears, assign risks and likelihoods, and take steps appropriate to those risks and likelihoods. Sometimes those steps are not code-based but business-process-based. For example, we used to worry about all the things that can go wrong in a large-scale deployment, and the engineering that went into the mechanisms to deploy and roll stuff back was huge. Now we simply don't do large-scale deployments; we design and engineer our systems to be deployed in relatively small incremental steps.
Then there is "trust but verify". I look around my office and I can see people who are trusted with six- and seven-figure mortgages, with controlling two tonnes of metal travelling at 70-ish miles an hour on a crowded road, and with bringing up children, sometimes children with disabilities. Yet those same adults work under an office system that infantilises them.
Yes, the wrong restore can be disastrous, but you would be better off making sure people know and understand when to do a point-in-time recovery, when not to, and what to check before doing it. If only the priesthood can do stuff then you end up with single points of failure and a genuflection queue.
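That "what to check before doing it" knowledge can even be written down as a pre-flight checklist rather than locked behind a priesthood. A minimal sketch of the idea, with hypothetical names (`pitr_preflight` and its parameters are inventions for illustration, not any vendor's API), assuming you restore one base backup and roll logs forward to a target time:

```python
from datetime import datetime

def pitr_preflight(target_time: datetime,
                   base_backup_time: datetime,
                   incident_time: datetime,
                   log_chain_unbroken: bool) -> list:
    """Return a list of blocking problems; an empty list means the
    planned point-in-time recovery at least passes the basic sanity
    checks. Purely illustrative of the checklist you would teach."""
    problems = []
    # You want to stop *before* the mistake you are recovering from.
    if target_time >= incident_time:
        problems.append("target time is not before the incident")
    # The base backup you restore from must predate the target,
    # or there is nothing to roll forward to.
    if target_time <= base_backup_time:
        problems.append("chosen base backup was taken after the target time")
    # A gap in the log/WAL archive means recovery stops short.
    if not log_chain_unbroken:
        problems.append("log chain has gaps; recovery may not reach target")
    return problems

# Example: restoring to 14:00, from a 02:00 backup, ahead of a 14:30 incident.
checks = pitr_preflight(datetime(2024, 5, 1, 14, 0),
                        datetime(2024, 5, 1, 2, 0),
                        datetime(2024, 5, 1, 14, 30),
                        log_chain_unbroken=True)
print(checks)  # an empty list: nothing blocks this restore
```

The point is not the code itself but that the checks are explicit, reviewable, and teachable, so any trained operator can run them instead of queueing for the one anointed DBA.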