It’s not “scripting” that makes mistakes easier... the real problem is the urgency of the issue. The same thing can happen when deploying code of any type. You covered the real problem very well: it all boils down to following a correct process, no matter the size of the code or the scope of the issue. The trouble is that way too many people object to the innate “slowness” of any process.
And, as I’ve told many of my peers, “it’s the slowness of the process that seriously helps prevent mistakes.” It gives people time to test properly (perhaps realizing during testing that there’s a better way), and time to consider what a back-out needs to look like and other “OK... what’s Plan B?” contingencies. And that doesn’t apply just to complex stuff. As implied by the title of this good article, the “small stuff” that’s supposedly “super simple” and “easy to implement” is where I find most mistakes occur, because someone wants something right now and “this simple code should do the trick.”
Everyone stresses things like “productivity” and “efficiency” and “throughput” and “responsiveness”, etc., etc., ad infinitum. How much productivity is there when your system goes down for 9 days, or even just 10 minutes? How many customers do you have to piss off before you “get it”? How many times do people have to go through this corner of hell before they finally realize that “if you want something real bad, that’s the way you’ll usually get it”?
When will people finally realize that, quite frequently (usually, IMHO), the best way to do something faster is to slow down and do it right the first time?