Microsoft has spent significant resources ensuring its software can be automated, audited, and configured securely. After the SQL Slammer worm, the company made a concerted effort to write more secure code. Secure by design and secure by default became the goal, and in the years since they have continued working to help us better secure our systems. These days they even push a Zero Trust methodology.
Microsoft is notoriously strict about security internally, blocking network access to potential threats, and they encourage customers to do the same. They have SAWs and PAWs, auditing recommendations, and advanced threat protection. They have an incredible security operations center, which I'd like to think they use for their own internal resources.
And yet, we have Microsoft support misconfiguring a database and exposing customer PII. How does this happen when security is a big part of Microsoft's business, and something many of us rely on them doing well? This was human error: an issue with network security rules, likely caused by the complexity of those rules. Managing a large set of rules and understanding their combined end result is difficult for a human.
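To see why the end result of a rule set is hard to eyeball, here is a minimal sketch in Python of a priority-based, first-match model of network rules (hypothetical rules and a deliberately simplified model, not any real Azure or firewall API). A broad "temporary" rule with a higher priority can silently defeat an explicit deny written later:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    priority: int           # lower number wins
    action: str             # "allow" or "deny"
    source: str             # "internet", "corp", or "any"
    port: Optional[int]     # None matches any port

def effective_action(rules, source, port):
    """Return the action of the first matching rule, evaluated by priority."""
    for rule in sorted(rules, key=lambda r: r.priority):
        src_ok = rule.source in (source, "any")
        port_ok = rule.port in (port, None)
        if src_ok and port_ok:
            return rule.action
    return "deny"  # implicit default deny when nothing matches

rules = [
    Rule(400, "deny", "internet", 1433),   # intended: block SQL from the internet
    Rule(300, "allow", "any", None),       # broad "temporary" rule added earlier
    Rule(500, "deny", "any", None),        # catch-all deny
]

# The broad priority-300 rule wins over the explicit deny at 400:
print(effective_action(rules, "internet", 1433))  # prints "allow" — the exposure
```

Three rules are already enough to produce a non-obvious answer; real environments have hundreds, which is exactly why automated analysis beats human review here.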
Still, it's disappointing and daunting. When I talk with people about DevOps and automating our best practices, I find many embracing the idea of using software to set up systems consistently and securely, the same way in every environment. Plenty of people prefer having the computer make the changes for them, removing the chance of a manual mistake.
However, if all of the staff don't buy into the new process, it opens up potential places for problems. One human can cause problems by circumventing the system. In this case, Microsoft admits they have solutions to prevent and detect this, but they were not enabled. Again, that was likely a human mistake, though their post doesn't make it clear.
Perhaps the most disappointing thing about this data breach is the lack of details. I'd like to know which rules failed and, maybe more importantly, what systems are in place to protect against this and how they are configured. This was a great opportunity for Microsoft to share some knowledge and educate their customers, but they didn't take it. Specifics on the failure would help many of us ensure we don't fall victim to the same issue.
Security is hard. It's a constant, ongoing battle to follow best practices while learning and updating our knowledge all the time. If Microsoft can't do it, can we hope to? Maybe more importantly, if Microsoft, with all their experts and specialists, doesn't help us understand how to secure systems well, can we hope to do better?