
T-SQL Tuesday/Wednesday #21: A Crappy Workaround


A long time ago (in IT terms), in a galaxy far, far away, I was helping an organization with a pretty nasty worm outbreak. The worm in question took advantage of a Windows vulnerability, and it acted so fast that antivirus couldn't kill it fast enough. It was exploiting a communications mechanism within Windows, writing a few files to disk, and then executing those files. And it did this really, really quickly. The sad thing was that the worm would have been ineffective if the Windows machines had been kept patched. But they weren't, due to bad patch management policies. And this particular client wasn't interested in remediating the root cause, because that would mean core business would be affected (servers would have to be rebooted). See, the worm was a nuisance; its more malicious capabilities were blocked by other defenses in the environment, but they couldn't keep it from spreading.

When I started to help, I found out we couldn't patch. Antivirus couldn't keep up. And just as soon as we'd helped AV manually clean a server and logged off, that server was re-infected. What a nightmare. The domain controllers were especially problematic because antivirus wasn't installed on them, nor was the client's AV product supported by Microsoft on domain controllers. And while the DCs were fault-tolerant because enough of them were deployed, management didn't want to patch just part of the environment. Management wanted it fixed, patching excluded, which meant the real vulnerability couldn't be addressed.

This was when we resorted to a really bad hack (the second option of this T-SQL Wednesday challenge), but it was the only option we had left. We read the write-ups on the worm and also watched the order in which it dropped its files onto a server when it infected it. Then we killed the processes it spawned, deleted the files in FIFO order, created 0-byte files with the exact same names in the exact same locations, and altered the permissions to deny everything to everybody. Slowly, one server at a time, we reclaimed our territory. It took us almost 3 days of fighting the outbreak, but finally our hack kept it at bay. In case you're wondering, a lot of worms today pick random file names to stop this sort of solution from working.
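If you're curious what that trick looks like in script form, here's a minimal sketch of the idea in Python. Everything in it is hypothetical: the process and file names are made up (the real ones belonged to the worm), and the permissions step just shells out to the standard Windows taskkill and icacls utilities.

```python
import os
import subprocess

# Hypothetical names for illustration only; the real worm had its own
# dropper files and process names.
WORM_PROCESSES = ["wormdropper.exe"]
WORM_FILES = [
    r"C:\Windows\System32\wormdropper.exe",
    r"C:\Windows\System32\wormpayload.dll",
]

# Kill any running copies of the worm first (ignore failures if a
# process isn't currently running).
for image in WORM_PROCESSES:
    subprocess.run(["taskkill", "/F", "/IM", image], check=False)

for path in WORM_FILES:
    # Delete the worm's copy if it is on disk.
    if os.path.exists(path):
        os.remove(path)

    # Re-create the same name as an empty, zero-byte placeholder so the
    # worm can't drop its own file there.
    open(path, "w").close()

    # Deny all access to everyone on the placeholder, using the built-in
    # icacls utility that ships with modern Windows.
    subprocess.run(["icacls", path, "/deny", "Everyone:(F)"], check=True)
```

The point isn't the script itself, it's the order of operations: kill, delete, squat on the file name, then lock it down so the worm can't write there again.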

So what was the mistake, and what did I learn from it? It looks like the hack worked, right? Absolutely, but the real mistake was the failure to communicate why the lack of patching was a problem. The technicians were talking in the jargon and lingo technicians use, which meant the business had absolutely no clue how important patching was. Nowadays patching isn't an issue in most organizations, but the root problem still is: the ability to communicate clearly and understandably with the business.

We can have the greatest technical solution in the world to our organization's biggest pain point, but if we can't communicate it clearly to our audience, we face the very likely scenario of being ignored. That was a hard lesson we all took away that week. It forced us to face the reality that as good as we might be on the technical side, that was effectively useless for getting the business to sign off on proactive solutions if we couldn't get our message across. All of us involved spent quite a bit of time talking about how to do this better. In reality, it boiled down to spending more time with the business side, which is an area a lot of technicians neglect to their detriment.

 
