Some things I read today reminded me of this: a security control inconsistently applied is not a control. The whole point of a security control is to provide a check that catches malicious behavior. Some controls are preventative; others are detective or compensating. An example of a preventative control is requiring all employees to badge in before entering a building, to ensure they are valid personnel. The way this control can be inconsistently applied is if an employee allows someone to "tailgate," that is, enter the building without badging in. And here's how the whole control can break down.
The first person to the door has a valid badge and swipes it to get in. The second person is let in by the first and never swipes a badge. Now, in a lot of organizations, if the second person just looks to have a valid badge, it's considered okay to let them in. The control the employee has been given is "look for a badge, and if they don't have one, send them to the security guard." But what if they do have one and the badge is forged? The instructions the employee has been given are that if the person has a valid badge, let that person in. And it could be that the person has other equipment or a uniform that makes him or her look legitimate, such as this teen who impersonated a Chicago police officer. In that particular case, some things were missing (badge, gun, etc.) that should have been noticed, but because the kid was in a proper uniform, knew enough of the jargon to sound authentic, and so on, he was accepted until near the end of the shift, when one suspicious person caught on to the ruse. This is a perfect example of social engineering, because the control was not consistently applied (no one asked, "Where's your badge?").

A more likely scenario for this discussion is someone arriving toward the end of the work day dressed as a member of the cleaning crew. The employee sees the uniform and lets the person in, even though cleaning crew members were supposed to be issued badges. The lack of a badge was never checked. You have an inconsistently applied control, meaning you have no control.
So what about IT security? If you have a control in place and it isn't consistently followed, how can you be sure it will be followed when someone intends to do something malicious? You can't. Let's go with a classic example. A user (we'll say a business analyst, since DBAs are sometimes accused of picking on developers) is considered trustworthy and needs to troubleshoot a reporting problem in production, and the DBA is in a hurry. Maybe the user only needs to hit 25 out of 500 tables in a particular database, and none of those 25 contain any sensitive data. The procedure says to grant access only to the tables needed. But the DBA has other fires to put out, so he or she takes a shortcut and makes the user a member of the db_datareader role. That means the user has access not only to the 25 tables needed to troubleshoot the problem, but to all 500, and some of those 500 tables do contain sensitive data. But the user is trustworthy, so it's not an issue, right? After all, the user will figure out the problem and then the rights will be revoked. "No harm, no foul."
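To make the difference concrete, here's a T-SQL sketch of the procedure versus the shortcut. The database, table, and user names are made up for illustration:

```sql
-- What the procedure calls for: grant SELECT only on the tables needed.
-- (ReportDB, dbo.SalesSummary, dbo.RegionTotals, and BizAnalyst are
-- hypothetical names.)
USE ReportDB;
GO
GRANT SELECT ON dbo.SalesSummary TO BizAnalyst;
GRANT SELECT ON dbo.RegionTotals TO BizAnalyst;
-- ...and so on for each of the 25 tables actually needed.

-- The shortcut the hurried DBA takes instead: one statement, but it
-- grants SELECT on every table in the database, all 500 of them.
ALTER ROLE db_datareader ADD MEMBER BizAnalyst;
```

(On SQL Server versions before 2012, the equivalent of that last statement is `EXEC sp_addrolemember 'db_datareader', 'BizAnalyst';`.) The per-table grants take longer to write, which is exactly why the shortcut is tempting.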
Fast forward a bit. On a break, the user browses to what was a legitimate site, but the site has been compromised. Malware exploiting a previously unknown (or known but not yet patched) vulnerability is present, and the system is compromised. The user might be trustworthy, but the person who planted the malware and now has the ability to control the computer is not. And remember, the user is logged in with his or her credentials, and those credentials have db_datareader rights in that database. The attacker may notice this by running netstat -an | findstr /i :1433 on the system and spotting the connection to the default instance of a SQL Server. The attacker then drops in a script to interrogate the server, finds the database where the user has db_datareader, and realizes he or she has access to some juicy info. At that point, the hurried action of the DBA has cost the company.
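Once the attacker spots the port 1433 connection, a few queries run under the stolen credentials are enough to map out what those credentials can reach. A sketch (the database name is hypothetical):

```sql
-- Which databases can these credentials even touch?
SELECT name FROM sys.databases WHERE HAS_DBACCESS(name) = 1;

-- Inside a database, what can the current user do?
-- fn_my_permissions lists effective permissions at the database level.
USE ReportDB;  -- hypothetical database name
GO
SELECT permission_name FROM fn_my_permissions(NULL, 'DATABASE');

-- db_datareader means SELECT on everything, so simply enumerating
-- the tables is enough to find the juicy ones.
SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES;
```

None of this looks like an attack in the logs; it's the same kind of query any legitimate reporting user might run, which is part of what makes the over-grant so dangerous.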
While I realize the example I gave is a drawn-out one and not likely to happen very often in practice, the problem is that when it does happen, how do you put the pieces together to realize what was compromised? This is just a single example, but it reveals a bigger problem. Let's assume the malware is discovered quickly and the workstation is sent down to be re-imaged. No one does any forensics, because malware is an everyday occurrence and the standard procedure is not to fool with it but to get the box back up in a timely manner. The problem is that no one other than the DBA knew what rights the user had. No one suspected the user had db_datareader rights, because that would have violated a security control. Of course, the DBA is not privy to the day-to-day workings of the workstation support team (nor does he or she care to be), so the DBA has no idea the user's workstation was compromised. See the problem?
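This is also an argument for periodic permission audits as a detective control: a query against the role catalog views would surface the violation even if the DBA never mentioned it. A sketch:

```sql
-- List every member of db_datareader in the current database.
-- Run as a periodic report, this surfaces grants that violate the
-- "only the tables needed" procedure, regardless of who made them.
SELECT dp.name AS role_name, m.name AS member_name
FROM sys.database_role_members AS drm
JOIN sys.database_principals AS dp
  ON drm.role_principal_id = dp.principal_id
JOIN sys.database_principals AS m
  ON drm.member_principal_id = m.principal_id
WHERE dp.name = 'db_datareader';
```

A detective control like this doesn't prevent the shortcut, but it does mean someone other than the DBA eventually knows the grant exists.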
Scenarios like this are why an inconsistently applied control is not a control at all. There were controls in place to protect data security. They were bypassed because the user was "trustworthy." And because there was a security compromise that, at first glance, seemed like no big deal, not only was there the potential for sensitive data to be lifted from the organization, but no one was the wiser. That is, until some of that data starts appearing on the Internet, or a batch of folks get hit with identity theft and investigators trace it back to the organization. Had the data security control been followed appropriately in the first place, there truly would have been "no harm, no foul" with respect to the sensitive data.