Corporate Hackers

  • Comments posted to this topic are about the item Corporate Hackers

  • Agreed, Steve. I think perhaps one of the biggest problems is that the report makes an implicit assumption that hackers are external. Most employees will realise there's a lot of system access logging going on (indeed, they often overestimate how much), and anyone wishing to obfuscate their access trail is, effectively, hacking from the inside.

    In my opinion, the most fundamental hurdle is actually deciding what you're trying to guard against. Or, to put it another way, work out and recognise the biggest threats. One of the most effective tools against social engineering attacks is education, but that's not going to happen until a company actually identifies what it wants to achieve with its security systems and processes.

    It all boils down to the same IT problem. Implementing a tool 'cos what it does seems sensible is the tail wagging the dog. Look at what you're trying to achieve first, then implement the tools necessary to do that, and the dog/tail combo will work as designed.

  • I think much of the internal hacking comes from inadequately defining the controls environment for our applications, leaving security concerns until the last minute. Assessing the risks of an application and the controls necessary to mitigate those risks should be one of the first steps in developing a new application. Who needs access to the data? Are there data privacy concerns with the data? What sort of auditing is required?

    In general, access to data should be explicitly authorized. This is facilitated by requiring all data to have a real person as an owner. The data owner is responsible for approving access to the data and ensuring that appropriate, documented controls exist to reduce risk to an acceptable level. The data owner should periodically review who has access to the data, and drop those users who no longer have a need to use the data.
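
    A minimal sketch of what such a periodic review could look like, assuming the current grants can be exported and the owner keeps a signed-off list (every account name below is hypothetical):

        # Periodic access review: flag accounts the data owner has not
        # (or no longer) approved. All names here are made up.
        approved_by_owner = {"jsmith", "mlopez", "payroll_svc"}
        currently_granted = {"jsmith", "mlopez", "payroll_svc", "contractor7", "old_dba"}

        for account in sorted(currently_granted - approved_by_owner):
            print(f"REVOKE: {account} has access but no current owner approval")
        for account in sorted(approved_by_owner - currently_granted):
            print(f"MISSING: {account} is approved but has no access")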

    All data should be classified as to its importance to the organization and how it is to be used. This includes determining retention periods, which are often driven by statutory concerns. There should also be policies regarding working with data outside the normal environment. For example, there might be a policy that says personnel data will never be accessed outside the standard production and development environments. This means no transfer to flash drives, laptops, etc. If there is a real need, ensure that such data is encrypted on the flash drive or laptop.
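
    As an illustration of that "encrypt it before it leaves" step, here is a rough sketch in Python using the third-party cryptography package; the file names are made up and key handling is deliberately simplified:

        # Encrypt an extract before it is copied to a flash drive or laptop.
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()   # store this securely inside the normal environment
        cipher = Fernet(key)

        with open("personnel_extract.csv", "rb") as f:
            plaintext = f.read()
        with open("personnel_extract.csv.enc", "wb") as f:
            f.write(cipher.encrypt(plaintext))
        # Only the .enc file goes onto the removable media; the key stays behind.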

    Most of this is common sense. The key is to understand that data security is sometimes incompatible with increased productivity, and that we occasionally will have to use the least convenient method to manage and use data.

  • Here in the northeast we recently had a data breach at the parent company of TJ Maxx and Marshalls. In this breach, a couple of guys simply drove through the parking lot with laptops running and pulled thousands of credit card numbers and cardholder details out of the air. As the story emerged, though, it was made very clear that this was information that (#1) the company was not supposed to be storing and (#2) certainly never should have been transmitted across wireless networks. This points to the greatest enemy of data security. An enemy so dangerous and pervasive, you can talk about any kind of security you want and I assure you, you will never defeat this enemy.

    That enemy? Stupidity.

    Personally, I am less interested in "improvements" in systems such as SQL Server to aid with security, and more interested in a great leap in common sense - the one resource that seems to be in very short supply these days.

    The city of Troy was allegedly a fortress. It was un-invadable. Its walls were solid stone. It was a city that in its time was said to be un-breachable. Then some idiot saw a large wooden horse just outside the city gates, opened the doors, and rolled in the crudely made giant wooden horse, never thinking "Hmmmm, that's just about big enough to hold a small army of invaders..." Shortly thereafter, Troy fell.

    Hence, as it was in history, and is now, and (God help us) will be for what seems a long time to come - the greatest danger to man, whether he is guarding impenetrable gates or trying to keep data secure, is stupidity - and you cannot program around that.

    Just once, just one time I would like to see a dialog message saying "What you are about to do is really stupid. Are you sure you want to do this?" THAT would go a lot farther in improving security than anything I have seen.

  • blandry (7/14/2009)

    Just once, just one time I would like to see a dialog message saying "What you are about to do is really stupid. Are you sure you want to do this?" THAT would go a lot farther in improving security than anything I have seen.

    Point taken, but you're assuming they would read the message, instead of just clicking Yes/Ok.

  • blandry (7/14/2009)

    Just once, just one time I would like to see a dialog message saying "What you are about to do is really stupid. Are you sure you want to do this?" THAT would go a lot farther in improving security than anything I have seen.

    I did include a message similar to this in one of my applications. Most users of the application found it humorous; one took offense, and I was asked to soften (dumb down) the message.

    I agree that software systems will only go so far in solving security problems. Common sense, or perhaps, user education and continued re-education, is needed to complete the loop.

  • I'll agree with most of the thread comments: trying to implement meaningful security is a people problem, and a business problem, and a discipline problem. Software problems are very rarely more important than the other problems.

    This gets more true the more effective you assume your opposition is. In most cases, people assume their opposition is random ("who would / why would anyone / no-one would want to break in here"), unskilled ("Who could do that"), and/or too difficult to even consider dealing with ("Two people working together? We can't possibly deal with that").

    Proper security requires constant vigilance by all users, constant discipline (behavior in accord with the rules of conduct), and constant enforcement of that. None of that is easy. All of that has to be backed up by, and followed by, the very highest levels of the organization.

    Software tools are merely a small part of security; almost the least part. Physical and communications security is almost always more important (physical access and moderate capability render most levels of software security irrelevant, including encryption if your system is still running and decrypting data without human intervention).

    Most companies I've been exposed to are somewhere between mostly and completely insecure if you assume the opposition is semi-competent industrial espionage. Only one vendor I ever worked with actually did so much as a phone call to verify the fingerprint of an encryption key (i.e. to see if we, or at least someone at the phone number they had for us, had actually sent the key they received).
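
    For what it's worth, that phone-call check is easy to script: compute a fingerprint of the key file you received and read it aloud to the sender. A rough sketch (the file name is hypothetical):

        # Fingerprint a received key file so the sender can confirm it out of band.
        import hashlib

        with open("vendor_public_key.pem", "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()

        # Read this aloud (or compare the first and last few groups) with the sender.
        print(":".join(digest[i:i + 4] for i in range(0, len(digest), 4)))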

    In a somewhat extreme, but in some industries ideal, model, "No, Mr. President, neither you nor the CEO may see that information, much less download it to your laptop. HR can see this half, and the CFO can see that half." not only doesn't get you fired, but is also the correct and expected response.

  • blandry (7/14/2009)

    Just once, just one time I would like to see a dialog message saying "What you are about to do is really stupid. Are you sure you want to do this?" THAT would go a lot farther in improving security than anything I have seen.

    The only "are you sure" messages that matter are ones that take some level of mental interaction, i.e. "Type in the Employee ID in this text box, and then click the button for the corresponding employee". Single misclicks and typos are protected against. Looking at the wrong line is not.

  • The other problem with warnings is that if you see them too much, you just click and stop paying attention.
