The Security Payoff

  • Great points made. One obvious vulnerability in a system of "bounties as standard practice" is the temptation to pair up, where one member gets employed and writes hackable code, while the other hacks it. This is what we have long joked (and half suspected) Symantec, McAfee, and other anti-virus software manufacturers of doing--the guy in one room writes the virus, the guy in the next room writes the anti-virus. I am pleased to hear what United did, though.

    And as far as punitive damages awarded to victims of crappy code...hm...it's too juicy the way it is now with all the cat-and-mouse and subsequent embarrassment. I am, however, very upset that every taxpayer living under my roof has received letters from the IRS saying that its website was hacked and all of our social security numbers, tax records, and financial information were stolen. This was obviously because of their negligence, but I'm not allowed to sue or fine them for it. No, instead we have to file everything by paper and snail mail, and go through additional verification steps with every correspondence. This is because they did us the favor of "flagging" our accounts.

  • reminds me of this Dilbert cartoon

    http://dilbert.com/strip/1995-11-13

  • I hope that we find a way to mature this industry and start to build better, standard, well engineered practices and habits that encourage secure, robust, well written code.

    One of the biggest hindrances to the above is all the disagreements over what exactly the "better, standard, well engineered practices and habits" should be, some of which make theological and/or political disagreements seem like the pinnacle of amity and brotherly love.

    Just google "never use stored procedures" for a 'simple' example.

    ____________
    Just my $0.02 from over here in the cheap seats of the peanut gallery - please adjust for inflation and/or your local currency.

  • I'm just sick and tired of having to lock my door. No matter how good a lock I get, there's always someone out there that can figure a way through/around it.

  • I'm not sure "coding better" is the answer. I'll grant that there are some basics that are often missed (SQL injection!), but often the vulnerabilities are in the installation or configuration. SQL Server has a good track record for security, but install it public-facing on port 1433 with a blank SA password and you have a mess in the making. On the coding side, static and dynamic scans can catch a lot of stuff (not all of it security related, but all good), but internal and external vulnerability scans are the best insurance against a human slip, the kind that opens the wrong port or IP. I've seen orgs that run those before a server goes into production, then monthly/quarterly after that. I'm an advocate of running them every day! Of course even scans don't guarantee that you'll find a vulnerability or detect a breach, but it's a strong card to play.

    There are always going to be people that want to take rather than earn and people that will go to extraordinary lengths to do that, something I'm reminded of when I see the occasional story about someone dragging a free standing ATM out of a store and down the road. I'm not at all sure that we guard our data/systems proportionally to the value.
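
    The SQL injection basic mentioned above comes down to one habit: never splice user input into the SQL text. Here is a minimal sketch, using Python's sqlite3 as a stand-in for SQL Server (the table and data are invented for the example):

```python
import sqlite3

# Toy in-memory database standing in for a real server-side table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Vulnerable: attacker-controlled input is spliced into the SQL string,
# so the input can rewrite the query itself.
evil = "x' OR '1'='1"
unsafe_rows = conn.execute(
    f"SELECT role FROM users WHERE name = '{evil}'"
).fetchall()
print(unsafe_rows)  # [('admin',)] -- the WHERE clause was defeated

# Safe: a parameterized query passes the input as data, never as SQL.
safe_rows = conn.execute(
    "SELECT role FROM users WHERE name = ?", (evil,)
).fetchall()
print(safe_rows)  # [] -- no row is literally named "x' OR '1'='1"
```

    The same discipline applies on the T-SQL side: use sp_executesql with parameters (or parameterized client calls) rather than EXEC on a concatenated string.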

  • Offering bounties to the public is good, but offering bounties to the company's internal IT staff would be even more effective. Nail down these security issues before the system goes live.

    However, we have to be careful about offering large cash bounties. It makes me wonder if there could be incentive for a fraudulent software engineer or network admin within the company to "accidentally" leave a hole open just so a cohort working on the outside can "discover" it, report it, and then split the reward.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Steve, et al., I've got to ask a clarifying question. Are you speaking about applications that are outward facing? Perhaps what you're saying may not apply to in-house only applications?

    The reason why I'm asking is that in this new state job I'm in, I work for one of the largest departments in my state's government. Over many years, various small divisions throughout the state have written their own applications to do their jobs. Since I've only recently joined, I can't say what all of their reasons were. I'm guessing they didn't have the support from a centralized IT that we have in place now, or they were so far out in the boondocks that they just had to write their own.

    All of these applications are Microsoft Access apps. I've seen only 3 in the few months I've been here, but it's my understanding that the number of rogue MS Access apps throughout this department runs into the hundreds. The apps I've seen are horribly written, but they were written by people who were at least knowledge workers, not programmers. I've got years of experience and a lot of training; the people who wrote what we're having to deal with weren't like me. They probably had to do the best they could with what they had. I'm becoming more charitable towards them because they were probably working under a directive to get something done, with almost no resources to do it with. Hey, under the circumstances they didn't do too badly.

    But there's nothing like security built into any of these things. On the other hand, none of them are outward facing at all. Only one of them has an intranet website; the rest rely upon MS Access as their frontend. I'm guessing what they had to do was record whatever data they needed to track, but the deliverable was a report. Most likely it was either printed out and handed in, or at best exported and sent as a file attachment to an email. Under these circumstances, do you feel as though there's still a need for the security you describe in this article?

    Kindest Regards, Rod Connect with me on LinkedIn.

  • andrew_dale (7/30/2015)


    reminds me of this Dilbert cartoon

    http://dilbert.com/strip/1995-11-13

    Exactly. Fun cartoon.

  • scoan (7/30/2015)


    andrew_dale (7/30/2015)


    reminds me of this Dilbert cartoon

    http://dilbert.com/strip/1995-11-13

    Exactly. Fun cartoon.

    I pride myself on the average number of bug-free lines of code I write per day. My productivity really shot up when I found a T-SQL formatter that would automatically insert line feeds after each comma. 🙂


  • scoan (7/30/2015)


    andrew_dale (7/30/2015)


    reminds me of this Dilbert cartoon

    http://dilbert.com/strip/1995-11-13

    Exactly. Fun cartoon.

    I remember that Dilbert. Very funny.


  • Eric M Russell (7/30/2015)


    Offering bounties to the public is good, but offering bounties to the company's internal IT staff would be even more effective. Nail down these security issues before the system goes live.

    However, we have to be careful about offering large cash bounties. It makes me wonder if there could be incentive for a fraudulent software engineer or network admin within the company to "accidentally" leave a hole open just so a cohort working on the outside can "discover" it, report it, and then split the reward.

    Good idea. Offering something internally is good as long as you're careful about people gaming the system. It's easy for someone to create bugs for their friends to find.

  • Rod at work (7/30/2015)


    Steve, et al., I've got to ask a clarifying question. Are you speaking about applications that are outward facing? Perhaps what you're saying may not apply to in-house only applications?

    I think you need to code securely (and config securely) as much as possible. Plenty of the breaches have occurred because one machine lets an attacker in and they then move through internal systems. Assuming those are protected is a bad idea.

  • Steve Jones - SSC Editor (7/30/2015)


    I think you need to code securely (and config securely) as much as possible. Plenty of the breaches have occurred because one machine lets an attacker in and they then move through internal systems. Assuming those are protected is a bad idea.

    Yeah. Microsoft has been grooming us for years with this distinction between trusted connections and non-trusted connections. But I wonder, is that legit? I can imagine us all using candid comments in our code. If we were to add a comment regarding authentication, and if that comment were to consider the ramifications of an intruder surreptitiously accessing this code at this juncture, would we say something like, "Maybe should re-authenticate, but if someone gets here we're f*d anyway"?

    Having developed several systems requiring security over the years, mostly sensitive information having to do with salary and demographics for HR, I find that at some level every system finds itself in a vulnerable state. The question is whether that vulnerability is warranted--not unlike a personal situation with a spouse or significant other or trusted friend or professional counselor. The best I can do as a developer is document very clearly what's at stake if someone accesses this code at that level of privilege. This is kind of a cop-out, but I develop systems; I don't run them; I don't provide insurance against social engineering...

  • If you are so concerned about security, you should start setting an example by always using HTTPS for the sqlservercentral.com site.

    Every time I come to this site it prompts me to enter my email address and password in an http form. It is too tiring to edit it to https EVERY DAMN TIME in the address bar. Sometimes I may forget and enter the password in the insecure http form.
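
    The manual address-bar fix described above is what a server-side redirect (plus an HSTS header) automates: every http request is answered with a redirect to the https equivalent. A minimal sketch of the scheme upgrade in Python, as a hypothetical helper rather than anything sqlservercentral.com actually runs:

```python
from urllib.parse import urlsplit, urlunsplit

def force_https(url: str) -> str:
    """Rewrite an http:// URL to https://, leaving host, path, and query intact."""
    parts = urlsplit(url)
    if parts.scheme == "http":
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(force_https("http://www.sqlservercentral.com/Forums/"))
# https://www.sqlservercentral.com/Forums/
```

    In a real deployment the web server would answer each http request with a 301 to the rewritten URL and send a Strict-Transport-Security header, so browsers remember to use https and users never have to edit the address bar.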

