Secret Software Security

  • Comments posted to this topic are about the item Secret Software Security

The problem with vehicles is the long fix cycle, which in some cases is infinite, i.e. the flaw won't ever be fixed.

    We've seen the size of the problem that Volkswagen are facing with their emissions cheat software. It's so big that VW don't really know what it would take to fix it. VW are simply the ones that got caught.

There is hacking with the owner's permission (software modifications for performance tuning purposes) and hacking without permission. As one is enabled, the other becomes possible.

It's a bit of a quandary, and the parallel is "Unsafe at Any Speed" by Ralph Nader.

  • I believe that a common framework is applicable:

      a) Security flaw identified by party p.

      b) Vendor notified.

      c) Party p free to issue details of security flaw.

The key item is that the minimum time between steps b) and c) is specified at industry level. This time span can cater for the industry/sector, e.g. online banking might be 30 days due to the severity combined with the ease of deployment, whereas the automotive industry might be 14 months due to the time to fix plus a 12-month period in which all cars can be considered due for a service or inspection (MOT here in the UK).

If the vendor does not act quickly enough, then they have knowingly allowed their customers to remain vulnerable; yet if they do, the party that identified the issue can take credit for it (which I believe is important in the security sector).
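To make the timing concrete, here is a minimal sketch (Python, with purely illustrative window lengths and industry names; these are assumptions, not an agreed standard) of how the earliest public disclosure date could be derived from the vendor notification date:

```python
from datetime import date, timedelta

# Hypothetical minimum delays between vendor notification (step b)
# and public disclosure (step c); in practice these would be agreed at industry level.
DISCLOSURE_WINDOWS = {
    "online_banking": timedelta(days=30),    # severe impact, fixes are quick to deploy
    "automotive": timedelta(days=14 * 30),   # roughly 14 months: fix time plus an annual service/MOT cycle
}

def earliest_public_disclosure(industry: str, vendor_notified: date) -> date:
    """Earliest date on which party p may publish details of the flaw."""
    return vendor_notified + DISCLOSURE_WINDOWS[industry]

# Example: a flaw reported to an online bank on 2 November 2015
# could be published from 2 December 2015 onward.
print(earliest_public_disclosure("online_banking", date(2015, 11, 2)))
```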

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Gary Varga (11/2/2015)


    I believe that a common framework is applicable:

      a) Security flaw identified by party p.

      b) Vendor notified.

      c) Party p free to issue details of security flaw.

The key item is that the minimum time between steps b) and c) is specified at industry level. This time span can cater for the industry/sector, e.g. online banking might be 30 days due to the severity combined with the ease of deployment, whereas the automotive industry might be 14 months due to the time to fix plus a 12-month period in which all cars can be considered due for a service or inspection (MOT here in the UK).

If the vendor does not act quickly enough, then they have knowingly allowed their customers to remain vulnerable; yet if they do, the party that identified the issue can take credit for it (which I believe is important in the security sector).

    This seems reasonable to me.

  • Iwas Bornready (11/2/2015)


    Gary Varga (11/2/2015)


    I believe that a common framework is applicable:

      a) Security flaw identified by party p.

      b) Vendor notified.

      c) Party p free to issue details of security flaw.

The key item is that the minimum time between steps b) and c) is specified at industry level. This time span can cater for the industry/sector, e.g. online banking might be 30 days due to the severity combined with the ease of deployment, whereas the automotive industry might be 14 months due to the time to fix plus a 12-month period in which all cars can be considered due for a service or inspection (MOT here in the UK).

If the vendor does not act quickly enough, then they have knowingly allowed their customers to remain vulnerable; yet if they do, the party that identified the issue can take credit for it (which I believe is important in the security sector).

    This seems reasonable to me.

Until party p uses the vulnerability to perform an attack during the time between steps b) and c). Or parties q, r and s find the same thing and use the vulnerability during that same window.

Companies do know about some of their vulnerabilities and knowingly allow their customers to remain vulnerable. Then party p claims credit for finding it and we're all left wondering why it wasn't announced 4 years earlier when it was found.

    I'm sure we all remember a recent ignition switch problem that was known 10 years beforehand. I know this isn't a vulnerability, but the root cause is the same - not addressing a known defect. In this case, people died.

    Don't get me wrong, I can't anticipate new attack methods and I cannot call myself an expert in all areas of security. But some of the known vulnerabilities should be handled during the original design. Time and cost get in the way and I don't have all the answers. I believe it will take some severe consequences to make companies wake up and take security more seriously.

  • Ed Wagner (11/2/2015)


    Iwas Bornready (11/2/2015)


    Gary Varga (11/2/2015)


    I believe that a common framework is applicable:

      a) Security flaw identified by party p.

      b) Vendor notified.

      c) Party p free to issue details of security flaw.

The key item is that the minimum time between steps b) and c) is specified at industry level. This time span can cater for the industry/sector, e.g. online banking might be 30 days due to the severity combined with the ease of deployment, whereas the automotive industry might be 14 months due to the time to fix plus a 12-month period in which all cars can be considered due for a service or inspection (MOT here in the UK).

If the vendor does not act quickly enough, then they have knowingly allowed their customers to remain vulnerable; yet if they do, the party that identified the issue can take credit for it (which I believe is important in the security sector).

    This seems reasonable to me.

Until party p uses the vulnerability to perform an attack during the time between steps b) and c). Or parties q, r and s find the same thing and use the vulnerability during that same window.

Companies do know about some of their vulnerabilities and knowingly allow their customers to remain vulnerable. Then party p claims credit for finding it and we're all left wondering why it wasn't announced 4 years earlier when it was found.

    I'm sure we all remember a recent ignition switch problem that was known 10 years beforehand. I know this isn't a vulnerability, but the root cause is the same - not addressing a known defect. In this case, people died.

    Don't get me wrong, I can't anticipate new attack methods and I cannot call myself an expert in all areas of security. But some of the known vulnerabilities should be handled during the original design. Time and cost get in the way and I don't have all the answers. I believe it will take some severe consequences to make companies wake up and take security more seriously.

I agree that as much as possible should be "handled during the original design", but I guess we are talking about what hasn't been.

It is no change from the current situation that p, q, r and s can exploit the flaw.

The difference here is that p wants to announce the vulnerability but the vendor demands silence. The suggested framework just allows for periods of both in order to mitigate both the risk of a public vulnerability and the risk of a never-dealt-with private one.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • ABSOLUTELY release the information!

Companies need to start putting forth effort on security; right now they don't. Embarrassing revelations about the lack of security might help.

The argument against releasing the information is similar to the arguments for gun control. If you outlaw guns, only outlaws will have guns. If you don't share information about auto companies' lack of security and (extremely) poor designs, only outlaws will be aware of the exploits!

    To put it another way, how long before we finally admit that criminals are going to find a way to do what they want to do? Hiding under the covers only works for little kids.

The other point that needs to be made is that the auto companies need to be held responsible for the idiots that designed systems that would allow someone to control your vehicle through the radio. Seriously? It never occurred to them, with all of the news about security vulnerabilities, that someone might take advantage of an open invitation like that?

    Dave

Disclose to the vendor, then disclose to the public at some later time; I'd use 3 months.

Vehicles are no different in principle. Recalls happen, and a security fix demands a recall. Don't like it? Don't make cars; it's just that simple.

    Inflexible? Oh yes, but then again vehicles are very good at killing people even without the help of some sick bastard who gets their jollies by seeing how many cars he can pile up on the interstate.

    With great power, yada yada. Not just pretty words...

  • David.Poole (11/2/2015)


The problem with vehicles is the long fix cycle, which in some cases is infinite, i.e. the flaw won't ever be fixed.

Is the problem that things won't get fixed? Or that we don't allow someone other than the vendor to fix the code? Perhaps third parties should be allowed?

  • Gary Varga (11/2/2015)


    I believe that a common framework is applicable:

      a) Security flaw identified by party p.

      b) Vendor notified.

      c) Party p free to issue details of security flaw.

The key item is that the minimum time between steps b) and c) is specified at industry level. This time span can cater for the industry/sector, e.g. online banking might be 30 days due to the severity combined with the ease of deployment, whereas the automotive industry might be 14 months due to the time to fix plus a 12-month period in which all cars can be considered due for a service or inspection (MOT here in the UK).

If the vendor does not act quickly enough, then they have knowingly allowed their customers to remain vulnerable; yet if they do, the party that identified the issue can take credit for it (which I believe is important in the security sector).

I think 12-14 months, with more connected cars, is unreasonable. Personally I think 90 days should be good, 180 at the outside. Patches can be issued, and there should be standing test systems ready, with vendors having staff to evaluate them from a security standpoint, or perhaps an industry standards group that can review things.

  • djackson 22568 (11/2/2015)


    ABSOLUTELY release the information!

    ...

The other point that needs to be made is that the auto companies need to be held responsible for the idiots that designed systems that would allow someone to control your vehicle through the radio. Seriously? It never occurred to them, with all of the news about security vulnerabilities, that someone might take advantage of an open invitation like that?

    Completely agree here. They should have resources devoted to this, with insurance being in place to force them to respond to issues. I'd like to see recall penalties after 90 days of reports.

  • Steve Jones - SSC Editor (11/2/2015)


    djackson 22568 (11/2/2015)


    ABSOLUTELY release the information!

    ...

The other point that needs to be made is that the auto companies need to be held responsible for the idiots that designed systems that would allow someone to control your vehicle through the radio. Seriously? It never occurred to them, with all of the news about security vulnerabilities, that someone might take advantage of an open invitation like that?

    Completely agree here. They should have resources devoted to this, with insurance being in place to force them to respond to issues. I'd like to see recall penalties after 90 days of reports.

I'd agree if they're real penalties. I don't want to see some minimal slap on the wrist applied to something serious. I also have to agree with your 90-day policy. It should simply be a cost of doing business, and they shouldn't be allowed to blame a third-party company that no longer exists. If they did it right the first time, they could avoid the cost. So they wouldn't be first to market with a self-driving car... At least it would work right and people wouldn't die.

  • Cars are a definite problem. There was the recent Jeep hack where Chrysler/Fiat sat on the vulnerability until the researchers got tired of their stonewalling and went public in Wired. THEN Chrysler did a recall and mailed out USB sticks for owners to do updates. Teslas have it easy as all the cars have a cellular connection and can be patched remotely.

This is one of the problems with the Internet of Things. If I have an IoT toaster, a low-margin item, is the vendor going to spend a lot of time monitoring it for vulnerabilities as time goes by? Is the OS even updatable? It doesn't seem like much, but when you consider all of the easily hacked routers, what's to prevent an idiot from going in and turning my toaster on high after I leave for work and burning down my house? A recent article talked about a worm that went around inoculating insecure wireless routers to prevent them from being compromised; the problem is that no one knows who this party was, so they could just be setting up those routers for their own nefarious ends later on.

The fundamental problem is that the people who created the internet trusted in the better nature of people and didn't try to build robust security from the ground up. So we're left with pre-perforated protocols to try to secure. Security is improving, but there are still far more problems than solutions.

    There was a recent Onion article that China is having problems hiring enough hackers because the USA is discovering more vulnerabilities too quickly.

    -----
Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson

  • Steve Jones - SSC Editor (11/2/2015)


    djackson 22568 (11/2/2015)


    ABSOLUTELY release the information!

    ...

The other point that needs to be made is that the auto companies need to be held responsible for the idiots that designed systems that would allow someone to control your vehicle through the radio. Seriously? It never occurred to them, with all of the news about security vulnerabilities, that someone might take advantage of an open invitation like that?

    Completely agree here. They should have resources devoted to this, with insurance being in place to force them to respond to issues. I'd like to see recall penalties after 90 days of reports.

Since not everybody will bring their vehicle in (some percentage won't do so), how about this: treat it the same as firmware updates for your router.

    They have the ability to connect to your auto (car) processor through some terminal. Require that they allow connections through the USB port that new vehicles have on the dash. Allow the consumer to download a file, and upload it to the vehicle to patch flaws.

    Yeah, yeah, someone will point out that this could be an issue if it breaks the car. I am choosing to ignore that side for now, although I recognize it is a concern. My point is that I am not going to go to the dealer every time a patch or recall exists, because I don't want to be a captive audience for them to sell me something I don't want. So force the manufacturer to provide a fix that the owner can implement, maybe allow me to pay for my trusted local mechanic to do it for me. Basically, don't require me to go to the dealer!
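To illustrate one way the "breaks the car" worry could be reduced, here is a minimal sketch (Python; the file name and digest are hypothetical placeholders, not any manufacturer's actual process) of checking a downloaded update against a published checksum before it ever goes near the vehicle:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """SHA-256 digest of a downloaded update file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def safe_to_install(update_file: Path, published_digest: str) -> bool:
    """Only proceed if the file matches the digest the manufacturer published."""
    return sha256_of(update_file) == published_digest.lower()

# Hypothetical usage: file name and digest are placeholders.
update = Path("ecu_patch.bin")  # file the owner downloaded from the manufacturer
if update.exists() and safe_to_install(update, "0123abcd..."):
    print("Digest matches - copy to the USB stick and apply at the car.")
else:
    print("Missing file or digest mismatch - do not install.")
```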

    Dave

  • Wayne West (11/2/2015)


    Cars are a definite problem. There was the recent Jeep hack where Chrysler/Fiat sat on the vulnerability until the researchers got tired of their stonewalling and went public in Wired. THEN Chrysler did a recall and mailed out USB sticks for owners to do updates. Teslas have it easy as all the cars have a cellular connection and can be patched remotely.

    This is one of the problems with the Internet of Things. If I have an IoT toaster, a low-margin item, is the vendor going to spend a lot of time to monitor it for vulnerabilities as time goes by? Is the OS even updatable? It doesn't seem like much, but when you consider all of the easily hacked routers, what's to prevent an idiot from going in and turning my toaster on high after I leave for work and burning down my house? A recent article talked about a worm that went around inoculating insecure wireless routers to prevent them from being compromised, the problem is that no one knows who this party was, so they could be just setting up those routers for their own nefarious ends later on.

    The fundamental problem is that the people who created the internet were trusting in the better nature of people and didn't try to build robust security from the ground up. So we're left with pre-perforated protocols to try to secure. And the security is improving, but there's still far more problems than solutions.

    There was a recent Onion article that China is having problems hiring enough hackers because the USA is discovering more vulnerabilities too quickly.

Aww, I feel so bad for China... Not! However, given how poor the US is at patching, I have my doubts as to how true this is. Given the complete lack of truthfulness in the media, I wouldn't be surprised to find that the story had little to no truth in it.

    Dave
