Giving Computers Ethics

  • Scenario: Someone steps out in front of your car. It stops, as it's programmed to. An ambulance approaches from behind with sirens going. Your car is obstructing the way. The pedestrian stands there, looking at the ambulance, oblivious to you and to the fact that they are the ultimate cause of the obstruction. Should the AI:

    1. sound the horn, then move forwards: the ambulance is more important than a pedestrian.
    2. sound the horn and wait for the pedestrian to move.
    3. instruct the passenger to decide.

    Now consider the passenger is blind/deaf.

    Scenario: There's an accident where a driverless car hits and kills several pedestrians due to an undetected fault in the front sensor array. Who is responsible?

    1. the passenger (the car was driving autonomously)
    2. the owner (who might not be the passenger at this point)
    3. the manufacturer
    4. the AI
    5. no one

    These are the kinds of questions that need answering as part of the morals of AI drivers. The software is making the decisions, after all, so it needs to handle exceptional situations, or its use has to be limited to effectively being an autopilot (much as it is now), with a qualified driver on hand to take over.

    And a frivolous one to finish with:

    Does the AI need a driving license?

  • Unfortunately, this is a case where people give way too much credit to technology. The car avoids the obstacle. In the case of the ambulance coming up from behind, it is on its own to figure things out. If the car that stopped for an obstacle (it really does NOT know the obstacle is a human) can safely move, it does. Otherwise it sits there, blocked. Technology should NOT attempt to solve for every possible stupid human trick.

    "What happens when a collision is unavoidable?" is a fallacy. The simple answer is that the car will brake and possibly modify the course to attempt to minimize the collision. But unlike humans the car won't be distracted talking to its spouse while changing the radio station and checking its eyebrows.

  • My prediction is that for the first five years or so, the software driving driverless cars will play it safe and come to a complete stop when confronted with a scenario that falls outside safe operating parameters, at least within downtown or residential areas, rather than attempting to navigate through or around it. If there is no human behind the wheel, it may be stuck in this suspended state for several minutes until things resolve themselves. Folks will complain about "stupid driverless cars". Maybe there will be a feature allowing an employee to take over the driving remotely with 360-degree cameras, sort of like how unmanned military drones work now.
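
    As a rough illustration of that prediction - the confidence score, thresholds and remote hand-off below are all assumptions, not how any vendor actually does it:

    ```python
    # Hypothetical "play it safe" fallback loop for the prediction above.
    # The classifier confidence, thresholds and remote hand-off are assumptions.
    SAFE_CONFIDENCE = 0.9        # below this, the scenario is "outside safe operating parameters"
    MAX_STOPPED_SECONDS = 120    # how long to sit suspended before asking for help

    def control_step(scenario_confidence: float, seconds_stopped: int) -> str:
        if scenario_confidence >= SAFE_CONFIDENCE:
            return "drive_normally"
        if seconds_stopped < MAX_STOPPED_SECONDS:
            # Come to a complete stop and wait for things to resolve themselves.
            return "hold_full_stop"
        # Stuck too long: hand control to a remote employee, drone-style,
        # using the car's 360-degree cameras.
        return "request_remote_operator"
    ```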

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Eric M Russell wrote:

    My prediction is that for the first five years or so, the software driving driverless cars will play it safe and come to a complete stop when confronted with a scenario that falls outside safe operating parameters, at least within downtown or residential areas, rather than attempting to navigate through or around it. If there is no human behind the wheel, it may be stuck in this suspended state for several minutes until things resolve themselves. Folks will complain about "stupid driverless cars". Maybe there will be a feature allowing an employee to take over the driving remotely with 360-degree cameras, sort of like how unmanned military drones work now.

    Does a car do a full emergency stop for a pigeon? Potentially injuring the people in the car behind who were (at their own risk) travelling too closely. Cars can't make those choices. The car can only protect the people in the vehicle... the "pigeon" could have been just a shopping bag blown by the wind.

    About 25 years ago I sat in a Harrier GR7 jump jet (I didn't fly it) - the avionics and tracking systems came online and they could see every rabbit in the field in front of the hangar. How does the system decide the good bunny from the evil bunny?

    MVDBA

  • Does a car do a full emergency stop for a pigeon? Potentially injuring the people in the car behind who were (at their own risk) travelling too closely. Cars can't make those choices. The car can only protect the people in the vehicle... the "pigeon" could have been just a shopping bag blown by the wind.

    Now here's something that has happened a few times to me: I don't have a self-drive car, but I do have one of those new 'forward-look braking assistants', aka an auto-brake. It's intended to sound an alarm if I'm too close to something, and it will brake if I'm doing less than 30mph.

    Only... to date, it's slammed the brakes on for a) rain, b) a car to the left of me turning left, c) the sign for the bend in the road ahead of me, d) I think it was a squirrel dashing across the road, but it could have been a rat, e) the car that pulled out in front of me and sped off (I was already braking when the alarm sounded and the brakes slammed on harder...) and f) I think just because it felt like it - there was nothing in the road ahead at all... and that's about it. The alarm has sounded when I'm doing over 30mph for any of the former cases, plus cars that cut across lanes, and again, for some odd reasons I couldn't identify; as there was no impact, it wasn't a problem. It's enough for me to consider driving at 31mph for safety (the alternative is to turn it off at the start of every trip, at which point, why have the system at all?).

    So I agree with Kiwood: tech is no substitute for a human brain, and at that point the idea of morality becomes moot: the AI simply lacks the understanding, from the information it has available, to make a moral decision. Even with ML and object recognition, there would simply be too many mistakes for morality to come into the equation. Indeed, it would only add to the complexity and produce erratic, unreliable results.

    Much like the brake assistant wanting to stop me hitting rain.

    Answer: Don't have the systems override the driver. Have them inform the driver instead. Then they can learn from the driver and eventually, perhaps, they'll know enough to be able to drive safely and handle the odd unexpected event.
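
    A toy sketch of the difference between the auto-brake behaviour described above (override below roughly 30mph, alarm only above it) and the inform-only approach suggested here; the speed threshold, names and advisory flag are assumptions, not any manufacturer's real logic:

    ```python
    # Illustrative only: contrasts "override the driver" with "inform the driver".
    # Threshold and action names are assumptions based on the post above.
    AUTO_BRAKE_SPEED_LIMIT_MPH = 30

    def respond_to_hazard(speed_mph: float, advisory_only: bool) -> list:
        """Actions the assistant takes when it thinks it has seen a hazard."""
        actions = ["sound_alarm"]            # always inform the driver
        if advisory_only:
            # Leave the braking (and the learning) to the human.
            return actions
        if speed_mph < AUTO_BRAKE_SPEED_LIMIT_MPH:
            # Current behaviour: slam the brakes on, even for rain, road signs
            # or the occasional squirrel.
            actions.append("apply_brakes")
        return actions
    ```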

  • steve.powell 14027 wrote:

    Answer: Don't have the systems override the driver. Have them inform the driver instead. Then they can learn from the driver and eventually, perhaps, they'll know enough to be able to drive safely and handle the odd unexpected event.

    There is a great TED talk on this - cars using sensors and cross-relating the data with masked medical data to predict spinal injuries and possible seating positions. Even better, sensors tracking whether you are about to fall asleep at the wheel. I can't remember the exact details, but the data collected so far suggests that they can predict your little in-car snooze up to 7 seconds before you sleep.

    That data will build better warning systems, and even though we don't get the benefit today, we will get a better "driver-assisted car" in the next generation.
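
    A small sketch of the kind of warning check that data might feed - the predictor, its input and the 7-second horizon are assumptions loosely based on the talk described above:

    ```python
    # Hypothetical drowsiness warning check; not a real system.
    from typing import Optional

    SNOOZE_WARNING_HORIZON_S = 7.0   # warn if a snooze is predicted within ~7 seconds

    def should_warn(predicted_seconds_until_sleep: Optional[float]) -> bool:
        """True if the driver should get a 'wake up' warning right now."""
        if predicted_seconds_until_sleep is None:   # model sees no drowsiness
            return False
        return predicted_seconds_until_sleep <= SNOOZE_WARNING_HORIZON_S
    ```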

    MVDBA
