Where We Need Better AI Disclosure and Responsibility

  • The way to make computer systems accountable is by designing them using logic rather than "AI" or ML.

    Consider the steps taken by lawmakers in drawing up legislation. Although legal language sometimes seems convoluted, the aim is to make the law as unambiguous as possible.

    A computer system is a formalisation of these rules. Of course a computer system, like a written rule, is still open to interpretation - so its decisions should always be open to appeal and evaluation by human beings.

    I find it hard to imagine a piece of legislation that could be derived precisely by feeding in a large amount of data.

    Staying with the legal theme, has anyone brought an unfair dismissal case against a company that uses automation to sack employees? If no clear explanation can be given for the dismissal, then any court should find in the plaintiff's favour. I see "AI" and ML as creating a field day for lawyers.

    With an RDBMS we have a system that is directly based on logic. We can take advantage of this to build rule-based systems that are fully accountable.
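    To make the point concrete, here is a minimal sketch of that idea, using Python's sqlite3 with a made-up schema and a single hypothetical rule (the table names, the "unexcused absences" rule, and the threshold are all my invention, not anything from the post). The point is that every outcome cites the explicit, human-readable rule that produced it:

```python
# Sketch: encode decision rules as explicit SQL rows so every
# outcome can be traced back to a named, human-readable rule.
# Schema and rule are illustrative assumptions only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (id INTEGER, name TEXT, unexcused_absences INTEGER)")
conn.execute("CREATE TABLE rules (rule_id TEXT, description TEXT, threshold INTEGER)")
conn.executemany("INSERT INTO employees VALUES (?, ?, ?)",
                 [(1, "Ada", 2), (2, "Ben", 7)])
conn.execute("INSERT INTO rules VALUES ('R1', 'More than 5 unexcused absences', 5)")

# Each flagged case is returned together with the exact rule that
# triggered it - the "explanation" a court or appeal would demand.
flagged = conn.execute("""
    SELECT e.name, r.rule_id, r.description
    FROM employees e JOIN rules r
      ON e.unexcused_absences > r.threshold
""").fetchall()
print(flagged)  # [('Ben', 'R1', 'More than 5 unexcused absences')]
```

    Contrast this with an ML classifier: here the rule row itself is the audit trail, and changing policy means changing a row, not retraining a model.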

    Human beings can produce a system of unfathomable obscurity, but it takes them years. It looks as if "AI" or ML can do this in a fraction of the time - which could be seen as an advance in productivity, but not productivity that does anything useful - quite the reverse.

  • A vehicle is subject to inspection by the relevant safety authorities before it goes on the market. As the "autopilot" is a new thing, I expect the necessary legislation is not yet in place to cover it. You might say the safe thing to do is to forbid its use until the legislation and the necessary test procedures are in place.

    I am not against autonomous vehicles, but I have always been certain that the problem is far harder than its proponents make out. There have been frequent announcements from Waymo about how far their vehicles have travelled without an accident, but as far as I am aware there have been no tests by independent bodies.

    The Waymo autonomous taxi service in Phoenix has a central control room where a human being can take control of a vehicle remotely. Interestingly, Waymo refuses to reveal how many vehicles there are per controller. If it is one-to-one, then the "autonomous" vehicle is effectively a drone. Drone taxis have advantages - the driver doesn't need to be with the vehicle, so any remote driver can take over - but they aren't autonomous.

    Computers can out-perform human beings at repetitive tasks in well-defined environments - industrial robots are successful for this reason. Getting robots to work in an open environment is much more difficult. As MIT robotics professor Rodney Brooks put it, in the struggle between robots and reality, reality is still winning.

  • I don't see why we couldn't issue a driver's license to an AI, and revoke it, the same way we do with people. If a company had to take all of its vehicles off the road due to reckless driving or other traffic violations, I think it would put a pretty high priority on safety.

    As for laws that aren't applicable to an AI, apply them to the decision chain that put the vehicle on the market.

  • I think revoking a license to operate makes sense. We could set up a course and ask the AI to get through it, just like we test human drivers.

    I would also like to see a defined set of situations and the reactions we expect: recognizing street signs, obeying speed limits, going around stopped vehicles, etc.

    Certainly, laws ought to be included. Here in CO, if there's a police car or other responder on the roadside, you need to change to the next lane over or slow to <=20 mph.
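    A set of situations with expected reactions can be written down as an executable checklist. The sketch below is hypothetical - the `plan_reaction` policy is a stand-in I wrote for illustration, not any real autonomous-driving API - but it shows the shape of such a test: each scenario paired with the reaction the rules demand, including the CO move-over rule described above.

```python
# Hedged sketch: scenario-based tests for a driving policy, with
# expected reactions stated as explicit, checkable pairs. The
# policy below is a stand-in; a real system would be exercised
# the same way through its own interface.
def plan_reaction(scenario):
    # Stand-in policy implementing the rules under test.
    if scenario.get("responder_vehicle_ahead"):
        if scenario.get("adjacent_lane_clear"):
            return "change_lane"       # move over when it's safe to
        return "slow_to_20mph"         # otherwise slow to <=20 mph
    if scenario.get("stopped_vehicle_ahead"):
        return "go_around"
    return "proceed"

# Each scenario is paired with the reaction the rules require.
scenarios = [
    ({"responder_vehicle_ahead": True, "adjacent_lane_clear": True}, "change_lane"),
    ({"responder_vehicle_ahead": True, "adjacent_lane_clear": False}, "slow_to_20mph"),
    ({"stopped_vehicle_ahead": True}, "go_around"),
    ({}, "proceed"),
]

for scenario, expected in scenarios:
    assert plan_reaction(scenario) == expected, (scenario, expected)
print("all scenarios passed")
```

    The licensing idea fits naturally on top: fail any scenario on the list, and the license to operate is suspended until the system passes again.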

