A group of people in San Francisco is protesting the growing number of cars by disabling self-driving vehicles with a traffic cone placed on the hood. Most likely the cone blocks some sensors and keeps the car from moving. For truly autonomous cars, this could be a real problem if small hacks like this can prevent them from working. I could imagine a strip of duct tape or some other smaller obstacle that might also work and be less noticeable to anyone looking at the vehicle.
Setting aside the humor, and the more-cars-in-cities debate, this is an interesting place where computing systems can fall down. People find ways to hack sensors, or other channels through which our software gets data from the real world, forcing behaviors we didn't expect. We might design for these cases, or we might not. Here, I don't know that the developers could do anything different, other than perhaps detect the obstruction and alert the owners that the vehicle can't move until someone clears it. They could add more sensors, but I suspect hackers would just block those too.
The bigger issue is that as we come to depend on computing devices that interact with and must work in the real world, we'll have more problems like this. We might see simple hacks, or even everyday events, that interfere with the machines completing their tasks. If we become very dependent on these AIs, then these hacks become more than just vandalism. Imagine getting into one of these vehicles in an emergency to go to the hospital, only to find it won't move because a sensor is blocked. That could be far more than inconvenient.
We certainly have similar issues with people in the real world. People flatten tires (how does an autonomous car change a tire?), they padlock doors and gates, and they might do any number of things that interfere with others' ability to live their lives. Humans can often adapt to these issues with small behavior changes, but what about computing devices? Will a new class of jobs come about to correct issues like this? Will we see "fix-it" robots that autonomously move around, removing cones from hoods or changing tires?
It's hard to even think about all the strange permutations of how autonomous computing devices might evolve and the kinds of interactions we could have with them. I'm not even sure the way we are approaching AI and autonomous systems is good. We often apply the same frameworks to them that we apply to other humans, and I'm not sure that's the best way to approach these decisions.
The future is looking very strange, and it's coming quickly. I'm not even sure what to think about issues like this.