In terms of the case mentioned, stop and think about the distances involved. 0.4 feet is 4.8 inches. Quite likely, your cell phone's screen is longer than that, so it's no wonder the officer thought the vehicle was closer than the 10 ft limit. Now, what I could see potentially coming from this would be several things:
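Just to put numbers on that, a quick back-of-the-envelope check (the 6.1 in screen size is an assumed "typical phone" figure, not from the article):

```python
# Sanity check: how big is a 0.4 ft gap, really?
gap_ft = 0.4
gap_in = gap_ft * 12            # 1 ft = 12 in
phone_screen_in = 6.1           # assumed typical phone screen diagonal

print(f"{gap_in:.1f} in")       # 4.8 in
print(gap_in < phone_screen_in)  # True -- the gap is shorter than the phone
```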
A) A method for the owner/operator of an autonomous vehicle (AV) to pull up the "black box" logging from the time in question, showing an officer they are correct or incorrect (think of showing an officer dashcam video when you were involved in or witnessed an accident).
B) A way for an officer to remotely pull up the logs of a vehicle's sensors (and this one is scary from a privacy point of view).
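For option A, the owner-facing tool could be as simple as filtering the black-box log to the time window in question. A minimal sketch, with field names I've invented for illustration (no real AV vendor's format is implied):

```python
from dataclasses import dataclass

# Hypothetical shape of one "black box" record -- field names are
# made up for this sketch, not any actual vendor schema.
@dataclass
class SensorLogEntry:
    timestamp_utc: str           # ISO-8601 time of the reading
    sensor_id: str               # e.g. "front_lidar_1" (invented name)
    measured_distance_ft: float  # distance to the nearest object

def entries_in_window(log, start, end):
    """Return entries recorded between start and end (ISO strings sort lexically)."""
    return [e for e in log if start <= e.timestamp_utc <= end]

log = [
    SensorLogEntry("2024-05-01T14:03:10Z", "front_lidar_1", 10.4),
    SensorLogEntry("2024-05-01T14:03:11Z", "front_lidar_1", 9.6),
    SensorLogEntry("2024-05-01T14:05:00Z", "front_lidar_1", 25.0),
]

# Show the officer only the moments that matter:
for e in entries_in_window(log, "2024-05-01T14:03:00Z", "2024-05-01T14:04:00Z"):
    print(e.timestamp_utc, e.measured_distance_ft)
```

Option B would presumably use the same kind of query, just invoked remotely, which is exactly where the privacy concern comes in.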
On the main thrust of the editorial, perhaps the biggest challenge I can see will be ensuring the security and integrity of both the data and the methods used to collect it. I can easily see several potential points along the data flow where changes could be made. The other point to consider when deciding whether to believe the data is, as you pointed out, the context. How many times has a disagreement occurred between people because one side didn't bring up a point that "everyone knows?" No matter how "smart" ML systems get, they'll only ever be as good as the data fed into them (Garbage In / Garbage Out) and how well they were initially "trained" by their developers (whose own biases *will* creep into the system).
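One standard way to address tampering along that data flow is to hash-chain the log, so that editing any earlier entry invalidates everything after it. A minimal sketch of the idea, not a production design (no signing, no secure clock, no trusted hardware):

```python
import hashlib
import json

def chain(entries):
    """Attach a hash to each entry, folding in the previous entry's hash."""
    prev = "0" * 64
    out = []
    for entry in entries:
        payload = prev + json.dumps(entry, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        out.append({"entry": entry, "hash": digest})
        prev = digest
    return out

def verify(chained):
    """Recompute the chain and report whether it still matches."""
    prev = "0" * 64
    for record in chained:
        payload = prev + json.dumps(record["entry"], sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != record["hash"]:
            return False
        prev = record["hash"]
    return True

log = chain([{"t": 1, "dist_ft": 10.4}, {"t": 2, "dist_ft": 9.6}])
print(verify(log))                   # True
log[0]["entry"]["dist_ft"] = 12.0    # tamper with an early reading...
print(verify(log))                   # False -- the whole chain breaks
```

This only proves the log wasn't altered after collection; it does nothing about garbage going in at the sensor itself, which is the GIGO problem again.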
Perhaps the best anyone can expect to achieve with such systems is "trust, but verify..."