Somewhat disturbing footage came out last week showing a pedestrian pushing a bike being run over by an Uber driverless car. The footage shows that the car did indeed fail to stop, despite the numerous sensors mounted on it that should have detected her.
As I discussed in a prior post, while self-driving cars are something of a new phenomenon, many driver-assist safety features (traction control, braking assist, etc.) have been around for quite some time. While they don’t completely eliminate the risk of accidents, they’ve certainly helped to make cars safer. So this accident raises a number of important questions, both about driverless car technology and about humans and human behaviour.
Firstly, it’s clear from the video that the victim (she was pushing a bike rather than riding one, so we’ll count her as a pedestrian) wasn’t paying attention to the road and wasn’t crossing at a designated crossing point. Even though it’s night, she doesn’t seem to look in the direction of the approaching car and its headlights (or respond to engine noise) until the last second. I don’t know about you, but I had the Green Cross Code drilled into me from a very early age, to the point where it’s now instinctive.
While driverless, the car did have a “driver” (or operator) inside as a backup. However, in the seconds leading up to the accident she too was distracted and not paying attention to the road. She appeared to be fiddling with the settings of something (the radio, the software; it’s unclear, but she was certainly not watching the road). So some fault has to be attributed to the operator as well. Indeed, this has all the marks of a standard road accident: two people not paying attention. The only real oddity is the driverless car element.
The reasons for the car not stopping aren’t clear. It’s possible there was a fault with the sensors. It’s possible that it simply didn’t have time to react. The expert opinion is that it should have had time, although I’d note that we don’t know the exact speed or road conditions. And given that many motorists don’t seem to have a clue how to handle cyclists, and that she had a bike with her, it’s possible the car’s software had the same blind spot.
Indeed, one (remote) possibility we need to consider is that the software in fact worked perfectly. The computer could have concluded that it had no time to stop: engaging the brakes would not have prevented a collision, but would have risked a skid or a rollover, endangering the occupants of the car (as well as any bystanders on the pavement or other road users), so it didn’t try to brake. In other words, cold as it might sound, the computer might have made the right call. That said, I’d point out that this is unlikely, as the expert opinion is that there was time to take avoiding action.
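To be clear, this is pure speculation on my part, but the trade-off described above can be sketched as a toy decision rule. Everything here (the friction coefficient, the 0.5 risk threshold, the function names) is invented purely for illustration and bears no relation to Uber’s actual software; real autonomous-vehicle planners are vastly more complex.

```python
G = 9.81  # gravitational acceleration, m/s^2


def stopping_distance(speed_ms: float, friction: float = 0.7) -> float:
    """Idealised braking distance v^2 / (2 * mu * g), ignoring reaction time.

    friction=0.7 is a rough dry-tarmac value, chosen only for illustration.
    """
    return speed_ms ** 2 / (2 * friction * G)


def choose_action(distance_to_pedestrian_m: float,
                  speed_ms: float,
                  skid_risk: float) -> str:
    """Return 'brake' or 'hold' under this invented decision rule."""
    if stopping_distance(speed_ms) <= distance_to_pedestrian_m:
        # The car can stop in time, so braking is unambiguously right.
        return "brake"
    # A collision is unavoidable either way; under this toy rule the planner
    # declines a hard brake only if the estimated skid/rollover risk to
    # occupants and bystanders exceeds an arbitrary threshold.
    return "hold" if skid_risk > 0.5 else "brake"
```

Even this crude sketch shows where the moral weight hides: it sits entirely in the `skid_risk` estimate and the threshold someone chose for it.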
However, I am speculating on this last point because it brings up a long-standing dilemma: what if a driverless car finds itself in a no-win scenario? How will it respond? Is it ethical to let a computer make life-or-death decisions? At the moment this is certainly a cause for concern, as the technology is new and clearly has some way to go before it can be considered reliable. But once that learning curve is completed, the only difference between allowing a computer to make life-or-death decisions and allowing a person to do so is that the computer will decide more quickly and base its decision on logic and facts, not emotion and instinct. The reality is that we humans are very bad at making decisions in a crisis.
What is more relevant is how the presence of automation has influenced, or might influence, the behaviour of the humans involved. The “driver” was clearly bored and not paying attention. Indeed, it’s possible the victim, having seen driverless cars whizzing past before, simply assumed the car would stop. This is similar to problems we’ve seen in aviation.
On the one hand, automation has made flying a lot safer. The “Miracle on the Hudson”, for example, owes as much to the automation on the plane in question as it does to the skill of the pilots. But while the overall accident rate has gone down, a whole new category of accidents has emerged, often caused by bored, inattentive pilots being slow to respond to a sudden crisis, or making catastrophic schoolboy errors that a more attentive pilot wouldn’t (as illustrated by Air France Flight 447). Another problem has been pilots misinterpreting the actions of the computer (or vice versa). “Mode confusion” is one example: the computer is in “landing mode” (and thus will try to land the plane), but the pilots don’t realise this, leading to the plane behaving in an unexpected way.
Driverless cars present an even more challenging situation. Pilots are trained professionals; while they are known to make mistakes from time to time, such errors are rare and rarely fatal (in part because the whole point of having two pilots up front is that they watch out for one another). And pilots usually have a block of altitude and time in which to deal with a problem. A “driver” in a driverless car is unlikely to have anything like this level of training, and will have milliseconds to react when something goes wrong.
And there’s also the legal issue to contend with. If this were a standard road accident, both driver and victim would be considered at fault (with the greater burden placed on the driver, of course). However, if the car was at fault here (to some extent), then who is liable? Uber (who were operating the car) or Volvo (who made it)? I don’t profess to know the answer, but suffice to say I suspect a number of messy legal cases will come out of this and any future incidents.
It is this human factor (and legal factor) that I see as the problem with driverless cars, not the software. And it’s why I think we might be waiting a bit longer than some expect before driverless cars take over the roads.
I’d also argue the approach being taken is incorrect. I would instead focus on using the automation first in motorway-like settings, where traffic operates according to more predictable rules. Since driverless cars can theoretically drive bumper to bumper at speeds greater than 70 mph, this would greatly reduce traffic congestion. One could see special driverless-only lanes introduced on motorways; then eventually (as more cars with this technology are put onto the roads) the entire motorway network would become driverless, with cars switching back to manual control on the slip roads. Once the technology matures enough, it could be rolled out to virtually the entire road network (at which point manually controlled cars would essentially become illegal).
However, Uber, in line with others pushing this technology, are going for a bottom-up approach instead. This is much harder to do, and according to some motoring experts it will make congestion worse, not better. Uber are doing it for the narrow financial reason that they want to eliminate their cab drivers. And they want to do it quickly, probably too quickly for the technology or wider society to adapt.
So I think this accident does show a need for a bit of a rethink of the whole situation, but it’s not necessarily a reason to abandon the technology.