In the aftermath of the first fatality involving an autonomous vehicle, The Future Laboratory co-founder Martin Raymond argues that human interference is the biggest challenge facing self-driving car manufacturers.
Globally, around 1.3m people die each year in road crashes. In the US, 94% of car accidents are linked to poor decision-making or human error – low levels of vigilance, lapses in focus, or perception blindness. All three of these played a part in the death of Elaine Herzberg, the first pedestrian to be killed by an autonomous vehicle, on 18 March 2018.
As dashcam footage shows, the ‘safety driver’ looks away from the windscreen as Herzberg appears to look straight ahead, while Uber’s Lidar technology, which uses lasers to detect and scan front and lateral movement, fails to notice her. The perfect storm, in other words, and one where yet again, human error – an inattentive driver, an unseeing pedestrian – played a significant part in the fatality.
But look at what’s happened since. The usual luddites have called for tests to be abandoned, for manual overrides to be made mandatory, and yes, you guessed it, for human drivers to be present at all times to intervene quickly in the event of a potential accident. But this is exactly what humans are incapable of doing. This was one of the key motivators behind introducing autonomous driving systems in the first place – to cut down on the number of preventable deaths caused by texting, drinking, drugs, fatigue and distraction; things autonomous systems, because they are autonomous, will never do.
‘But won’t robots make some very cold and logical decisions when it comes to deciding who will live or die on our future roads?’ I hear the same luddites say. The simple answer is that we don’t know, yet. But thanks to The Moral Machine, we do know what 4m humans globally would do when faced with a range of ‘trolley dilemma’ situations, which driverless cars will increasingly encounter.
We should be moving towards removing humans from behind the wheel altogether, and programming cars not to be better people, but to be more effective, less emotional machines
The majority of respondents said that they would swerve to avoid a young pedestrian, even if it meant killing an old-age pensioner. Worryingly, the results suggest that the older the pedestrian, the more disposable respondents considered them to be. Similarly, if one person was crossing the road legally and two illegally, 50% said they would swerve to avoid the one crossing legally, even if it meant hitting and killing the other two. Larger people were more likely to be sacrificed when slimmer people were also at risk, cats were more at risk than dogs, and non-pregnant women were more at risk than their pregnant counterparts.
Meanwhile, in Germany an executive at Mercedes-Benz caused a media furore when they suggested that autonomous Mercedes vehicles should look to save their own passengers, regardless of who is crossing the road. So what does all of this prove? Simply put, humans really aren’t the best drivers, or the best judges of character when it comes to matters of safety or morality, so why penalise driverless cars for not thinking and acting like people?
Surely, we should be moving towards removing humans from behind the wheel altogether, and programming cars not to be better people, but to be more effective, less emotional and non-judgemental machines. Even at their most logical, rational and inhuman, do we really believe that autonomous vehicles, once their programming glitches have been fixed, will cause the deaths of millions of people each year because they get distracted by another vehicle, fail to notice a turning cyclist, or decide mid-drive to send a text message? I doubt it. On the other hand, put a human behind the wheel, give them an override feature and watch how they perform in the scenarios posed by The Moral Machine. If you’re slim, young, pregnant, or a dog, you may have a chance, but for higher survival rates across the spectrum, driverless cars seem to be the logical way forward.
For more on how businesses should act in an age of moral uncertainty, read our Morality Recoded macrotrend.