Every year, roughly twenty million people worldwide die or suffer serious, permanent injury in traffic accidents. A McKinsey study estimates that self-driving cars could cut this number by 90%. That’s 18 million people spared annually, plus $190 billion in costs saved in the United States alone. So why aren’t we moving faster toward autonomous vehicles?
The consensus is that self-driving cars are not ready, that their driving ability is not up to par with humans. And given all the variability in the system, how could a machine compete? Think about all the unexpected events and odd driver behaviors on the road. How will a machine react to the unexpected? That’s why we need human drivers, right? Wrong! Ironically, the best way to reduce variability in the system is to ban human drivers. Why? We just need to look at what happened with Tesla’s Autopilot.
Cruising down the highway in his Tesla, a man set the car to Autopilot because he was busy watching a Harry Potter movie. (No joke.) Ahead of him was a truck with a trailer, but the man was too engrossed in his movie to notice that he was speeding right for it. Worse, the Autopilot never noticed it either. The car plowed through, ripping the top off the Tesla. How on earth did this happen? The trailer was a whitish-grey color, and it was a cloudy day. To the Tesla’s cameras, the trailer blended into the background: it was essentially invisible to the Autopilot. That’s why the car never stopped.
This was a hard lesson for autonomous vehicle manufacturers. As humans, we rely on our eyesight, so the first vehicles relied on computer vision. However, machines need not be limited the way we are. A self-driving car can process radar data, LIDAR data, information from IoT sensors in the road, traffic data from other cars, audio (imagine hearing a child about to run into the street before you ever see them), GPS information, and (yes, still) camera data.
With these capability improvements, we have seen the expansion of autonomous vehicles. Singapore, for example, has been using self-driving taxis and buses for nearly three years. Before Covid struck, several autonomous car manufacturers were approaching Level 4 autonomy in China, which has some of the most complex traffic systems and a large amount of erratic driver behavior. In fact, the technology was considered so reliable that the Chinese government was seriously considering legalizing these self-driving cars for regular use. Even California has cleared autonomous vehicles for use on its freeways. Why so much trust and faith in these vehicles?
Self-driving cars process thousands of real-time data points every second. In addition, they don’t get distracted. They’re not wondering what’s happening on a project, what to make the kids for dinner, or why the person ahead keeps swerving out of their lane. In essence, autonomous vehicles reduce the variability on the road. And if they create less variability, then human drivers become the primary source of trouble. That’s why the question is shifting from when we legalize self-driving cars to when we ban human drivers.
Many people believe we should only legalize self-driving cars once they can operate perfectly. This is an impossible goal. Machines, computers, AI, call it what you like, will never achieve perfection, and perfection should not be the goal. Instead, we should focus on effectiveness. There’s a lot of attention on the one-in-a-billion event where a human might statistically do better at avoiding an accident, and too little focus on the frequent, avoidable causes of auto injuries and fatalities that affect 18 million people each year. As a society, we need to ask ourselves which consideration matters more. The real question, then, is when do we ban human drivers?