“Can We Get Serious Now?” That’s how I feel about many of the optimistic claims made about autonomous vehicle (AV) technology.
“Can we get serious now?”
This is a line from one of the most emotionally charged scenes in the movie “Sully” featuring Tom Hanks portraying Capt. Chesley “Sully” Sullenberger. I’ve been thinking about those words a lot just recently, because that pretty much sums up how I feel about many of the claims made about autonomous vehicle (AV) technology.
Can we dispense with the magical thinking and groupthink delusion that autonomous vehicles (AVs) are going to “solve” traffic deaths any time soon? Can we cease with the BS that the problem is human error and the solution is AI and deep learning? Or that the missing link for AVs is more education?
Let’s consider a few relevant statistics:
- Annual vehicle miles traveled last year totaled more than three trillion in the U.S. and over ten trillion globally.
- More than 1.3 million people die on our roads and highways globally each year, with many times that number seriously injured.
- Light vehicles in operation total more than 1.5 billion globally, about 90% of which have no assisted or automated driving features whatsoever.
Yes, humans do suck at driving. But humans also drive a lot, and the idea that replacing human drivers with AVs will result in road deaths magically dropping to zero isn’t education. It is propaganda.
After more than a decade of development and the incineration of tens of billions of VC dollars, the AV industry today proposes a range of ideas, including last-mile delivery, autonomous trucks, fixed-route AVs and robo-taxis. When we ask “Who needs AVs?”, the answer still appears to be “Who knows?” Reality seems a very long way from the promised land of saving lives.
Jensen lays out his chips
If the dream of “self-driving” technology for the masses started in January 2009 as Google’s Self-Driving Car Project, then it ended on May 14, 2020 at Nvidia’s GPU Technology Conference (GTC) when CEO Jensen Huang announced a windshield NCAP chip for ADAS.
During the GTC keynote, presented this year from Jensen’s kitchen, we learned that the mighty Nvidia, the greatest of the great L4 autonomous driving advocates, has finally discovered ADAS. Auto executives at competitors including Xilinx, Mobileye (Intel), Renesas, TI, Toshiba, and NXP — many of which have a ten-year head start in ADAS — must be rolling on the floor laughing at this generous serving of volte-face à la Jensen.
While Nvidia may not yearn to be a commodity camera chip supplier, that is where the mass market demand lies and hence that is where any company seeking to succeed must position its products. It turns out the incredible Xavier SoC is just too incredible for most mainstream automakers — a fact EE Times readers knew about almost a year ago.
Evidence that the mass-market “self-driving” dream was ending had been mounting since CES in January. ZF CEO Wolf-Henning Scheider stated that he saw no viable business case for private ownership of AVs, with ZF focusing on ADAS at L2 and L2+ (“hands-free” highway assist) for privately-owned cars and on autonomous driving at L4 only for commercial fleet and transit applications.
I have a lot of respect for Nvidia and view its GPUs, AI and deep learning technologies as incredible. However, in automotive, Nvidia has demonstrated a mixture of naivety and arrogance, engaging in too much educating and too little listening — although, in the interests of fairness, that is an accusation that can be leveled at many other AV tech companies too.
In automotive, success comes from being close to and cooperating with influential bodies such as New Car Assessment Programs (NCAP); understanding that new technologies are always introduced slowly and cautiously; a willingness to openly embrace collaboration with competitors and academia to raise safety standards; and knowing that the automakers expect their semiconductor, software and IP suppliers to keep their heads down and their mouths shut. This just isn’t part of Nvidia’s DNA – as GTC and many recent CES keynotes demonstrate.
Few organizations are more serious about road safety than Euro NCAP. The publication of its 2025 roadmap “In Pursuit of Vision Zero” in September 2017 showed a clear path towards an “NCAP Trinity” of autonomous emergency braking, lane-keep assistance, and vision-based driver monitoring for mitigation of driver distraction and fatigue. This is where the mass market is heading, and the combination of the three systems working together is what I call driver monitoring and assistance.
As Nvidia was the last to figure out, the near-term future of driving in the mass market is a human/machine collaboration, with the role of machine intelligence being to make human drivers safer. By comparison, AV technology — at least in volumes sufficient to bring about a meaningful reduction in road deaths — is a long-term endeavor. Sully’s experience perfectly highlights the drawbacks of taking humans out of the loop altogether.
No-one has ever trained for an incident like that
Sully: “We’re gonna be in the Hudson”
LaGuardia Departure Control: “I’m sorry, say again”
We know the story of Sully landing on the Hudson River in January 2009 — coincidentally the same month Google started working on self-driving. A key factor enabling him to keep control of the aircraft was his situational awareness and instinctive decision to start the auxiliary power unit (APU), thus maintaining supply of electrical power to the “by-wire” systems following failure of both engines.
The time-stamped transcript of events recorded in Sullenberger’s book “Highest Duty” shows that both engines were lost five seconds after the bird strike, and that his decision to start the APU came just six seconds after that. The plane was landed successfully three minutes and 20 seconds later. The lives of 150 passengers and five crew were saved because Sully improvised — he eyeballed it.
Common sense suggests that it is implausible to train a machine for every possible eventuality — as evidenced by Boeing’s MCAS and Tesla’s Autopilot — and every human driver understands the necessity of improvisation when faced with unpredictable and uncertain events. We’ve all eyeballed it.
Jensen is a salesman, and of course he will present the achievements of Nvidia and the possibilities of AI and deep learning as positively as possible. Yes, AV technology is coming, but neither as quickly nor as universally as the AV tech suppliers would like you to believe – and certainly not in mass-market, series-production vehicles anytime soon.
But this is all just my opinion. Don’t listen to me or any other AV tech exec. If you are now getting serious about safety, listen to an impartial expert like Missy Cummings (@missy_cummings), or even Sully himself (@Captsully).
— Colin Barnden is principal analyst at Semicast Research.