Relative to human motorists, the driverless vehicles now undergoing testing on public roads are overly cautious, maddeningly slow, and prone to abrupt halts or bizarre paralysis caused by bikers, joggers, crosswalks or anything else that doesn’t fit within the neat confines of binary robot brains.
Self-driving companies are well aware of the problem, but there’s not much they can do at this point. Tweaking the algorithms to produce a smoother ride would compromise safety, undercutting one of the most-often heralded justifications for the technology.
It was just this kind of tuning to minimize excessive braking that led to a fatal crash involving an Uber Technologies Inc. autonomous vehicle in March, according to federal investigators. The company has yet to resume public testing of self-driving cars since shutting down operations in Arizona following the crash.
If driverless cars can’t be safely programmed to mimic risk-taking human drivers, perhaps they can be taught to better understand the way humans act. That’s the goal of Perceptive Automata, a Boston-based startup applying research techniques from neuroscience and psychology to give automated vehicles more human-like intuition on the road. The question driving its work: can software be taught to anticipate human behavior?
“We think about what that other person is doing or has the intent to do,” said Ann Cheng, a senior investment manager at Hyundai Cradle, the South Korean automaker’s venture arm and one of the investors that just helped Perceptive Automata raise $16 million. Toyota Motor Corp. is also backing the two-year-old startup founded by researchers and professors at Harvard University and Massachusetts Institute of Technology.
“We see a lot of AI companies working on more classical problems, […]