In baking hot San Jose, California, two men are lurking in the bushes at the edge of a deserted car park, waiting for a self-driving car.
They aren’t normal pedestrians, or autonomy superfans. They are employees of the start-up Voyage – and they are going to test the car’s ability to stop for people in its path.
This testing ground at the southern edge of Silicon Valley is used by the company, run by 30-year-old British entrepreneur Oliver Cameron, to make sure the cars behave as expected in different scenarios before they are deployed to retirement villages, where they serve as transport for elderly residents.
Voyage believes retirement villages are the perfect place for the initial rollout of self-driving cars. There are no children, few residents drive, and those who do usually stay under the speed limit.
Jared Aguayo and Lucky Lui are operations specialists at Voyage. They don’t just hide in bushes and walk out in front of driverless cars for a living; they’re part of a team setting up this human obstacle course to make sure the vehicles are safe.
“It’s a real, genuine skill to be able to stress test these vehicles to the max, because you get a feel for the cars, and you get a feel for when you’re close to the edge,” says Cameron.
Safety is an obsession for the self-driving industry, following the death of a pedestrian who was hit by an Uber car in Arizona last year. A similar accident, says Cameron, would end his company, which was founded in 2017 and has raised more than US$20m in funds from venture capitalists.
Sitting in the back of one of the cars, nicknamed Maggie (all of them are named after characters from The Simpsons), we set off on our obstacle course with no one behind the wheel.
Eric Gonzalez, the director of engineering, sits in the front alongside a computer that shows the location of potential hazards nearby as they are detected by the car’s Lidar, radar and cameras.
Our car waits patiently for a golf cart, driven by two employees, to speed past, then stops at an invisible crossing to let them walk across its path.
We round a corner to see a human-sized object wobble across the road. It looks like a giant black bin on wheels, or a rubber Dalek, but it is actually a tackling robot, created for American football players to practise their skills, remote-controlled from a nearby pavement by Jesse Clifton, the dispatch lead.

Everybody is getting in on autonomy: General Motors and Honda have teamed up on GM’s self-driving offshoot, Cruise. Other companies, such as Uber, which has its own fake “city” that it uses as a testing ground, use bespoke dummies to test their systems, but these cost “literally millions of dollars”, says Cameron.
As a stand-in for a human, the tackling dummy lets them push the cars further, narrowing the time they have to detect an obstacle and stop without putting any workers in harm’s way. Voyage has a three-stage testing process, which begins with testing updates to the self-driving algorithm in a computer simulation.
Then the team moves on to this car park obstacle course, to make sure the cars behave the same way in the real world. After that, the computer system that drives the cars in the company’s two locations, retirement villages in Florida and nearby in San Jose, is updated with the changes.
While the cars in this car park operate completely without drivers, in the real world someone sits behind the wheel, poised to take over if something goes wrong, while an operator in the front passenger seat notes down any unexpected events.
If anything happens that they haven’t allowed for, the algorithm is tweaked and put back into simulation, and the process begins all over again.
Self-driving cars have historically been stymied by two particular problems. One is unprotected left turns (or right turns, in NZ), where a car must turn across a flow of oncoming traffic with no right of way; robot drivers have struggled to be assertive enough to find a gap. The other is double-parking, where a car stops in the middle of the road and following cars must pull into the oncoming traffic lane to get around it. Navigating that one requires briefly breaking the rules, something cautious, safety-conscious autonomous cars struggle to do.
Are self-driving cars always going to be too polite to efficiently navigate our messy, human cities? Cameron thinks that bad driving will instead become an “antiquity”, with humans behaving more like robots.
“It’ll just be that you drive like a self-driving car, which obeys rules. And it’s actually incredibly efficient, because it knows exactly the right path to get you there fastest.”
Halifax-born Cameron, a self-described “Apple fanboy” as a teenager who came to Silicon Valley at the age of 21, has a team made up of alumni from Tesla, Waymo (Google’s spin-off self-driving car project), Uber, and Cruise (General Motors’ autonomy arm), among others.
Drew Gray, the company’s chief technology officer, was previously Uber’s director of engineering and, some years before that, the first hire for Tesla’s Autopilot team, where he became a senior engineering manager.
He says self-driving car companies are shifting away from simply trying to collect as many miles as possible, a strategy that was popular in the early days of the industry.
What, then, does he make of his former boss Elon Musk’s claim that Tesla is light years ahead of everyone else because of the millions of miles his customers cover by using Autopilot in their cars every day?
“Data has never been a bottleneck,” he says. “Talking about having more data in no way equates to making it better. And in a world of infinite edge cases, seeing a million out of infinity or 10 million out of infinity is about the same thing.
“So if you’re building software that handles edge cases, one by one like that, then you have an infinite tail to handle. So a lot of these points don’t make a lot of sense to me. I don’t know why he says them.”
And how do the employees feel about being human guinea pigs for the system? Do they ever feel nervous about crossing in front of one of the cars?
“When I first started,” says Aguayo. “But you get used to it after a while. You just know how the vehicle reacts.”
Increasingly, they are more concerned about the flaws of human drivers, such as people looking at their phones instead of the road. Cameron says many of their elderly customers feel similarly, and are nervous about driving in the evenings because of the fallibility of their fellow human drivers. “I’m very much more of a defensive driver now, and more reserved than when I was younger,” says Clifton. “And I feel like part of that is working in this industry.”
The next step for all self-driving cars is to learn context. The tiny judgments that humans make every day (someone standing reading at a bus stop is unlikely to cross the road; someone standing on the edge of a pavement and looking left and right is very likely to) are yet to be mastered by computers.
“Slowly, these changes are happening,” says Gray. “[Perhaps] they did glance at the car, and you know that they’re aware of you.
“So if they’re not moving, they’re likely not going to move. Or they haven’t glanced at the car yet. Maybe they’re going to move because they don’t even know we’re here. That’s the kind of the bleeding edge stuff right now.”
Even without trying, they’ve had plenty of weird and wonderful things to test them. Mountain lions have been spotted at the California testing base. Wild turkeys have also been seen in the San Jose retirement village.
How does the car react if its choice is between hitting a human or a mountain lion? Or two different types of humans (the age-old philosophical “trolley problem”)?
Unlike human drivers, the cars don’t become distracted, and they currently travel at a maximum of 25 miles per hour, so Cameron argues they would never be faced with such an impossible choice. Instead, the car would simply stop.
Besides, the computer doesn’t know the difference. “I don’t believe anyone has that level of prioritisation,” he says. “Machines aren’t that intelligent yet. The counter, I would say, is just to ask ourselves, how often in the history of humanity, have we ever been faced with that sort of perilous decision – there’s a grandma here, and a mother with a baby here?”
They are testing for some interesting eventualities, though. Later that day, staff plan to heat part of the computer up to 89C to see how it copes at the very upper end of its temperature range.
The next day, Cameron emails to say it handled the eye-watering heat “gracefully”.
The Telegraph, London