Waymo has been at the cutting edge of autonomous vehicle development for years. It has operated a publicly available driverless shuttle service in Arizona for the past couple of years, and it has expanded its testing programs to other cities, including San Francisco.
If you’ve never driven in San Francisco, it’s not a fun place to try to get around in a car. The streets are often narrow, the intersections can be weird, pedestrians are everywhere, and the constant elevation changes can make visibility a nightmare. In short, it’s a great place to test the efficacy of a self-driving AI like the Waymo Driver. Now, in a blog post published on Thursday, we’re getting a look at just what the Waymo Driver sees as it moves around the city.
A big reason that Waymo has been so successful with its testing programs is the sheer number of miles it has driven, both in the real world (20 million on public roads!) and in simulation. This has produced a well-trained AI with a deep well of situations it can draw on to make split-second decisions in a place like San Francisco without relying on human intervention.
The really neat thing is that the vehicle’s view is not wildly different from the camera view. The system takes the data from cameras, lidar and radar and turns it into a scene where individual vehicles and pedestrians are recognizable as such. That’s a big step up from the vague-looking point clouds we’ve seen in the past.
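Waymo hasn’t published the details of how its perception stack turns raw sensor returns into labeled objects, but the basic idea of going from a point cloud to discrete "things" can be illustrated with a toy example: group nearby lidar points into clusters, then wrap each cluster in a bounding box. Everything below (the threshold, the sample scan, the greedy clustering) is a simplified illustration, not Waymo’s actual method.

```python
import math

def cluster_points(points, threshold=1.5):
    """Greedily group 2D points: a point joins the first cluster that
    already contains a point within `threshold` meters (toy single-link
    clustering; real systems use far more robust methods)."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(math.dist(p, q) <= threshold for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])  # no nearby cluster, start a new object
    return clusters

def bounding_box(cluster):
    """Axis-aligned box (min_x, min_y, max_x, max_y) around a cluster."""
    xs = [p[0] for p in cluster]
    ys = [p[1] for p in cluster]
    return (min(xs), min(ys), max(xs), max(ys))

# A fake lidar scan with two well-separated groups of returns:
scan = [(0.0, 0.0), (0.5, 0.2), (0.3, 0.4), (10.0, 10.0), (10.4, 9.8)]
clusters = cluster_points(scan)
boxes = [bounding_box(c) for c in clusters]  # two boxes, one per "object"
```

From here, a real perception system would classify each box (car, pedestrian, cyclist) using camera and radar data fused with the lidar geometry, which is what makes the rendered view legible rather than a fog of points.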
We’re still likely a long way off from widely commercially available Level 4 or Level 5 autonomy. Still, the progress Waymo has made in recent years to arrive at the point where it can confidently send test vehicles into San Francisco is pretty staggering. We’re looking forward to seeing what comes next from Alphabet’s autonomous vehicle firm.