Teaching Self-Driving Cars to Watch for Unpredictable Humans

Photograph: David McGlynn/Getty Images

If you happen to live in one of the cities where companies are testing self-driving cars, you’ve probably noticed that your new robot overlords can be occasionally nervous drivers. In Arizona, where SUVs operated by Waymo are sometimes ferrying passengers without anyone behind the steering wheel, drivers have complained about the robot cars’ too-timid left turns and slow merges onto the highway. Data compiled by the state of California suggests that the most common self-driving fender-benders are rear-end crashes, in part because human drivers don’t expect autonomous cars to follow road rules and come to complete, non-rolling stops at stop signs.

As for human drivers, some are nervous and scrupulous, others are definitely not. In fact, it’s even more complex: Some drivers are careful in some moments, and hard-charging in others. Think: Casual Sunday drive to the grocery store versus racing to get the kid before the daycare late fees kick in. Robot cars might be smoother, and might make better decisions, if they knew exactly what sort of humans they were driving around.

Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory and Delft University of Technology’s Cognitive Robotics lab now say they’ve figured out how to teach self-driving vehicles just that. In a recent paper published in the Proceedings of the National Academy of Sciences, they describe a technique that translates sociology and psychology into a mathematical formula self-driving software can use to tell the road ragers from the rule followers. Vehicles equipped with the technique can differentiate between the two in about two seconds, the researchers say, and use that information to decide how to proceed on the road. The technique improves self-driving vehicles’ predictions about human drivers’ decisions, and therefore their on-road performance, by 25 percent, as measured by a merging test in a computer simulation.

The idea, the researchers say, is not just to create a system that can differentiate “egoistic” drivers from “prosocial” drivers (the selfish ones from the generous ones). The scientists hope to make it easier for robots to adapt to human behavior, and not the other way around.
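The formula the work builds on comes from social psychology: a measure called social value orientation, which captures how heavily a driver weighs their own payoff against another driver’s. The following is a minimal, hypothetical sketch of that weighting idea, not the paper’s actual implementation; the function names, the grid search over angles, and the toy merge payoffs are all illustrative assumptions.

```python
import numpy as np

def svo_utility(own_reward, other_reward, svo_angle):
    """Blend a driver's own reward with another driver's reward.

    An angle near 0 models an egoistic driver (own reward only);
    an angle near pi/4 models a prosocial driver (both weighted roughly equally).
    """
    return np.cos(svo_angle) * own_reward + np.sin(svo_angle) * other_reward

def estimate_svo(own_rewards, other_rewards, chosen,
                 candidates=np.linspace(0.0, np.pi / 2, 91)):
    """Return the candidate SVO angle that best explains observed choices.

    own_rewards, other_rewards: arrays of shape (n_decisions, n_options)
    giving each option's payoff to the driver and to the other car.
    chosen: array of shape (n_decisions,) with the index of the option
    the human driver actually picked (e.g. yield vs. cut in while merging).
    """
    best_angle, best_score = 0.0, -np.inf
    for angle in candidates:
        utilities = svo_utility(own_rewards, other_rewards, angle)
        # Fraction of decisions where this angle's preferred option
        # matches what the driver actually did.
        score = np.mean(utilities.argmax(axis=1) == chosen)
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

if __name__ == "__main__":
    # Toy example: two merge decisions, each with options [yield, cut in].
    own = np.array([[1.0, 3.0], [1.0, 3.0]])    # cutting in pays the driver more
    other = np.array([[3.0, 0.0], [3.0, 0.0]])  # yielding pays the other car more
    aggressive = estimate_svo(own, other, np.array([1, 1]))  # always cuts in
    courteous = estimate_svo(own, other, np.array([0, 0]))   # always yields
    print(f"aggressive driver angle: {np.degrees(aggressive):.0f} deg")
    print(f"courteous driver angle:  {np.degrees(courteous):.0f} deg")
```

In this toy setup, a driver who always cuts in comes out with an angle near zero (egoistic), while one who always yields gets a larger angle (prosocial); a handful of observed decisions is enough to separate the two, which is the spirit of the roughly two-second classification the researchers describe.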


Source: www.wired.com
