February 20: The AI systems powering driverless cars are trained extensively in virtual simulations to prepare the vehicle for nearly every road event. But sometimes the car makes an unexpected error in the real world, because its training did not prepare it for that situation.
A novel model developed by MIT and Microsoft researchers identifies instances in which an autonomous system's training does not match real-world situations, with the aim of improving the safety of artificial intelligence systems such as driverless cars and autonomous robots.
For example, a driverless car that lacks the training or the sensors needed to differentiate between a large white car and an ambulance with its siren flashing may not know whether to slow down or pull over, because it does not perceive the ambulance as anything other than a big white car.
The researchers' approach first puts an AI system through simulation training. This time, however, a human closely monitors the system's actions as it acts in the real world, providing feedback whenever the system makes, or is about to make, a mistake. The researchers then combine the training data with the human feedback data and use machine-learning techniques to create a model that pinpoints the situations in which the system needs more information about how to act correctly.
Alternatively, the human can provide corrections by monitoring the system as it acts in the real world, taking control of the autonomous car whenever the system's actions are wrong. This sends a signal that the system was acting unacceptably in that particular situation.
Once the feedback from the human is compiled, the system has a list of situations and, for each situation, multiple labels saying its […]
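The labeling step described above can be sketched in code. The snippet below is a hypothetical illustration, not the researchers' actual implementation: it assumes feedback arrives as (situation, label) pairs and simply flags situations whose labels conflict, since a mix of "acceptable" and "unacceptable" labels for the same perceived situation suggests a possible blind spot.

```python
from collections import defaultdict

# Hypothetical feedback records: (situation, label) pairs from a human monitor.
# "acceptable" means the system acted correctly; "unacceptable" means the
# human intervened or flagged the action.
feedback = [
    ("white_car_ahead", "acceptable"),
    ("white_car_ahead", "acceptable"),
    ("ambulance_siren", "unacceptable"),
    ("ambulance_siren", "acceptable"),  # noisy label for the same situation
]

def find_ambiguous_situations(records):
    """Group labels by situation; conflicting labels mark a possible blind spot."""
    labels_by_situation = defaultdict(list)
    for situation, label in records:
        labels_by_situation[situation].append(label)
    # A situation with conflicting labels is ambiguous: the system sometimes
    # acted acceptably and sometimes did not in what it perceives as the
    # same state.
    return {s for s, labels in labels_by_situation.items() if len(set(labels)) > 1}

print(find_ambiguous_situations(feedback))  # {'ambulance_siren'}
```

In practice, a simple majority vote over such noisy labels would be misleading, which is why the researchers treat situations with mixed labels as candidates for more training rather than resolving them automatically.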