To improve pedestrian safety around self-driving vehicles, a new study suggests that installing robotic eyes on autonomous vehicles can influence pedestrians' decisions and make them safer.
In a study carried out by the University of Tokyo, participants were placed in a virtual reality (VR) environment and asked to decide whether to cross a road in front of a moving vehicle. The results showed that participants made safer or more efficient decisions when the vehicle was equipped with robotic eyes that either looked at the pedestrian (registering their presence) or away (not registering their presence).
What is the problem?
The major difference with self-driving vehicles, the researchers say, is that drivers may become more like passengers. They may not be paying full attention to the road, or there may be nobody at the wheel at all. This makes it hard for pedestrians to tell whether a vehicle has registered their presence, as there may be no eye contact or other signals from the people inside it.
The researchers sought a way for pedestrians to tell when an autonomous vehicle has noticed them and intends to stop. To test this, a self-driving golf cart was fitted with two large, remote-controlled robotic eyes. The researchers called it a "gazing car," reminiscent of characters from Pixar movies. They wanted to find out whether installing moving eyes on the cart would affect people's riskier behaviour: in this case, whether people would still cross the road in front of a moving vehicle when in a hurry.
The team set up four scenarios. In two scenarios, the cart had eyes and in two it didn’t. The cart had either noticed the pedestrian and was intending to stop or had not noticed them and would continue. When the cart had eyes, it would either be looking towards the pedestrian (going to stop) or looking away (not going to stop).
Since it would obviously be unsafe to ask people to decide whether to cross in front of a moving vehicle in real life (even though a disguised driver operated the cart for this experiment), the team captured the scenarios using 360-degree video cameras. The 18 participants, nine women and nine men aged 18 to 49, experienced the scenarios in virtual reality, going through the situations several times in a random order. Each time, they were given three seconds to decide whether they would cross the road in front of the cart. The researchers recorded their decisions and calculated how frequently they crossed when they ought to have waited and waited when they ought to have crossed.
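The 2x2 design and the crossing-error measure described above can be sketched as a small tally. This is an illustrative sketch only, not the study's actual analysis code; the trial records and counts below are hypothetical.

```python
# Illustrative sketch of the error tally in the study's 2x2 design.
# A trial is "dangerous" when the cart will not stop, "safe" when it will.
from dataclasses import dataclass

@dataclass
class Trial:
    cart_will_stop: bool       # cart noticed the pedestrian and will stop
    participant_crossed: bool  # participant chose to cross

def error_rates(trials):
    """Return (unsafe_rate, inefficient_rate).

    Unsafe error: crossed when the cart was NOT going to stop.
    Inefficient error: waited when the cart WAS going to stop.
    """
    unsafe = sum(t.participant_crossed and not t.cart_will_stop for t in trials)
    inefficient = sum(not t.participant_crossed and t.cart_will_stop for t in trials)
    danger_trials = sum(not t.cart_will_stop for t in trials)
    safe_trials = sum(t.cart_will_stop for t in trials)
    return (unsafe / danger_trials if danger_trials else 0.0,
            inefficient / safe_trials if safe_trials else 0.0)

# Hypothetical record of four trials for one participant:
trials = [
    Trial(cart_will_stop=True, participant_crossed=True),    # correct
    Trial(cart_will_stop=True, participant_crossed=False),   # inefficient
    Trial(cart_will_stop=False, participant_crossed=True),   # unsafe
    Trial(cart_will_stop=False, participant_crossed=False),  # correct
]
print(error_rates(trials))  # (0.5, 0.5)
```

The two error types matter separately because, as the findings below note, the eyes reduced unsafe errors for some participants and inefficient errors for others.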
Project Lecturer Chia-Ming Chang, a member of the research team, said the findings surprisingly indicated a clear difference between genders. Although other factors such as age and background may also have influenced the participants' reactions, the result is important because it demonstrates that different road users may have different behaviours and needs, which call for different communication methods in our future world of self-driving cars.
In this study, the male participants frequently chose to cross the road in a dangerous situation (i.e., when the car was not going to stop), but these errors were reduced by the cart's eye gaze. Their behaviour in the safe situations (i.e., deciding to cross when the car was about to stop) did not differ significantly, according to Chang. The female participants, on the other hand, made fewer inefficient decisions with the cart's eye gaze (e.g., choosing not to cross when the car intended to stop); for them, the dangerous situations were not much different. Overall, the trial demonstrated that the eyes made crossing safer or more efficient for everyone.
The researchers acknowledge the study's limitations: the sample size was small, and the participants played out only a single scenario. Additionally, decisions made in virtual reality may differ from those made in the real world.
However, as the shift from manual to automatic driving is a significant step, people have yet to get used to it. According to Igarashi, the robotic eyes coupled to the self-driving AI will eventually be controlled automatically rather than manually, which will allow the system to adapt to various scenarios. The researchers hope this study inspires others to test related concepts, anything that enables better interaction between autonomous vehicles and pedestrians, and ultimately saves lives.
(With inputs from ANI)