There is a lot of chatter right now about self-driving vehicles, which is understandable, as we stand on the threshold of an entirely new world of driving. It’s not just the dream of self-driving vehicles, but also of green cars and trucks that glide silently without emitting any climate-threatening pollutants.
Cutting-edge electric vehicle (EV) and autonomous vehicle (AV) technologies are virtually symbiotic, with a great deal of collaboration between the two emerging sectors. Yet EVs are far ahead of autonomous driving in terms of fulfilling their vision in the real world. And it’s not just because people are gun-shy about riding in a driverless car. An empty driver’s seat may be all well and good on a straight road with little traffic and no sudden surprises. But in real-life traffic, there is still no substitute for a hands-on driver. A human behind the wheel can see and properly respond to unexpected problems; autonomous technology has a long way to go before it can match wits with a human in such situations.
For the time being, self-driving technology falls short of the vision. Among the stumbling blocks are imperfect sensors, the lack of mathematical models that can successfully predict the behavior of every road user, the high cost of collecting enough data to train algorithms, and, of course, passenger inhibition. (When elevators were introduced in the 1850s, it took years for most people to overcome their fear of the new contraptions.)
Today’s self-driving vehicles operate on a complex marriage of camera, lidar, radar, GPS, and direction sensors. Combined, these are expected to deliver an all-situation, all-weather answer, enabling an AV to see everything, anticipate everything, and guarantee safe delivery to one’s destination.
Only they don’t. And they can’t.
Poor visibility, temporary detours, and lane changes are challenges that current technology cannot handle well enough to provide an all-weather, all-situation solution. Today’s on-board computers, algorithms, and sensors are simply not good enough to deliver the instant, and possibly life-saving, reactions that driving demands.
While AVs have learned to recognize a fixed traffic light and a moving pedestrian, they cannot yet adapt to all new situations, especially at high speeds and in complex urban environments.
It’s no wonder that government regulators are reluctant to clear AVs for prime time. There is just not enough data to ensure confidence in an AV’s ability to prevent damage, injury, and death. Understandably, consumer confidence is not there yet, either.
These are just some of the reasons why teleoperation is necessary to make the era of AVs practical. Simply put, teleoperation is the technology that allows one to remotely monitor an AV, take control as needed, and solve problems quickly and remotely.
With teleoperation, a single controller positioned at a distance from, say, a fleet of robo-taxis can observe each vehicle in real time and, as necessary, override its autonomy or provide much-needed input. When the problem is solved, the AV continues on its autonomous way. In effect, the remote “driver” takes over or issues commands only when human intervention is needed, and he or she can monitor and handle a number of vehicles simultaneously. The teleoperator is also there to speak with passengers who might wonder why the AV is taking a few extra seconds to cross an intersection.
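The take-over and hand-back flow described above can be sketched as a tiny state machine. This is only an illustration under assumed names (`VehicleSession`, `request_override`, and so on are hypothetical, not the API of any real teleoperation product), and it omits the handshakes, safety checks, and redundant links a production system would need:

```python
from enum import Enum, auto

class Mode(Enum):
    AUTONOMOUS = auto()  # the vehicle's own stack is driving
    REMOTE = auto()      # a human teleoperator has taken over

class VehicleSession:
    """One vehicle's control state, as seen from the remote center.

    Hypothetical sketch: captures only the take-over / hand-back
    flow, not the safety machinery around it.
    """

    def __init__(self, vehicle_id: str):
        self.vehicle_id = vehicle_id
        self.mode = Mode.AUTONOMOUS

    def request_override(self) -> None:
        # Operator intervenes, e.g. when the AV stalls at a detour.
        self.mode = Mode.REMOTE

    def release(self) -> None:
        # Problem solved: hand control back to the autonomy stack.
        self.mode = Mode.AUTONOMOUS

# One operator watching a small fleet; only the stuck vehicle needs help.
fleet = {vid: VehicleSession(vid) for vid in ("taxi-1", "taxi-2", "taxi-3")}
fleet["taxi-2"].request_override()
overridden = [s.vehicle_id for s in fleet.values() if s.mode is Mode.REMOTE]
print(overridden)  # ['taxi-2']
```

The key design point is that override is the exception, not the rule: the default mode is autonomous, and the operator's attention is a shared resource spread across the fleet.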
Problem solved? Not so fast.
Because not all teleoperation technology is created equal. Guiding a vehicle remotely requires the ability to transfer information between the vehicle and the teleoperation center with as close to zero delay as possible. Continuous and reliable two-way data streaming, regardless of changing network conditions, is absolutely critical, and a big challenge in its own right. Yet neither 4G LTE nor Wi-Fi is equipped to support such high-bandwidth, low-latency communication, especially from a vehicle in motion. It will be a while before 5G becomes the universal standard, and even with that network upgrade the challenges will remain.
Another obstacle is one that’s measured in milliseconds. Even with the strongest data connection, there is still a split-second gap between what is happening on the road and what the teleoperator sees, and that gap drastically affects the operator’s ability to react.
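To put that split second in concrete terms, a rough back-of-the-envelope calculation (illustrative numbers, not measured figures for any real network) shows how far a vehicle travels while video frames and commands are in flight:

```python
def distance_during_delay(speed_kmh: float, delay_ms: float) -> float:
    """Metres a vehicle travels during a given network delay.

    Illustrative only: converts km/h to m/s (divide by 3.6) and
    multiplies by the delay in seconds.
    """
    return (speed_kmh / 3.6) * (delay_ms / 1000.0)

# At an urban 50 km/h with an assumed 200 ms round trip, the scene the
# operator is reacting to is already nearly 3 metres stale.
print(round(distance_during_delay(50, 200), 1))  # 2.8
```

Even these made-up but plausible figures show why shaving tens of milliseconds off the video and command path matters as much as raw bandwidth.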
The human factor (or UX, for user experience) is also an issue. Different teleoperators perceive driving environments differently, and differently from an in-vehicle driver. It is not enough to simply receive a video feed and issue commands. Tools that build situational awareness are necessary: for example, an overlay that draws on the vehicle’s sensor systems and translates their data into recommended decisions.
The final bogeyman is hackers, the devils who delight in throwing a virtual spanner in the works. When a bank account is hacked, money is lost. When a teleoperation post is hacked, lives can be lost, as both safety and integrity are compromised.
Clearly, all of these issues require solutions that depend on deep innovation in multiple technologies. Details on the teleoperation solution will be explored in succeeding articles. Stay tuned.