
Why high-definition maps are key to autonomous driving

With relatively constrained, predictable drive paths, trucking presents the nearest-term viable opportunity for autonomous-vehicle, or AV, technology. Trucks pose an easier challenge than passenger cars, which must be able to travel widely through complex road networks to reach economic viability.

Despite all the attention, a key component of autonomous truck technology has been somewhat overlooked, one that's both time-tested and cutting-edge: maps. High-definition, or HD, maps, to be exact. These critical, informationally rich datasets form the navigational foundation for much of a self-driving vehicle's core driving functions by providing an accurate, detailed representation of the road environment.

Structurally speaking, all HD maps have three primary layers of information: First, the actual 3D representation of the road and its related features and furniture—think stop signs, traffic signals, lane markings, crosswalks, curb heights and so on. All of this is captured with centimetre-level accuracy. Second, an interpretive layer that tells an AV what each such sign, light and marking means. Generally, this is termed the “semantic” layer. And third, a vector layer that outlines the optimal drive paths, essentially providing “virtual rails” for the AV to follow. The quality, format and detail of the data may vary from provider to provider, but this basic architecture remains the same.
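
To make this three-layer architecture concrete, here's a rough sketch of how it might look as data structures. This is a simplified, hypothetical Python schema; the class and field names are illustrative, not any provider's actual format.

from dataclasses import dataclass, field

@dataclass
class GeometricFeature:
    """A 3D road feature, captured with centimetre-level accuracy."""
    feature_type: str                          # e.g. "stop_sign", "lane_marking", "curb"
    points: list[tuple[float, float, float]]   # (x, y, z) coordinates in metres

@dataclass
class SemanticAnnotation:
    """The interpretive layer: what a feature means to the vehicle."""
    feature_id: int
    meaning: str                               # e.g. "must_stop", "no_left_turn"

@dataclass
class DrivePath:
    """The vector layer: 'virtual rails' for the AV to follow."""
    waypoints: list[tuple[float, float]]
    speed_limit_kph: float

@dataclass
class HDMapTile:
    """One tile of an HD map, bundling the three core layers."""
    geometry: list[GeometricFeature] = field(default_factory=list)
    semantics: list[SemanticAnnotation] = field(default_factory=list)
    drive_paths: list[DrivePath] = field(default_factory=list)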

Where things get interesting, and where map makers really prove their mettle, is when you start looking beyond this common core. Here, two things stand out: the additional information that can be superimposed atop these basic layers, and the frequency at which the map as a whole gets updated. This is where the power of HD maps is really unlocked, where they go from a “simple” tool that helps an AV understand its position to a powerful lens through which an AV can “see” and “anticipate” the world beyond its horizon. In a sense, an HD map becomes a fourth sensor, one that adds to the traditional complement of LiDAR, radar and cameras found on most autonomous vehicles. The infinite sightline of this “fourth eye” holds special import for the world of autonomous trucking.

PATH PLANNING

It’s almost axiomatic that efficient routing is mission critical for the trucking industry. Poor routing wastes time, and time is money. Traffic delays cost the industry upwards of $75 billion, and weather-related delays cost companies an additional $2 billion to $4 billion. Route planning is thus one of the core functions autonomous trucks must perform. To help find the quickest path from point A to point B, most HD maps incorporate some form of traffic-flow data. Flow, however, provides only part of the picture. To really understand the road ahead, you need an additional layer of information: the underlying traffic events that cause changes in traffic flow. Here, HD maps have historically fallen short. At best, the big players have Waze-like data, basic incident feeds that lack both the accuracy and the detail necessary to support autonomous path planning.
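
To see why this matters, consider a toy version of the routing problem: a small road graph searched for the quickest path, with each segment's travel time scaled by live traffic flow. Everything here, the graph, the numbers, the function name, is illustrative.

import heapq

# Road graph: graph[node] = [(neighbour, base_minutes, flow_factor), ...]
# A flow_factor above 1.0 means traffic is currently slowing that segment.
graph = {
    "A": [("B", 10, 1.0), ("C", 12, 1.0)],
    "B": [("D", 15, 2.5)],   # a slowdown has been reported on B -> D
    "C": [("D", 20, 1.0)],
    "D": [],
}

def quickest_route(start, goal):
    """Dijkstra's algorithm over flow-adjusted travel times."""
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nbr, base, flow in graph[node]:
            heapq.heappush(queue, (cost + base * flow, nbr, path + [nbr]))
    return float("inf"), []

print(quickest_route("A", "D"))  # (32.0, ['A', 'C', 'D']): flow data alone reroutes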

This data deficit is especially pronounced in light of the size and complexity of something like a Class 8 truck. Lane width, signal distances, turn angles, the presence of auxiliary lanes, and height, weight and cargo restrictions can have a massive impact on a truck's ability to operate safely. Only by understanding the root event, and that event's impact on each of these variables, can an autonomous truck assess the driveability of a given section of road. To illustrate: if an autonomous truck knows only that there's a traffic slowdown (i.e., traffic flow), it may continue down its planned route if no faster alternatives are available. Even if the truck knows that a grisly accident is causing the slowdown (i.e., the traffic event), it will likely stay the course. Time is money, right? Only if the autonomous truck knows the full picture, that this grisly accident narrows the road such that a required turn is no longer possible (i.e., the traffic event and its impact), does it realize it must opt for the alternative. It's important, therefore, that autonomous trucking companies and their clients push their map providers to include this kind of event data, or find providers that do.
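
Continuing the toy example, here's how event impact data might turn a merely “slow” segment into an impassable one. The field names (min_lane_width_m, blocked_turns and so on) are hypothetical.

# Illustrative truck width and a hypothetical impact record for the accident
# on segment B -> D: the road is narrowed and the right turn is blocked.
TRUCK_WIDTH_M = 2.6

events = {
    ("B", "D"): {"min_lane_width_m": 2.3, "blocked_turns": {"right"}},
}

def segment_drivable(segment, required_turn=None):
    """True if the truck can physically traverse the segment right now."""
    impact = events.get(segment)
    if impact is None:
        return True                                  # no known event
    if impact["min_lane_width_m"] < TRUCK_WIDTH_M:
        return False                                 # road narrowed below truck width
    if required_turn in impact["blocked_turns"]:
        return False                                 # the manoeuvre we need is gone
    return True

# Flow data said B -> D was slow; impact data says it is impassable.
print(segment_drivable(("B", "D"), required_turn="right"))  # False -> reroute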

MOTION PLANNING

Perhaps worse than delay is the prospect of a truck becoming completely stuck in transit. While on a much larger scale, the recent woes at the Suez Canal are something of a cautionary tale for autonomous trucking, the moral being that navigating tight, complex spaces can be very, very tricky. And this risk is very real: we've recently seen one of the leading AV manufacturers get flummoxed by basic road work. Now imagine if that minivan were an 18-wheeler.

Motion planning, that is, the sequence of actions needed to navigate through a specific area or around an obstacle, is thus of paramount importance. We're talking about the core driving function here. This, too, takes on added complexity when dealing with vehicles the size of a Class 8 truck, where a truck's long stopping distance and wide turn radius add unique technical challenges. For example, a fully loaded semi traveling at 100 kilometres an hour has a stopping distance of around 160 metres, and this number increases dramatically under suboptimal road or environmental conditions. Contrast that with the roughly 90 metres it takes a typical passenger car to come to a full stop from the same speed.
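
Those figures are easy to sanity-check with basic kinematics: stopping distance is reaction distance plus braking distance, or v·t + v²/2a. The reaction time and deceleration values below are illustrative assumptions, chosen to roughly reproduce the numbers above.

def stopping_distance_m(speed_kph, decel_ms2, reaction_s=1.5):
    """Reaction distance plus braking distance, in metres."""
    v = speed_kph / 3.6                  # km/h -> m/s
    reaction = v * reaction_s            # distance covered before braking begins
    braking = v ** 2 / (2 * decel_ms2)   # d = v^2 / 2a
    return reaction + braking

# Assumed decelerations: ~3.3 m/s^2 for a loaded semi, ~8.0 m/s^2 for a car.
print(f"Loaded truck:  {stopping_distance_m(100, 3.3):.0f} m")  # ~159 m
print(f"Passenger car: {stopping_distance_m(100, 8.0):.0f} m")  # ~90 m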

Even in everyday driving, therefore, autonomous trucks need to “see” and “anticipate” further than their consumer counterparts. Standard external sensors, however, may not provide a sufficient sight horizon. A good LiDAR sensor may have a sensing range of 250 to 300 metres: perfectly acceptable for a passenger vehicle, but likely too short-sighted for a large truck. HD cameras and computer vision help, but even they may not properly prepare an autonomous truck for construction, particularly complex road configurations, or conditions in which poor weather limits sensor visibility.

Road work presents special difficulties for large vehicles, as any truck driver can likely tell you. Shifting lanes, sudden merging and speed alterations can create dangerous, difficult motion-planning scenarios. Indeed, today large trucks are involved in nearly one-third of fatal work-zone crashes despite making up only around five percent of vehicle traffic. Proper anticipation is one of the keys to safely navigating such situations, and this is where maps come in. While cameras, typically an autonomous vehicle's longest-ranging sensor, top out at around 1,000 metres of sensing range, a properly updated HD map with a layer of construction-event data has a virtually infinite sightline. Of course, these construction events must be captured with high spatial accuracy and rich impact data; but if they are, an autonomous truck can put itself in the optimal position to safely and successfully navigate the work zone well in advance of seeing it, let alone entering it.
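
In software terms, that “infinite sightline” is simply a query against the map's event layer for hazards along the planned route beyond sensor range. A minimal sketch, with illustrative distances and a made-up event feed:

SENSOR_RANGE_M = 1_000   # roughly the longest-range camera cited above

# Hypothetical construction events, keyed by distance ahead along the route.
route_events = [
    {"kind": "construction", "ahead_m": 4_200, "lanes_closed": ["right"]},
    {"kind": "construction", "ahead_m": 12_500, "lanes_closed": ["left"]},
]

def beyond_sensor_horizon(events, lookahead_m=15_000):
    """Events the sensors can't see yet but the map already knows about."""
    return [e for e in events if SENSOR_RANGE_M < e["ahead_m"] <= lookahead_m]

for event in beyond_sensor_horizon(route_events):
    # e.g. start a gradual lane change kilometres before the work zone
    print(f"{event['kind']} in {event['ahead_m']} m, closed: {event['lanes_closed']}")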

Both the path-planning and motion-planning use cases assume that the map is up to date. Knowing about yesterday's construction event is no help in navigating today's road. But keeping a map current is no easy task, especially with the kind of detailed impact analysis we're talking about here. The most common method of updating maps, aggregating small packets of data from consumer vehicles, simply cannot capture such nuanced, granular data. (It's good for other things, like determining the position of certain road features.) It's important, therefore, that autonomous truck manufacturers, and their customers, really kick the tyres of their map providers and ensure that they're using update modalities that can pick up this sort of detail, such as direct data capture from partner networks.
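
Freshness can also be enforced explicitly, by refusing to plan against event data older than some cut-off. A minimal sketch, with an arbitrary one-hour threshold:

from datetime import datetime, timedelta, timezone

MAX_EVENT_AGE = timedelta(hours=1)   # arbitrary illustration; tune per data layer

def event_is_fresh(captured_at, now=None):
    """True if the event record is recent enough to trust for planning."""
    now = now or datetime.now(timezone.utc)
    return (now - captured_at) <= MAX_EVENT_AGE

stamp = datetime.now(timezone.utc) - timedelta(minutes=20)
print(event_is_fresh(stamp))  # True: captured 20 minutes ago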

A FINAL THOUGHT

This sort of map data has implications for human drivers as well. More efficient route planning and more powerful driver-assistance tools are concrete benefits that good HD map data can deliver to drivers and operators today. It's a good reminder that cross-collaboration between tomorrow's autonomous technology and today's operational software stack can yield many positive results. As always, if you want to get to where you want to go, follow the map.

Author: Ethan Sorrelgreen, chief product officer at road-intelligence company CARMERA.
