Elon Musk Declares Precision Maps A “Really Bad Idea” – Here’s Why Others Disagree

HD Map sample from Navteq/Here shows detailed image and structure of everything on the road. If it’s still there, you don’t have to figure out what it is. HERE

Last week, I examined Tesla's plan to build full self-driving without LIDAR when all other major teams are betting on that technology. This is not Tesla's only contrarian bet. They have also decided not to use high-detail maps in their project, and again most other teams plan otherwise.

"We briefly barked up the tree of high precision lane line [maps], but decided it wasn't a good idea." — Elon Musk

There are many sorts of maps a robocar might use. The most basic maps are the sort found in navigation systems and your phone. They just show where the roads are and how they connect. Today, most such maps, including Tesla's, map each individual lane, so they know at each point how many lanes there are, how they connect, and what function each lane has.

Some maps go much further. They will record not just how many lanes there are but precisely how they are shaped. At a level above that, the maps may contain position, shape and meaning information for some or all of the things you might see on the road — parking spaces, driveways, guardrails, road signs, traffic signals, crosswalks and anything else that might affect traffic, road rules and where a car might stop or pull over.
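To make that concrete, here is a minimal sketch of the kind of data structure such a map might contain. The field names are purely illustrative and do not come from any particular vendor's format (HERE, Lanelet2, NDS and the like are far richer):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float, float]   # x, y, z in a local map frame, meters


@dataclass
class Lane:
    lane_id: str
    centerline: List[Point]          # precise lane shape, not just a lane count
    left_boundary: List[Point]
    right_boundary: List[Point]
    successors: List[str] = field(default_factory=list)   # lane connectivity


@dataclass
class TrafficLight:
    light_id: str
    position: Point                  # exactly where in space the head hangs
    controls_lanes: List[str] = field(default_factory=list)


@dataclass
class MapTile:
    lanes: List[Lane]
    traffic_lights: List[TrafficLight]
    crosswalks: List[List[Point]]    # polygons
    fixed_objects: List[Point]       # guardrails, sign posts, hydrants, barriers
```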

More detailed maps, sometimes called "HD" maps, will contain things like images of the road surface (often taken in infrared by the LIDAR) and surroundings, including the location of trees, hydrants, mailboxes or other physical objects in the environment. These objects are tracked not just to understand them, but to assist in the first robocar task, known as localization, namely finding out exactly where you are on the map. Exactly, as in within a few centimeters. While GPS is one of the tools that helps with that, it's much too unreliable for that level of accuracy and precision.
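As a rough illustration of the localization step, here is a toy sketch: assume the car has already matched a few landmarks it currently sees against landmarks in the map, and estimate the small position correction that best aligns them. Real systems do full 3D scan matching with particle or Kalman filters; everything here, names and numbers included, is a simplified assumption:

```python
import numpy as np

def estimate_pose_correction(observed_xy, mapped_xy):
    """Given landmark positions as the car currently perceives them (using its
    rough pose estimate) and the positions of the same landmarks in the map,
    return the 2D translation that best aligns them."""
    observed = np.asarray(observed_xy, dtype=float)
    mapped = np.asarray(mapped_xy, dtype=float)
    # For a pure translation, the least-squares fit is just the mean residual.
    return (mapped - observed).mean(axis=0)

# Made-up example: three poles the car has matched against the map.
observed = [(10.2, 4.9), (25.1, -3.2), (40.3, 1.1)]
mapped   = [(10.0, 5.0), (24.9, -3.1), (40.1, 1.2)]
dx, dy = estimate_pose_correction(observed, mapped)
print(f"shift the pose estimate by ({dx:.2f} m, {dy:.2f} m)")
```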

If you don't have a detailed map, then there is nothing to localize against at that precision, but you still need to figure out where you are in terms of what lane you are in, and where you are (to a less precise degree) on the lower-detail map.

Most teams use a map not just to localize, but to help understand the world. They can understand what things are because the mapping process — a combination of both machine and human effort — established earlier where they are and what they mean.

Map information can be used for many things. At the most extreme level, you can drive the lanes in the map rather than what you see on the road. This can fail if the lanes have changed since the map was made, so generally, nobody wants to rely that much on the map. You can know where fixed obstacles are even if you can’t always see them with other sensors.

One common use of maps is to know where to expect things, and thus to understand them. In particular, a map can be made of all the traffic lights at an intersection, what they mean, and precisely where in space they are hanging. A car that knows where it is on the map can know exactly where to expect the traffic light. This can make recognizing and decoding the light much easier, and can almost eliminate the risk of being fooled by a false light. For most cars, the strategy on lights is simple: "if you don't see a green light, don't go."
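To illustrate the idea, here is a sketch of how a mapped traffic light position might be turned into a small search region in the camera image once the car has localized. The pinhole model is simplified (no distortion, no real calibration) and every name and number is made up for illustration:

```python
import numpy as np

def project_pinhole(p_cam, fx, fy, cx, cy):
    """Project a 3D point in the camera frame (x right, y down, z forward)
    to pixel coordinates with a basic pinhole model (no lens distortion)."""
    x, y, z = p_cam
    if z <= 0:
        return None                          # behind the camera
    return (cx + fx * x / z, cy + fy * y / z)

def traffic_light_search_box(light_pos_map, map_to_camera, intrinsics, margin_px=40):
    """Turn a mapped traffic light position into a pixel box to search.
    map_to_camera is the (R, t) transform obtained from localization."""
    R, t = map_to_camera
    p_cam = R @ np.asarray(light_pos_map, dtype=float) + t
    uv = project_pinhole(p_cam, *intrinsics)
    if uv is None:
        return None
    u, v = uv
    return (u - margin_px, v - margin_px, u + margin_px, v + margin_px)

# Made-up numbers: a light 40 m ahead, 2 m to the left, 5 m above the camera.
R, t = np.eye(3), np.zeros(3)                # pretend map and camera frames align
light_in_map = np.array([-2.0, -5.0, 40.0])  # expressed in camera-frame axes here
intrinsics = (1000.0, 1000.0, 960.0, 600.0)  # fx, fy, cx, cy
print("search this pixel box:", traffic_light_search_box(light_in_map, (R, t), intrinsics))
```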

Most of all, maps help the car understand the world around them. When they see something that was seen before during mapping, there is a chance that the map builder can understand what it is, and remember that. In particular, it can come to that understanding much better than a car can do when driving down a road with no memory. AI tools can process the image with as much CPU time as needed, on big servers and access to the world’s data. A car must figure it all out with more limited processors and in real time.

Often you will gather data more than once in mapping. Indeed, every street can be “re-mapped” every time a new car drives over it. Each drive is from a different lane or direction, and it can be very helpful in understanding what something is to see it at different times from different angles.
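One way to picture how repeated passes sharpen a map: each drive yields a noisy estimate of where, say, a sign sits, and the map builder can fuse those estimates, weighting each by its confidence. A toy sketch, not any production pipeline:

```python
import numpy as np

def fuse_sightings(positions, confidences):
    """Combine several noisy position estimates of the same mapped object
    (e.g. a sign seen on different drives, from different lanes and angles)
    into one refined estimate, weighting each sighting by its confidence."""
    positions = np.asarray(positions, dtype=float)
    weights = np.asarray(confidences, dtype=float)
    return (positions * weights[:, None]).sum(axis=0) / weights.sum()

# Three passes saw the same stop sign; the clearer views get more weight.
sightings = [(102.3, 55.1), (102.6, 54.8), (102.4, 55.0)]
confidences = [0.9, 0.4, 0.8]
print("fused sign position:", fuse_sightings(sightings, confidences))
```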

Human beings can review decisions about what things mean, particularly if the software has any uncertainty. Humans can also review software decisions to assure they are correct.

After a map is produced, the next car to drive that road can check that the map is indeed right, either with humans in the car, or in an automatic fashion.

It’s the difference between driving a road you’ve never seen before, and one you’ve driven 100 times. Humans can drive roads they have not seen before, but they are better when seeing things they remember.

Sometimes people mistakenly call maps "another sensor." This is a bad habit. They can tell you about things you can't sense, but they are not a sensor. Rather, they primarily help you understand what you are seeing. They combine with what might be termed "instant analysis" to create an understanding of the world one must drive in. They can give you the probable state of things out of your vision, but should not be viewed as sensing them.

Driving without a map is making a map

In most cars, the effort to drive without a map is effectively the effort to create a (simple) map while you drive. The car must figure out where the lanes are, and place and understand everything relevant in the environment, then plot a path through it. The no-map approach involves forgetting what was learned before and doing it all again. Once you can drive with high safety without a map, making and updating maps becomes an automatic process which is much less expensive.

"High precision maps and lanes are a really bad idea … any change and it can't adapt." — Elon Musk

Of course, roads do change. Lanes get repainted, construction zones arise, potholes appear and much more. There is probably some road changing every day in a big city. At the same time, most individual road segments see change very rarely. As such, even a system that, as Musk describes, can't adapt to change can do fairly well, and this is why he views it as another type of crutch. As with LIDAR, Musk feels that you need an instant analysis system so good that it gains little from maps, and that depending on maps slows down your development of that necessary extremely good system.

But teams realize they must be able to handle roads that have changed from their map. It is actually true that you can do a great deal with a pretty minimal system. Fortunately, if your map is detailed, it is immediately apparent, thanks to the perfect memory of computers, when the road has changed. In addition, most (though not all) road changes are planned in advance and published in a database where companies can put them into their maps, so they are not a surprise. That will improve with time, and surprise construction will become very rare, though not entirely unknown.

When your map matches the road, the map is very useful. Most have no doubt that the ability to have the superior, multi-viewpoint, arbitrary-CPU, human-reviewed and tested map information makes you safer. You have superior perception and understanding of all the things you see that match your map. There will always be things not on your map (like cars and other moving objects) as well as places where the static objects changed, and you must deal with those.

We might imagine the following situations with slightly different levels of safety:

1. Driving on a road where the map is fully correct: Safety level X+

2. Driving on a road where the map is noticed to be wrong: Safety level X1

3. Driving with no map in a system that does not use maps (Tesla): Safety level X2

4. Driving on a road where the map is wrong and you fail to notice that: Safety level X-

To understand Tesla's bet, one must examine the difference between X1 and X2, and the frequency of situation 4 with X-. In theory, X1 and X2 are the same. If you can make a car that drives without a map, you should also be able to make a car that drives just as well when the road has changed, because at a minimum you can just switch into "drive with no map" mode.

I suspect Tesla believes that’s not true, because a team that relies on the X+ level it gets from a map might not work as hard on its no-map driving, and thus not do quite as good a job. That’s possible, and depends on the decisions of that team.

Tesla plans to have their cars operate in situation 3 all the time. This does not mean the same safety level on all roads. There will be road types that any system handles better than others. With maps, however, you always handle roads in situation 1 at the top level. Because of human oversight of the maps, you will very rarely fail to understand them as well as humans do. Humans make mistakes, but with money you can have multiple eyes on the map and get a very good quality level. In particular, you will deliberately put more human oversight on the "hard" roads. X+ should be a fair bit better than X1/X2, but let's presume for now that it's only a little bit better.

The reality is, the vast, vast majority of driving will be situation 1. As such, even if X+ is only a little better than X2, and even if X1 (road changed) is for some reason significantly worse than X2 (the Tesla no-map approach), the fact that you do so much more of situation 1 than situation 2 means the overall safety level is still higher.
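The argument is essentially a weighted average over how often each situation occurs. A toy calculation with entirely invented numbers (these are illustrative scores, not measured safety figures):

```python
# Purely invented, illustrative scores -- higher means safer.
X_PLUS = 1.00   # situation 1: driving on a correct map
X1     = 0.90   # situation 2: map noticed to be wrong, fall back to map-free driving
X2     = 0.92   # situation 3: never uses a map at all

# Suppose 99% of miles are driven on roads where the map is still correct.
p_map_correct = 0.99

with_maps    = p_map_correct * X_PLUS + (1 - p_map_correct) * X1
without_maps = X2

print(f"with maps:    {with_maps:.3f}")     # 0.999
print(f"without maps: {without_maps:.3f}")  # 0.920
# Even with X1 below X2, the map-using car spends so much time at X+ that
# its overall average comes out ahead -- which is the argument above.
```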

This leaves the big question — how often does situation 4 occur? If it's frequent, bad things could definitely happen. It should be very, very rare if the maps are detailed and the software is anywhere near as good as that used in situation 3. The map can be, and often is, like a sort of distilled photograph of the fixed objects of the world. You can compare what you see with the photograph and it's immediately obvious when it's different. If your map shows the lane markers going straight, but your image of the world shows them bending right, you know immediately that there is a change. To fool this test and get into situation 4, the new world would have to look identical to what was mapped, to the lasers and cameras, and yet at the same time mean something completely different.
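Here is a sketch of that "distilled photograph" comparison: check the lane geometry the car perceives right now against the mapped geometry and flag the road as changed when the discrepancy gets too large. The function name, sampling scheme and threshold are all made up for illustration:

```python
import numpy as np

def map_matches_perception(mapped_lane_xy, perceived_lane_xy, tol_m=0.5):
    """Compare a mapped lane centerline against the one the car perceives
    right now (both sampled at the same stations, in the car's local frame).
    Return False when the road appears to have changed."""
    mapped = np.asarray(mapped_lane_xy, dtype=float)
    perceived = np.asarray(perceived_lane_xy, dtype=float)
    errors = np.linalg.norm(mapped - perceived, axis=1)
    return bool(errors.max() < tol_m)

# The map says the lane runs straight; perception sees it bending right ahead.
mapped    = [(s, 0.0) for s in range(0, 50, 5)]
perceived = [(s, 0.0 if s < 25 else 0.1 * (s - 25)) for s in range(0, 50, 5)]

if not map_matches_perception(mapped, perceived):
    print("Road differs from map: fall back to map-free driving, flag for re-mapping")
```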

As noted in my earlier article on Tesla fatal accidents, detailed maps could have allowed Tesla to avoid the fatalities. For Walter Huang, maps would have revealed both the shape of the off-ramp, so that the vehicle did not mistakenly think the ramp "gore" was a new lane, and the presence of the crash barrier, marking that area as a clear no-drive zone. In the two fatal accidents involving going under the broad side of transport trucks crossing the road, maps would have verified that no large stationary radar targets were present over the road in those locations, and as such, that the large radar return from these trucks indicated a probable obstacle. Tesla hopes to make their vision systems so good that they can figure all this out without maps.
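As a sketch of the truck case, the logic might look like the following: a large stationary radar return is normally dismissed as an overpass or sign gantry, but if the map lists no overhead structure near that spot, it is far more likely to be a real obstacle. The function, the map query and the radius are assumptions for illustration, not anyone's actual code:

```python
import numpy as np

def is_probable_obstacle(radar_return_xy, mapped_overhead_xy, match_radius_m=15.0):
    """A large stationary radar return is usually dismissed as an overpass or
    sign gantry. With a map of where such overhead structures really are, a
    return that matches none of them is much more likely to be something
    actually blocking the road -- like the broadside of a crossing truck."""
    p = np.asarray(radar_return_xy, dtype=float)
    for s in mapped_overhead_xy:
        if np.linalg.norm(p - np.asarray(s, dtype=float)) < match_radius_m:
            return False     # explained by a known overhead structure
    return True              # nothing on the map explains it: treat as an obstacle

# The map lists only one gantry, and it is far from this stretch of highway.
overhead_structures = [(1200.0, 40.0)]
print(is_probable_obstacle((85.0, 0.0), overhead_structures))   # -> True
```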

Do roads change all the time?

We may imagine that roads change all the time, because we encounter construction zones and restripings every day. They are common, but what is quite rare is […]
