Few technological innovations in recent years have generated more attention (and debate) than the seemingly imminent arrival of self-driving cars.
Known within the industry as connected and automated vehicles (CAVs), these once-futuristic concepts edge ever closer to reality – and when they are deployed at Level 4 or higher on the SAE Levels of Automation scale, the anticipated benefits could be breathtaking.
According to the National Highway Traffic Safety Administration, CAVs could virtually eliminate the 94 percent of serious crashes that result from human error. With more than 37,000 people dying annually in motor vehicle-related crashes, and crashes generating nearly $600 billion in economic impact, the prospect of safer, more efficient and convenient travel is eye-opening.
But how far are we from driverless CAVs? Will they deliver on the benefits touted? And what role does high-performance computing (HPC) play in the research being conducted in this field?
To learn more about the state of CAV research, and HPC’s impact on it, I asked two top scientists – Pete Beckman, an expert in supercomputing, and Huei Peng, an expert in the design and control of CAVs – to bring us up to speed … pun intended!
Huei Peng is the Roger L. McCarthy Professor of Mechanical Engineering at the University of Michigan. For 20 years, he has worked on vehicle automation, vehicle dynamics, the design and assessment of safety systems, and human model development – with a special focus on understanding how human drivers err. He is also director of MCity, the university’s public/private partnership devoted to advancing the development – and deployment – of CAVs.
Peter Beckman, co-director of the Northwestern-Argonne Institute for Science and Engineering and co-founder of the International Exascale Software Project (IESP), is a recognized global expert in high-end computing systems. He is the founder and leader of the Waggle project for smart sensors and computing; that technology and software framework is being used by the Chicago Array of Things “smart cities” project to monitor traffic, air quality and other conditions in the city.
They are truly living the theme for SC19 in Denver: HPC is Now.

Michela: Can you each provide a brief overview of your CAV-related research?
Huei: Most of our research is to understand how various aspects of CAV technologies can help address issues like safety, energy consumption and traffic congestion. Our most critical research question involves the challenge of perception. Using a camera, or many cameras, can we understand what is being seen? Is it a bicycle? A pedestrian? How fast is it moving?
Right now, as we look at the sequences of data, the best perception systems achieve detection and classification accuracy of roughly 85 to 90 percent. That’s good, but it needs to get much better before CAVs can be deployed widely.
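To see why 85 to 90 percent is not yet good enough, a quick back-of-envelope sketch helps. All figures below – the hourly object count and the accuracy – are assumptions for illustration, not numbers from the interview:

```python
# Back-of-envelope: what 90 percent per-object accuracy implies.
# All figures are illustrative assumptions.
objects_per_hour = 1000      # hypothetical count of road users a vehicle encounters hourly
accuracy = 0.90              # assumed per-object detection accuracy
missed = round(objects_per_hour * (1 - accuracy))
print(missed)  # 100 missed or misclassified detections per hour
```

Even a small per-object error rate compounds quickly over the sheer number of detections a vehicle must make, which is why researchers push accuracy well beyond 90 percent before wide deployment.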
Pete: We are involved in many projects that address the future of autonomy and mobility. And the core of our research is to advance the notion of edge computing. For CAVs and other forms of autonomy to work, we’re going to have to connect our data sources directly to our computational centers.
It’s a wonderful and exciting time to be doing this research.

Michela: How are you leveraging HPC in your work?
Pete: Everyone would love to have data centers full of supercomputers; you can shift the load around, and add hardware quite simply. Everything is easier to manage.
But for CAVs to meet their potential, I believe the future of supercomputing takes place at the edge. We need a new programming model for supercomputing. Latency is a big issue. If you have a self-driving car, you can’t wait for a picture of a stop sign to travel miles away and pass through a series of data centers before the answer arrives back.
Also, for autonomy to work, we have to deal with bandwidth issues, because new sensors have incredibly high resolution and generate so much data that you have to examine it at the edge. Add in the issues related to resiliency and data privacy, and you have a technology framework that depends on HPC to work.
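The latency argument above can be made concrete with a short sketch. The round-trip latencies here are assumptions chosen for illustration, not measured values:

```python
# How far a vehicle travels while waiting on a perception result.
# Latency figures are illustrative assumptions, not measurements.
speed_mps = 24.6        # ~55 mph expressed in meters per second
cloud_rtt_s = 0.100     # assumed round trip to a distant data center
edge_rtt_s = 0.010      # assumed round trip to a nearby edge node

cloud_distance = speed_mps * cloud_rtt_s   # meters traveled awaiting the cloud
edge_distance = speed_mps * edge_rtt_s     # meters traveled awaiting the edge
print(cloud_distance, edge_distance)       # about 2.5 m vs. 0.25 m
```

Under these assumptions, a car at highway speed covers roughly ten times more road waiting on a distant data center than on a roadside edge node – the kind of gap that motivates moving computation to the edge.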
Huei: Autonomy involves incredibly large quantities of training data to “teach” a perception system to detect road users accurately. In addition, moving vehicles sometimes do not have a good vantage point to detect other road users reliably. Edge computing and cameras residing at critical intersections can be a better solution than on-board automated vehicle solutions.
What today resides in the cloud will, one day, move to edge computing units to reduce communication delays and latency. Moving forward, HPC is needed to address safety issues that require some of the processing to occur locally. We just have to find the right tradeoff between computation and communication.

Michela: There’s been a lot of hype around CAVs. How close are we to the reality of the CAV promise?
Huei: Today, Level 1 through Level 3 vehicle functions are available on cars you can buy from almost any car company. And not that long ago, that would have seemed impossible.
Even truly driverless vehicles – Level 4 – exist, but they have limited operational design domains (ODDs). Today, they operate inside a geo-fenced area, don’t go faster than 25 mph, and don’t operate well in inclement weather. The goal of many OEMs is to move faster – up to 55 mph – and to drive day and night, on sunny and snowy days alike.
As the ODDs keep growing, the number of Level 4 vehicles will keep increasing over time. The future is already here; it’s just not as prevalent as it will be some day.
Pete: I think we have two or three things to consider as the field evolves.
One is the technological impediments. Those are challenging, but the amazing capabilities of machine learning are enabling us to tap into advancements that will be transformative for CAVs. In just a handful of years, we’ll start to see autonomous delivery vehicles on closed-company campuses; and then it will start moving out to open delivery routes.
Another consideration is our regulatory system. So far, it has been caught flat-footed in dealing with autonomy-related issues. Today, if you’re driving and your car skids on a wet road and you crash into a tree, it’s an acceptable mistake. We’re error-prone humans. But can CAVs make mistakes? What is our level of acceptance there? We need to align our expectations with the rules and regulations needed to oversee the industry.
Michela Taufer, PhD, General Chair, SC19
Michela Taufer is the Dongarra Professor in the Min H. Kao Department of Electrical Engineering & Computer Science, Tickle College of Engineering, University of Tennessee, Knoxville.