How and where vehicle data gets processed continues to evolve.
Carmakers are modifying their data processing strategies to include more processing at or near the source of data, reducing the amount of data that needs to be moved around within a vehicle to both improve response time and free up compute resources.
These moves are a world away from the initial idea that terabytes of streaming data would be processed in the cloud and sent back to the vehicle. But they also are a recognition that even within a vehicle there needs to be a much more detailed strategy about what to process centrally versus locally, and what impact those partitioning decisions can have on reaction time, power efficiency, reliability, security and safety.
“There’s a reason why people have called the next generation of cars ‘smartphones on wheels,’” said Burkhard Huhnke, vice president of automotive at Synopsys. “The process we’ve experienced in smartphones has been developing an infrastructure in between the back end, the data center, and the smartphone device itself. There’s a data exchange between the data center and the smartphones to ensure that connectivity is there and the data flow in both directions is working properly. There’s an interface that allows you to add apps. But underneath there’s also a layer which can’t be accessed, although sometimes this has been used by people to hack into it. This requires an update over the air during the night or when necessary. This description of a smartphone on wheels can be applied to the next generation of cars — with the difference that it can kill people.”
As a result, a higher level of functional safety and reliability is required, along with multiple security layers. “The conversation between a fleet of cars and a data center is the same because you can use all AI machines in the data center to share updates with your fleet of cars, and to the cars over a lifetime,” Huhnke said. “That’s the same thing smartphones have done. And just as Apple or Google do with their phones, this requires an architecture.”
Across the spectrum of OEMs, data types tend to be common across sources such as radar, LiDAR, cameras and other sensors. But where and how the data is processed, what happens to it once its job is done, and how to keep that data safe are still evolving.
“When we think about the data that’s being processed, the lion’s share of it — more than 90% — is really data that’s generated by sensors. It’s processed locally, then dropped on the ground,” said Jack Weast, senior principal engineer and chief systems architect for the Autonomous Driving Solutions Division at Mobileye. “That’s important for privacy reasons. There are cameras in these cars, but these cameras aren’t recording all of us. If we’re standing on the street corner, yes, of course, we’re looking for pedestrians so we can do an emergency brake procedure if needed, but there’s no images or video or anything like that going up to the cloud. We’re doing local processing of that data to recognize other objects, whether they be pedestrians or cars or curbs or road markings or street signs or manhole covers or pot holes — everything that you can perceive in an environment. That’s most of what’s being processed.”
Within this, there are different approaches. Often, different kinds of sensors will produce different volumes of data. Just as digital cameras from 10 years ago generated small files compared with the many-megabyte photo a smartphone takes today, file size has grown along with camera fidelity.
“It’s not always necessarily uniform as you go around the automated vehicle,” said Weast. “You might have different fidelities or strengths of sensors in different locations. Parking camera assists are looking at a very short range, looking down and around the car for parallel parking assist kinds of things. For a really long range camera, where you’re trying to see objects really far away, you want really high resolution because you don’t want one pixel to represent that car. You want lots of pixels to represent that car so you can see it. And so the different fidelity of the sensors also contributes to kind of different amounts of data that’s being processed. Then, you think about the various metrics that exist that people throw around, like TOPS, which is an AI performance metric. That’s difficult because it’s a combination of what the hardware can do, but also the algorithm that’s implemented on top. If you have a much more efficient algorithm on top, you actually don’t need as many TOPS on the bottom and you’re processing less, but delivering the same use case.”
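Weast’s point about algorithmic efficiency trumping raw TOPS can be made with simple arithmetic. The sketch below uses entirely hypothetical workload numbers, but it shows why a leaner perception algorithm delivers the same use case with a fraction of the compute:

```python
# Back-of-the-envelope illustration (all numbers are hypothetical):
# the TOPS a workload demands is a function of both the frame rate
# and how many operations the algorithm spends per frame.

def required_tops(ops_per_frame: float, frames_per_sec: float) -> float:
    """Tera-operations per second needed to sustain a perception workload."""
    return ops_per_frame * frames_per_sec / 1e12

# Two hypothetical pipelines delivering the same use case at 30 fps
baseline = required_tops(ops_per_frame=2e9, frames_per_sec=30)    # 0.060 TOPS
efficient = required_tops(ops_per_frame=5e8, frames_per_sec=30)   # 0.015 TOPS

print(f"baseline:  {baseline:.3f} TOPS")
print(f"efficient: {efficient:.3f} TOPS ({baseline / efficient:.0f}x less compute)")
```

A 4x reduction in operations per frame translates directly into a 4x smaller hardware requirement, which is why TOPS alone says little about what a platform can actually deliver.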
That kind of balancing act is becoming more common across the automotive design world. “We’ve been working with a Tier 1 company on ADAS and a radar chip,” said Kurt Shuler, vice president of marketing at Arteris IP. “The radar chip is doing a lot of local processing. So you pick a target and classification, and most of that processing is done right there at the radar. Then you send objects to a central brain. But that radar is very complex from a digital logic standpoint.”
That solves several key issues, Shuler said. First, latency is reduced. Second, there is less contention for resources such as memory, because the radar has its own memory. And third, the speed of communication throughout the car is improving. The same is happening with cameras.
“In the past, when you talked about ADAS, it was all about cameras and processing of multiple streams. Now, the cameras are doing their own classification, and the centralized brain is able to take in other data. Sensor fusion is finally happening in cars.”
This is a big step forward, because moving large quantities of data around a car is a big challenge. “If you look at the rearview camera that we have been using for the bumper camera, we’ve been using it for a long time,” said Pulin Desai, group director for product marketing, management and business development for Cadence’s Tensilica Vision and AI DSP IP. “It’s very simple. The camera is displaying images on a screen. It’s not doing any processing. It’s just displaying it. So from the rear camera, all the way from the rear bumper to the display, you’re bringing the whole thing. You’re bringing it at whatever resolution and whatever frame rate, and that means that if you are connecting a display to the bumper then you’re bringing that much data. If you start adding some intelligence, then you probably don’t need to bring it all forward. This goes back to discussions people have been having about whether there is a central place where everything is processed. You’re making some decisions on the edge of the thing, and whether it’s the bumper or on the side view or the front camera, you send only certain things, and then a user somewhere makes a decision. We see both of those approaches used — bringing a lot of data into the central ECU, but also doing a lot of processing at the edge.”
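The bandwidth gap Desai describes between shipping raw video forward and sending only detected objects is easy to quantify. The figures below are hypothetical but representative of an uncompressed 1080p feed versus a short per-frame object list:

```python
# Rough sketch (hypothetical figures) of why edge processing shrinks
# in-vehicle traffic: a raw rear-camera stream vs. a list of detected
# objects sent on to the central ECU.

def raw_stream_mbps(width: int, height: int, bytes_per_pixel: int, fps: int) -> float:
    """Uncompressed video bandwidth in megabits per second."""
    return width * height * bytes_per_pixel * fps * 8 / 1e6

def object_list_mbps(objects_per_frame: int, bytes_per_object: int, fps: int) -> float:
    """Bandwidth of per-frame object descriptors (position, class, velocity)."""
    return objects_per_frame * bytes_per_object * fps * 8 / 1e6

raw = raw_stream_mbps(1920, 1080, 2, 30)   # 1080p YUV422 at 30 fps: ~995 Mbps
objects = object_list_mbps(20, 64, 30)     # 20 objects/frame, 64 B each: ~0.31 Mbps

print(f"raw video:   {raw:,.0f} Mbps")
print(f"object list: {objects:.2f} Mbps")
```

Under these assumptions the object list is more than three orders of magnitude smaller than the raw stream, which is the difference between needing a dedicated high-speed link from the bumper and fitting comfortably on a shared in-vehicle network.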
Still, these are relatively new ideas for the auto industry. To put this in perspective, current model year cars were designed about eight years ago, noted Ron DiGiuseppe, automotive IP segment manager at Synopsys. A lot of these ADAS functions, such as automatic emergency braking, are implemented in the distributed ECUs.
“It’s common to have an AEB camera sensor located in the windshield, or the rearview mirror, and that’s where the ADAS processor MCU would be co-located in that distributed ECU,” DiGiuseppe said. “Some cars available today have integrated domain controllers where that distributed function is co-located with other ADAS functions in a centralized ADAS compute processing module. When you centralize the remote processing, you still have the sensors on the periphery, like on the windshield or in the bumper. You might have radar sensors or image sensors, but the trend is for centralized processing of the data.”
That can include a centralized ADAS module that includes multiple applications, such as automatic emergency braking and lane keeping, where the camera sensors keep the car centered in the lane.
“This trend to centralize all the compute processing of that ADAS data into a centralized module has some big impacts on where the data is processed and how much of the data is processed,” DiGiuseppe said. “You can imagine the in-vehicle network, the gateway that data has to transfer from the remote sensor across that central gateway to that central ADAS processing compute module. So the bandwidth of the gateway is increasing to handle that remote data transfer. And with multiple applications on that same processing module, it’s like a hypervisor processing those applications in parallel. The amount of data and the processing compute power is increasing significantly, and that’s a function of the architecture. As a result, the architecture is changing from those distributed ECUs to a centralized processing module and, therefore, the amount of data, the location of where the data is processed and the processing power is growing along with that architectural change. That’s a big impact on how and where the data is being processed.”
Another aspect to this is the mix of sensors — cameras, radar, LiDAR. There is now talk about combining all of those with sensor fusion. “Then you have a world model — a virtual representation of the world — and you can make decisions and drive accordingly,” said Weast. “The problem with that is you’re reliant on all three of those sensor types, all working perfectly, all at the same time, for your sensor fusion process to work. If you have trouble — environmental conditions or maybe a sensor failure, or you got mud on your windshield — then your sensor fusion doesn’t work. Then what do you do? Your car may not be able to operate safely, even if it’s just a short-term transient kind of error.”
For this reason, Weast said Mobileye is developing a self-driving car that can operate with cameras only, and another one that can drive with radar and LiDAR only. He stressed they won’t deploy either on its own, but will combine both of the software stacks and sensors into one car, in order to achieve system-level redundancy. “It may mean more data, but that goes back to the algorithmic efficiency of your hardware and your software and your ability to mitigate against that.”
The mindset behind this is safety by design. “Whether that means we can use formal verification methods like, for example, for our safety model, Responsibility-Sensitive Safety [published openly]. Here’s all the mathematical proofs and everything behind it because it’s a formal model, and you can do formal verification with mathematics and logic and things. That’s an example of taking a safety-by-design approach,” he said.
Where things are headed can be illustrated with HD Maps, which are high-precision maps used with autonomous vehicles.
“There’s an incredible amount of data there, but what’s actually happening at this point in time is there’s communication over the Internet while the vehicle is driving, where it can take the GPS location and only send you the changes in the direction that you’re headed,” said David Fritz, senior director, autonomous and ADAS SoCs at Mentor, a Siemens Business. “Instead of many megabytes having to be pulled down, surprisingly enough, it’s a few kilobytes now, even in complex urban areas.”
The result is that vehicles do not need a huge bandwidth pipe for the telemetry to start getting that information. “What’s most interesting about all of that is the information is also flowing in the opposite direction,” Fritz explained. “Imagine that you’re driving down the road and a tree falls in front of you. The object detection and classification mechanism can then say, ‘Hey, there’s a tree on the road,’ and it sends that information back out to the HD Maps provider so it can tell everybody around you there’s an obstacle in the road. And when they do that, it’s 1k or 2k bytes. It’s not a huge multi-megabyte image of a tree. It is that extra processing on the edge, both coming to the car and going from the car, that’s physically reducing the amount of data that needs to be transmitted. A lot of that data is really not a whole lot more than an XML text-based message. Also, that message can easily be encrypted and it doesn’t take a lot of horsepower to encrypt or decrypt. If it was a huge amount of data, then you’d need more compute power to do the encryption and decryption.”
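The kind of compact, text-based obstacle report Fritz describes can be sketched as follows. All field names and values here are invented for illustration, and the payload is shown as JSON for brevity, though a deployment could just as well use XML as he suggests:

```python
import json

# Hypothetical obstacle report: the car uploads a small structured
# message describing what its edge perception already classified,
# rather than a multi-megabyte image of the scene.
report = {
    "type": "obstacle",
    "class": "fallen_tree",
    "lat": 37.774929,
    "lon": -122.419416,
    "heading_deg": 83,
    "lane": "rightmost",
    "confidence": 0.97,
    "timestamp": "2021-06-01T14:32:07Z",
}

payload = json.dumps(report).encode("utf-8")
print(f"report size: {len(payload)} bytes")  # well under the 1-2 KB Fritz cites
```

Because the detection and classification already happened at the edge, only the result travels, and a payload this small is also cheap to encrypt and decrypt, exactly the property Fritz points to.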
There’s no question standards will help here. While things are evolving quickly, if standards come along too early, then all these companies that have invested in R&D don’t recoup their investment from developing a proprietary solution. However, when things start to coalesce, then it makes perfect sense. “You want to worry about cost and effectiveness,” Fritz said. “You want to have reuse and all those sorts of things, and standards have a play there. In terms of communication to and from an infrastructure, that makes a lot of sense, as long as it’s built with security in mind from the start.”
Another type of data on the horizon will allow every vehicle in the fleet to potentially have its own configuration.
“In next-generation automotive platforms, there will be the ability for the customer to do over-the-air application downloads to change the behavior or the experience of the vehicle,” he noted. “It’s essentially software, and it’s going to have to run on the same compute resources that are already there. Something has to make sure whether or not it’s even possible to do that before you allow them to download that particular application. And even if they can do that, then you need to start thinking about how the system adjusts to the fact that there’s a new software application running when something goes wrong. In this scenario, how do I do dynamic redundancy? How does this reconfigure where the software is going, what it’s doing, and what’s most important to spend our efforts on? Now, imagine you have that vehicle, it doesn’t let you download this app that you want, so you pull into the dealership, they slap a new printed circuit board in, and off you go. That means every vehicle in the fleet could be configured differently. Something has to understand that, and OEMs we’re talking to are talking about these massive databases, where they can actually trace every single individual part in that vehicle, its part number, its history, how that relates to requirements, and how that could potentially impact whether or not a particular configuration is available. For example, if you went with a second-source set of brake pads, the specs on those brake pads are not exactly like the primary source’s. Therefore, you need more compute power because you can’t wait as long before you apply braking. That means incredible amounts of data have to be available. It would be like each time you turn your car on, it’s going to go out to the Internet and say, ‘I’m here. Tell me what I can do or what I can’t do.’ This means the interaction between the vehicle itself in the field, and the databases that are holding all this information, is probably how 90% of all those infrastructure communications are going to have to happen.”
All of this needs to happen securely, too, because in a vehicle security and safety are closely intertwined.
“The car is talking to the outside world, so there are less hardware-centric aspects of how to securely get data from the mothership, whether it’s connected to some cloud environment or something else, into the car,” said Jason Oberg, CEO of Tortuga Logic. “This could be anything from securing firmware updates to full diagnostics, for example. That’s the interface and the channel to the outside world, which is challenging in and of itself, so it’s independent of the car. But how do you manage that securely? When you actually get into the car, when data is actually there, security can be distilled into three general areas — confidentiality, integrity and availability. The notion of confidentiality is where you’re protecting secrets, which is typically what people think of when they think of security. Integrity has a lot of overlap with functional safety, and is about protecting something from being modified, including someone trying to muck with something or change something. And availability is about making sure the system can always respond.”
Once data enters the car, say as a firmware update or other information that needs to be programmed into an ECU, it must be securely managed from a confidentiality standpoint. “If it’s a firmware update that’s targeted toward something mission-critical, like the braking system, you don’t want that getting plugged into the infotainment system, where someone can just tap into the onboard diagnostic port and pull it out — or something that’s easily accessible, because then you could prevent that from getting programmed or you could steal it. A bug could also be found that allows someone to exploit across a fleet,” Oberg said.
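The integrity leg of the confidentiality/integrity/availability split Oberg describes can be illustrated with a minimal sketch of firmware verification. This is not a production scheme — real vehicles typically use asymmetric signatures and hardware key storage, and the key and image below are placeholders — but it shows the basic check an ECU would perform before programming flash:

```python
import hashlib
import hmac

# Hypothetical per-ECU key, shared with the OEM backend at manufacture.
ECU_KEY = b"per-ecu-secret-provisioned-at-manufacture"

def tag_firmware(image: bytes) -> bytes:
    """Backend side: compute an HMAC-SHA256 tag over the firmware image."""
    return hmac.new(ECU_KEY, image, hashlib.sha256).digest()

def verify_firmware(image: bytes, tag: bytes) -> bool:
    """ECU side: constant-time comparison before accepting the update."""
    return hmac.compare_digest(tag_firmware(image), tag)

image = b"\x7fELF...brake-controller-fw-v2.1"  # placeholder firmware blob
tag = tag_firmware(image)

assert verify_firmware(image, tag)               # genuine update accepted
assert not verify_firmware(image + b"\x00", tag) # tampered image rejected
```

A tampered image, even by a single byte, fails verification and is never programmed, which is what prevents the kind of fleet-wide exploit Oberg warns about. Confidentiality would additionally require encrypting the image in transit.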
Security is a multi-layered challenge. Data needs to be securely transmitted to the vehicle, but it also has to be managed inside the car. That involves both the processing and storage of data across all systems, and maintaining separation between functions such as braking and infotainment.
“Once the data have been computed and deployed, you need to have that isolation maintained throughout the lifecycle,” Oberg said. “At the same time, you also want to make sure the data itself is protected at a component level while it’s actually running, because you don’t want someone being able to, in real time or while the system is operating, steal the information or be able to modify it.”
All of these issues are being sorted out behind the scenes, although it’s unlikely consumers will see the fruits of those efforts until at least 2025.
“The path is already set, and every OEM that does this is going to have their own proprietary mechanism for communicating with the mothership to make sure that everything’s going to work out fine,” Fritz said. “There’s an opportunity for a company — it could be Microsoft or Cisco or whomever — to start supporting that kind of quasi-realtime data communication to help make those things happen.”
In fact, all sorts of new opportunities are opening up around the edges as this market begins to take shape.
“Functional safety, along with the various functional safety requirements and standards, demands things that automotive companies are mandated to follow, but a lot of them are doing so in a very ad hoc manner,” said Simon Rance, vice president of marketing at ClioSoft. “A lot of designs within vehicles are highly configurable, and they’re configurable even on the fly based on the data that they’re getting right from sensors. All of that does have to be traced, because if something goes wrong they’ve got to trace it and figure out what the root cause is. That’s where there’s a need to be filled. The automotive company itself has to mandate the process. Then it’s up to the suppliers, including chipmakers, to conform and get into that type of environment. There’s a standard for accountability, but there doesn’t seem to be a standard process that these automotive companies are taking.”