Optimizing Hardware Faster

Silexica’s CEO talks about why high-level synthesis has become so important in cars, financial trading, robotics and aerospace.

Maximilian Odendahl, CEO of Silexica, sat down with Semiconductor Engineering to talk about high-level synthesis and the changing role of this technology in everything from automotive and robotics to AI. What follows are excerpts of that conversation.

SE: Automotive is a big market for AI, but it seems as if the whole industry has put the brakes on the autonomous driving effort. What’s changed?

Odendahl: This isn’t that surprising. Two years ago, I spoke with an automotive executive who told me the whole automotive industry has no clue about what is going to happen and how difficult it will be to do verification and validation. Since then, the whole German car industry has slashed autonomous driving technology development because of reliability issues.

SE: What does this mean for the direction that high-level synthesis is heading in?

Odendahl: The direction hasn’t changed, but in the past HLS was not usable by the software guys. The main push right now is aerospace and defense and radar, and we are getting a lot of interest from the financial industry for things like low-latency trading. All of them have used FPGAs for a long time. So it’s not a new application, but they haven’t enabled their software guys to use HLS over the past 10 years even though they wanted to use it.

SE: In aerospace and defense, they’ve been working with single-core processors at older nodes. That’s changing.

Odendahl: And for a portable or common filter, that’s perfect for using as much parallelism as you can. That was always one of the best things about an FPGA. There is unlimited parallelism. So for beamforming for radar or LiDAR applications, that’s perfect.
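As a concrete illustration of the parallelism he describes, here is a minimal delay-and-sum beamforming sketch in HLS-style C++. The channel count, names, and pragma placement are illustrative assumptions, not something taken from the interview:

```c++
// Minimal sketch: one delay-and-sum beamformer output sample.
// NUM_CHANNELS and all names are invented for illustration.
#define NUM_CHANNELS 16

int beamform_sample(const int samples[NUM_CHANNELS],
                    const int weights[NUM_CHANNELS]) {
    int acc = 0;
sum_loop:
    for (int ch = 0; ch < NUM_CHANNELS; ++ch) {
#pragma HLS UNROLL  // replicate the multiply-accumulate for every channel
        acc += samples[ch] * weights[ch];
    }
    return acc;
}
```

With the loop fully unrolled, the FPGA performs all sixteen multiply-accumulates in parallel rather than sequencing them on a processor, which is exactly the property that makes radar and LiDAR beamforming a natural fit.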

SE: In the 1990s, quantitative analysts built some pretty sophisticated models for Wall Street firms, but they weren’t able to keep up with the daily changes. What’s different today?

Odendahl: There’s a new trading algorithm coming out that is very fast, but they don’t have hardware guys in the loop. Who’s going to do that? If you can use HLS and put it on an FPGA, that’s a huge competitive advantage. If you need a year to develop a chip, you miss the market because someone already came up with a better algorithm.

SE: Software always has been approximate, whereas hardware is rigid. Does HLS, which is a higher level of abstraction, allow you to bridge these worlds?

Odendahl: It was positioned to be high-level, where you could push a button and be done. What’s changed is that you’re fiddling with all of the different pragmas. If you think about array partitioning or array reshape pragmas, you’re telling the HLS compiler exactly what it needs to do in terms of accessing your memory. So you can get very detailed, and that’s why hardware guys use it as a productivity tool. ‘I know exactly what I’m doing, so I know how to do the pragma, and I use it for verification instead of doing it in Verilog.’ They push a button and they’re done. But the software guys didn’t know how to fiddle with the pragmas, so they gave up. What we’re saying is that you now can enter the pragmas automatically, and that’s how you get a speed-up. So you have hardware expertise, but you need software help. For example, what are your branch patterns or access patterns? How do you access your array? Is it sequential or random? All of that defines what pragmas you can use, and that defines what interfaces you can use. But software guys don’t really think about that. They just want to write algorithms.
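For readers who haven’t seen these pragmas, here is a minimal HLS-style C++ sketch of the array-partition directive Odendahl mentions. The function, array size, and scale factor are hypothetical:

```c++
// Illustrative only: the pragmas tell the HLS compiler exactly how to
// carve up a memory so the unrolled loop can touch every element at once.
void scale3(const int in[64], int out[64]) {
#pragma HLS ARRAY_PARTITION variable=in  complete  // one register per element
#pragma HLS ARRAY_PARTITION variable=out complete  // instead of one BRAM port
scale_loop:
    for (int i = 0; i < 64; ++i) {
#pragma HLS UNROLL
        out[i] = in[i] * 3;
    }
}
```

Without the partition pragmas, a block RAM exposes only one or two ports, so the unrolled loop would stall on memory access. This is the kind of detail a hardware engineer knows to specify and a software engineer typically doesn’t think about.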

SE: So you’re creating a bridge between hardware and software, but don’t you still need expertise in both?

Odendahl: Exactly, and this is why I don’t think we’ll ever replace the hardware guys. Maybe you provide an algorithm, or you provide 10 algorithms because you don’t know how it will work on the hardware. But if you can enable the software guy to provide the top 2 of those at 80% of the performance you need, you can then work to get the last 20% or do the integration into the overall design. It’s a very different starting point than, ‘I generated an algorithm but you don’t know what it’s doing.’

SE: One of the big trends is to minimize the movement of data, so we’re seeing computing being done on the network interface card, or at the sensor inside a car, or even inside a hearing aid. Where does HLS fit into this, particularly in terms of optimizing performance and power?

Odendahl: If you look at a hearing aid, there’s a lot of noise when you’re in a public place, but not at home. The AI systems in cars are excellent at that. You want to be able to reconfigure that or use a different algorithm. With innovation happening faster, algorithms change faster, and you need more programmability. That equates to higher usage of FPGAs.

SE: What’s the perception of HLS in terms of experimenting with different AI/ML options, and how has that changed?

Odendahl: We had to validate whether it’s real or whether it’s been around for a long time and no one cares. Every conversation was that people wanted to use it but were scared, or they had used it and failed. So it’s somewhat of a niche, but it’s a good niche to be in. It’s a really hard problem to fix, and it seems to be growing a lot. There are a lot of people who say, ‘We’re not using it yet, but we really want to use it.’ There is no saturation.

SE: Can HLS provide some of the visibility of what generally is a black-box technology? If an algorithm goes awry, can HLS help shed light on that?

Odendahl: No, that has to come from the algorithm. HLS doesn’t make the hardware smarter or add a runtime component. It’s a compiler. It’s a productivity tool. Does it add any additional knowledge? No. If you want more knowledge and dynamic behavior, though, you can code that in C.
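As a trivial sketch of what ‘coding dynamic behavior in C’ can look like, here the synthesized block selects an algorithm variant at runtime from a mode input. Names and variants are invented for illustration:

```c++
// Hypothetical example: runtime-selectable behavior written in plain C++,
// which HLS then synthesizes as a mode input port on the hardware block.
int filter(int sample, int mode) {
    if (mode == 0)
        return sample >> 1;    // variant A: cheap halving filter
    return (3 * sample) >> 2;  // variant B: 0.75 weighting
}
```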

SE: Where are your next opportunities?

Odendahl: What we’re working on now is a new performance-testing platform for robotics and autonomous driving. We’ve always said that dynamic behavior is one of the big problems, but what is that really? If you think of a robotics stack, it’s several layers. The middle layer is middleware such as the Robot Operating System (ROS) and Adaptive AUTOSAR. The middleware is important for LiDAR and sensor fusion. You have all those different independent nodes, but they all work at the same time. In order to coordinate them, you need a network. You have the operating system below. It could be Linux or QNX or VxWorks. So you have an application layer, a middleware layer, and a system layer. So how does this behave? If you have LiDAR and sensor fusion, what happens if a certain packet doesn’t reach a node? What do you do? Was it sent to the wrong place, or were the queues full? Did it get lost, or was the load too high? For the past 20 years, no one has thought about that. In the past, you could stop and wait, but with a car you can’t do that, so the issue is now much bigger.
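The failure modes he lists (lost in transit, queues full, load too high) are exactly what a middleware monitor has to tell apart. Here is a hypothetical C++ sketch of that distinction; every name, limit, and message is invented, and this is not Silexica’s product code:

```c++
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <deque>

// Hypothetical LiDAR receive monitor: distinguish "lost in transit"
// (sequence-number gap) from "dropped locally" (receive queue full).
struct Packet {
    uint64_t seq;  // monotonically increasing sequence number
};

class LidarRxMonitor {
    std::deque<Packet> queue_;
    uint64_t last_seq_ = 0;
    static constexpr std::size_t kMaxQueue = 64;  // invented limit

public:
    bool on_packet(const Packet& p) {
        if (p.seq != last_seq_ + 1)  // gap means loss upstream of this node
            std::printf("gap: %llu packet(s) lost in transit\n",
                        static_cast<unsigned long long>(p.seq - last_seq_ - 1));
        last_seq_ = p.seq;
        if (queue_.size() >= kMaxQueue) {  // backpressure: load too high
            std::printf("drop: queue full, load too high\n");
            return false;  // dropped locally, not lost on the network
        }
        queue_.push_back(p);
        return true;
    }
};
```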

SE: And for robotics, that includes drones as well, right?

Odendahl: That’s correct. Right now you write a bunch of logs and try to correlate them. What we’re doing instead is looking at dynamic behavior and how that correlates with the logical architecture of your design. So the first thing you look at is whether you can just enter some pragmas to optimize the code. The next step is to correlate all of the events with each other. Then you really want to do root cause analysis. If a packet didn’t arrive, you need to know which log to look at. That’s very different from a debugger, where you aren’t sure where to start.
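A minimal sketch of that correlation idea, assuming a shared key (here a packet ID) that every layer stamps on its events; all names and the layering are invented for illustration:

```c++
#include <chrono>
#include <cstdint>
#include <vector>

// Instead of one log per layer, every layer emits into a single timeline
// keyed by packet ID, so a missing "consumed" event can be traced back
// through the middleware to its root cause.
enum class Layer { Application, Middleware, System };

struct Event {
    std::chrono::steady_clock::time_point when;
    Layer layer;
    uint64_t packet_id;  // shared key that ties the layers together
    const char* what;    // e.g. "sent", "enqueued", "consumed"
};

std::vector<Event> timeline;

void emit(Layer layer, uint64_t packet_id, const char* what) {
    timeline.push_back({std::chrono::steady_clock::now(), layer, packet_id, what});
}

int main() {
    emit(Layer::System, 42, "sent");
    emit(Layer::Middleware, 42, "enqueued");
    // Root-cause query: list every event for packet 42 in time order and
    // find the layer where the chain stops (here, never "consumed").
}
```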

SE: Does AI or machine learning enter into that?

Odendahl: Yes, because we’re doing data analytics. Some parts are going to be very specific, where you look at an outlier. ‘We found this, so we’d better look at that.’

SE: Basically, you’re comparing that to other patterns to see what has changed?

Odendahl: Yes. In the past, this was just math. But now we can bring in data analytics for embedded systems to give you this multi-level view.

SE: So AI, machine learning, plus high-level synthesis can look at patterns in useful data?

Odendahl: Right now there is a lot of noise. You can do printf debugging, but there’s no way you’re going to understand a dynamic system with that.

SE: This has applications that go way beyond where HLS has operated in the past.

Odendahl: Absolutely, and you can use that to go into the ASIC or FPGA market. If you can find the outlier, that gives you traceability, and problems you find earlier don’t turn into recalls. It’s a whole different game.

SE: And now you’re capable of predicting behavior and functionality?

Odendahl: Exactly. And the key problem isn’t just about x86 or a supercomputer in a car. The real test is going to be when you have AI, an ASIC, a GPU, a multi-core chip with a PHY, or an FPGA. In each of those you have a big heterogeneous software stack. Multiply that by heterogeneous hardware, and now you’re doing object recognition and people recognition and pathfinding. But what do you do if it doesn’t work? Who do you call? You have all of this isolation and all of these different teams, but what happens when you integrate it?

SE: And what happens when the logic becomes distributed around a system?

Odendahl: The goal is a large integration platform. This is like Google Earth, where you can move between the big picture and the fine detail, and you can use that for debugging. One of the key value propositions of HLS is that you don’t have to do everything in RTL, which comes pretty late in the design, after waiting for various tools. You can do everything at the C level. And because you can do faster simulation, you also can iterate faster. You can iterate at the algorithmic level. So now you already have a good idea of how everything is going to behave and how it’s going to fit in terms of area and performance. You still need to do the final verification of the hardware after place-and-route and everything else, but hopefully that’s the last piece, and not the core piece. You don’t want to have to do the core piece over and over again.
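Here is a minimal sketch of that C-level iteration loop, assuming an invented kernel and golden model (nothing here comes from Silexica’s tooling): the algorithm is validated in plain C++ in seconds, long before any RTL or place-and-route exists.

```c++
#include <cassert>

// Candidate function that would later go through HLS (invented example).
int kernel(int x) { return (3 * x) >> 2; }

// Bit-accurate golden reference for the same computation.
int reference(int x) { return (3 * x) / 4; }

int main() {
    for (int x = 0; x < 1000; ++x)
        assert(kernel(x) == reference(x));  // C simulation: seconds, not hours
    return 0;
}
```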

Source: semiengineering.com

