[video] Helm.ai applies ‘Deep Teaching’ to Level 2 to 4 autonomous vehicles

For robots and vehicles to become more autonomous, developers are looking for ways to build artificial intelligence that requires less data and less laborious annotation. Helm.ai Inc. last month announced “Deep Teaching,” which it described as a new methodology to train neural networks without human annotation, supervision, or simulation.

The Menlo Park, Calif.-based startup claimed that Deep Teaching can deliver computer vision performance faster and more accurately than current methods. Helm.ai added that it can train on vast volumes of data more efficiently, without needing large-scale fleets or numerous human annotators.

“Traditional AI approaches that rely upon manually annotated data are wholly unsuited to meet the needs of autonomous driving and other safety-critical systems that require human-level computer vision accuracy,” said Vlad Voroninski, CEO of Helm.ai. “Deep Teaching is a breakthrough in unsupervised learning that enables us to tap into the full power of deep neural networks by training on real sensor data without the burden of human annotation or simulation.”

“In the current market, annotation costs hundreds of dollars per image, and one vehicle can collect tens of millions of images per day,” he told The Robot Report. “Humans don’t just learn to drive through practice; we already understand some things from operating in the world, and we can interpret scenarios.”
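Taking Voroninski’s figures at face value, a rough back-of-the-envelope calculation shows why per-image annotation does not scale. The specific numbers below are illustrative assumptions drawn from the ranges he quotes, not figures from Helm.ai:

```python
# Illustrative annotation-cost estimate. Assumed figures (hypothetical,
# within the ranges quoted above): $200 per annotated image, and
# 10 million images collected per vehicle per day.
cost_per_image_usd = 200
images_per_vehicle_per_day = 10_000_000

# Cost to annotate a single vehicle's daily data collection.
daily_cost = cost_per_image_usd * images_per_vehicle_per_day
print(f"${daily_cost:,} per vehicle per day")
# → $2,000,000,000 per vehicle per day
```

Even if the per-image cost were two orders of magnitude lower, annotating a fleet’s daily output by hand would remain economically impractical, which is the gap Deep Teaching claims to close.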

Deep Teaching learns without prior data

In the first use case of Helm.ai’s Deep Teaching technology, the company trained a neural network to detect lanes on tens of millions of images from thousands of different dashcam videos from across the world, without any human annotation or simulation. The network was then able to handle corner cases well known to be difficult in the autonomous driving industry, such as rain, fog, glare, faded or missing lane markings, and varied illumination conditions. Helm.ai said it was able to use this neural network to surpass public computer vision benchmarks with minimal engineering effort and at a fraction of the cost and time required by traditional deep learning methods.

“We’ve developed the ability to train on raw sensor data without annotation or simulation,” Voroninski said. “By reducing the capital cost of learning from more images, we get more accurate results and more generalizable artificial intelligence.”

In addition, Helm.ai has built a full software stack, enabling a vehicle to steer autonomously on steep and curvy mountain roads using only one camera and one GPU, with no maps, no lidar, and no GPS. The system worked without prior training on data from these roads, the company said.

“The self-driving stack includes sensor data, a perception layer that interprets that data, an intention-prediction model that understands how agents might react in the future, a path-planning module, and a vehicle-control stack to implement decisions,” Voroninski explained. “The control part is more or less solved, but quite a lot of heavy lifting happens at the perception and intent-prediction steps.”
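The layered stack Voroninski describes can be sketched as a simple pipeline of stages. Everything below is a hypothetical illustration of that architecture, not Helm.ai’s implementation; the stage names and toy outputs are invented for clarity:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class DrivingStack:
    """Sketch of the layered stack described above:
    sensing -> perception -> intent prediction -> planning -> control."""
    stages: List[Callable[[Any], Any]] = field(default_factory=list)

    def add(self, stage: Callable[[Any], Any]) -> "DrivingStack":
        self.stages.append(stage)
        return self

    def run(self, sensor_frame: Any) -> Any:
        # Each stage consumes the previous stage's output.
        data = sensor_frame
        for stage in self.stages:
            data = stage(data)
        return data

# Toy stand-in stages; in a real system each would be a learned model
# or a planner, and perception/intent prediction carry most of the load.
stack = (
    DrivingStack()
    .add(lambda frame: {**frame, "objects": ["car", "pedestrian"]})  # perception
    .add(lambda scene: {**scene, "intents": ["crossing"]})           # intent prediction
    .add(lambda scene: {**scene, "path": "slow_and_yield"})          # path planning
    .add(lambda plan: ("steer", plan["path"]))                       # vehicle control
)

print(stack.run({"camera": "frame_0"}))
# → ('steer', 'slow_and_yield')
```

The design point the quote makes maps directly onto this shape: the final control stage is a thin, well-understood transformation, while the upstream perception and intent stages are where the hard learning problems live.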

“When we first entered this space, we examined approaches that other companies were taking,” said Voroninski. “Traditional AI was not enough. A lot of research and development has been needed to get to a reasonable point, but we had some unique advantages from merging our experience with applied mathematics and compressive sensing with our understanding of deep learning. At Helm, we have a small team of people with top skills in AI R&D focused on building a product.”

Since then, Helm.ai has applied Deep Teaching to semantic segmentation for dozens of object categories, monocular vision depth prediction, pedestrian intent modeling, lidar-vision fusion, and automation of HD mapping.

Benchmarks and awards

Helm.ai claimed that its Deep Teaching system has surpassed Tesla in performance benchmarks, noting that it has received recognition at Tech.AD Detroit.

“The metric of number of miles driven or how much fleet data is collected doesn’t indicate success,” said Voroninski. “Proving that the perception stack is able to make the right decisions is harder to convey. By training on videos from around the world and using adversarial models, we achieved the generalization to handle corner cases.”

“We wanted to put our system under the same constraints as a production system,” he said. “We didn’t want to overfit to a model, and since we can’t control where a vehicle is driven, we tried the system in entirely new scenarios.”

Safety and L2 to L4 vehicles

AI and machine vision applications such as Web searches or parts inspections are not as time- and safety-critical as autonomous vehicles, said Helm.ai. The company said that its approach to “economical training on huge datasets of images and other sensor data” will benefit the self-driving car industry.

“Helm.ai’s self-driving technologies are uniquely suited to deliver on the potential of autonomous driving,” said Quora CEO Adam D’Angelo. “I look forward to the advances the team will continue to make in the years to come and am excited to have invested in the company.”

At the same time, Helm.ai is focusing on advanced driver-assist systems (ADAS) rather than Level 5, or fully autonomous, vehicles. “We don’t expect a breakthrough in hardware modalities,” said Voroninski. “Being able to approach the human eye is great, but the bottleneck is on the inference side, in interpreting sensor data.”

Helm.ai’s demonstrations have used a single camera, but other sensors could be helpful on the path to autonomy, Voroninski acknowledged. “For example, radar gives more redundancy and robustness in rain, snow, or fog,” he said. “Lidar gets information on depth accuracy, but it can bounce off of dust clouds, which is not acceptable for safe vehicles.”

Other opportunities for Deep Teaching

In addition to autonomous vehicles, Deep Teaching could be useful in aviation, robotics, manufacturing, and retail, said Helm.ai.

“We didn’t know how generalized Deep Teaching could be, but as we developed the technology, we discovered it was quite general,” said Voroninski. “It doesn’t matter to us whether we’re turning on the neural network to classify objects or pedestrians for vehicles or delivery robots.”

“There are opportunities for Helm.ai in safety-critical systems that interact with the world and necessitate a high-level AI stack,” he said. “We are already working with several automotive and fleet manufacturers.”

Helm.ai raised $13 million in seed funding in March, before the COVID-19 pandemic significantly affected the U.S.

“The vast majority of what we do is software development, so we can be effective remotely,” Voroninski said. “We can test eventually on live vehicles. The situation has highlighted the need for automation, which will speed up. But by the time robotaxis actually launch at scale, hopefully COVID won’t be an issue.”

“Our value proposition to the ecosystem is stable — providing high-value autonomy software,” he said.

