(video) LAMBDA: Advancing Autonomous Reasoning with Nuro’s MLLM

In this video, we present LAMBDA, Nuro’s Multimodal Large Language Model (MLLM), and its integration into The Nuro Driver’s™ onboard autonomy stack. LAMBDA demonstrates strong reasoning capabilities and enables interactive user experiences within our autonomous vehicles.

Part 1: Retroactive Implementation (0:03)
We showcase LAMBDA’s reasoning skills in three distinct scenes, applied retroactively to recorded drives. LAMBDA navigates complex scenarios, demonstrating a solid understanding of both the environment and the vehicle’s decision-making processes.

Part 2: Real-Time Integration (1:17)
Experience LAMBDA running in real time inside one of our test fleet vehicles. Integrated into The Nuro Driver’s™ autonomy stack, LAMBDA lets users ask questions through an in-car interface and receive informative commentary on the observed world and on the autonomy system’s decision-making.

LAMBDA marks a notable step forward in the development of L4 driverless systems, highlighting the potential of Multimodal Large Language Models to enhance autonomous vehicle technology. This progress contributes to the ongoing effort to create safer, more efficient, and more user-friendly autonomous driving experiences.

Join us as we continue to advance AI autonomy, and explore career opportunities at Nuro.


