2nd Workshop 3D-Deep Learning for Autonomous Driving, IV 2020 Las Vegas
Title : Designing Cameras to Detect the “Invisible”: Computational Imaging for Adverse Conditions
Speaker : Felix Heide, CTO at Algolux | Incoming Professor at Princeton University
Abstract : Imaging has become an essential part of how we communicate with each other, how autonomous agents sense the world and act independently, and how we research chemical reactions and biological processes.
Today’s imaging and computer vision systems, however, often fail for the “edge cases”, for example in low light, fog, snow, or highly dynamic scenes. These edge cases are a result of ambiguity present in the scene or signal itself, and ambiguity introduced by imperfect capture systems. In this talk, I will present several examples of computational imaging methods that resolve this ambiguity by jointly designing sensing and computation for domain-specific applications. Instead of relying on intermediate image representations, which are often optimized for human viewing, these cameras are designed end-to-end for a domain-specific task.
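The end-to-end idea above can be illustrated with a minimal sketch (an assumption for illustration, not the speaker's actual pipeline): a differentiable toy sensor model whose gain parameter is optimized jointly with a downstream task head against the task loss itself, rather than against an intermediate image-quality metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "scenes": scalar radiance values; task label is 1 if radiance > 0.5.
scenes = rng.uniform(0.0, 1.0, size=256)
labels = (scenes > 0.5).astype(float)

def capture(scene, gain, noise):
    # Hypothetical differentiable sensor model: gain, additive read
    # noise, and a soft saturation nonlinearity (tanh).
    return np.tanh(gain * scene + noise)

# Jointly optimize the sensor parameter (gain) and the task parameters
# (w, b of a logistic classifier) by gradient descent on the task loss.
gain, w, b = 1.0, 0.0, 0.0
lr = 0.5
for _ in range(500):
    noise = 0.05 * rng.standard_normal(scenes.shape)
    m = np.tanh(gain * scenes + noise)          # captured measurement
    p = 1.0 / (1.0 + np.exp(-(w * m + b)))      # task head prediction
    # Gradients of mean cross-entropy w.r.t. all parameters (chain rule).
    dlogit = (p - labels) / len(scenes)
    w -= lr * np.sum(dlogit * m)
    b -= lr * np.sum(dlogit)
    dpre = dlogit * w * (1.0 - m ** 2)          # back through tanh saturation
    gain -= lr * np.sum(dpre * scenes)          # sensor updated by task loss

# Evaluate on noiseless captures with the learned sensor + head.
m_eval = capture(scenes, gain, 0.0)
acc = np.mean(((w * m_eval + b) > 0.0) == (labels > 0.5))
print(f"learned gain={gain:.2f}, task accuracy={acc:.2f}")
```

The key design choice this sketch mirrors is that the gradient of the task loss flows through the capture model into the sensor parameter, so "image quality" is defined by downstream task performance alone.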
In particular, I will show how to co-design optics, sensors, and the ISP for automotive HDR imaging, detection, and tracking (beating Tesla’s latest OTA Model S Autopilot); how to optimize thin freeform lenses for wide field-of-view applications; and how to extract accurate dense depth from three gated images (beating scanning lidar such as Velodyne’s HDL64).
Finally, I will present computational imaging systems that extract domain-specific information from faint measurement noise using domain-specific priors, allowing us to use conventional intensity cameras or conventional Doppler radar to image “hidden” objects outside the direct line of sight at ranges of more than 20 m.