About the Talk: While deep reinforcement learning applied to robotic manipulation has seen a number of recent successes in constrained environments, generalist robots that can operate in diverse, real-world settings have remained out of reach. Critically, robot learning algorithms have yet to learn from a sufficient breadth of data to enable broad generalization across tasks and environments. In this talk, I'll discuss the paradigm of offline learning for robotics as a path towards generalist robots, and how we might supervise this offline learning process in a scalable way using crowdsourced language and videos of humans. Specifically, I'll cover three recent papers that learn reward functions and visual representations for robot learning from language annotations of pre-collected robot datasets and from human video datasets that exist on the web.
About the Speaker: Suraj Nair is a PhD candidate at Stanford University, where he is advised by Professors Chelsea Finn and Silvio Savarese. His research interests are in developing learning algorithms that enable robots to generalize across novel tasks, objects, and environments. To this end, his work has included systems and methods for collecting real robot datasets autonomously, offline reinforcement learning algorithms for learning multi-task agents from pre-collected data, and methods for incorporating scalable supervision from language and video into the robot learning process. Prior to his PhD, Suraj completed his Bachelor's in Computer Science at the California Institute of Technology, and he has spent time at Google Brain Robotics and Facebook AI Research.