
(video) Are we hungry of 3D Lidar Data for semantic segmentation? SemanticPOSS — Biao Gao (Peking University)

2nd Workshop on 3D-Deep Learning for Autonomous Driving at IV 2020, Las Vegas

Speaker: Biao Gao (Peking University)
Title: Are we hungry of 3D Lidar Data for semantic segmentation? A new dataset SemanticPOSS and the research at PKU-POSS

Abstract:

This talk will introduce the recently published SemanticPOSS dataset and give an overview of the research on 3D LiDAR semantic scene understanding at the PKU Intelligent Vehicle Group (POSS, http://www.poss.pku.edu.cn).

Nowadays, research on 3D LiDAR semantic scene understanding mostly faces the challenge of being “data hungry”, especially for deep-network-based models. We will present the results of our investigation into this “data hungry” situation in the domain from different viewpoints.

The “data hungry” challenge pushed us to make our new dataset out of the ordinary: SemanticPOSS, a point-level annotated point cloud dataset collected at Peking University. Its main feature is the large number of dynamic instances it contains. For example, it averages up to 8.29 pedestrian instances per frame, which is more than 10 times denser than KITTI and SemanticKITTI.

These rich dynamic instances provide a more challenging and diverse environment for autonomous driving systems, and fill a gap in crowded dynamic scenes among public datasets.
We will also introduce our work on 3D LiDAR semantic segmentation aimed at solving the “data hungry” problem. Concretely, weakly and semi-supervised learning algorithms applied to different scenarios will be presented.

