Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection. Wang, T., Hu, X., Liu, Z., & Fu, C. November, 2022. arXiv:2211.13067 [cs]
LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors. Yet, small, distant, and incomplete objects with sparse or few points are often hard to detect. We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space. Specifically, we first train a dense point 3D detector (DDet) with a dense point cloud as input and design a sparse point 3D detector (SDet) with a regular point cloud as input. Importantly, we formulate the lightweight plug-in S2D module and the point cloud reconstruction module in SDet to densify 3D features and train SDet to produce 3D features, following the dense 3D features in DDet. So, in inference, SDet can simulate dense 3D features from regular (sparse) point cloud inputs without requiring dense inputs. We evaluate our method on the large-scale Waymo Open Dataset and the Waymo Domain Adaptation Dataset, showing its high performance and efficiency over the state of the arts.
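The core idea sketched in the abstract, training the sparse-input detector (SDet) to reproduce the latent 3D features of a dense-input teacher (DDet) through a lightweight S2D module, can be illustrated with a minimal, hypothetical PyTorch sketch. The module structure, mimic loss, and tensor shapes below are illustrative assumptions and not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class S2DModule(nn.Module):
    """Hypothetical lightweight plug-in that densifies sparse BEV features."""
    def __init__(self, channels: int = 128):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, sparse_feat: torch.Tensor) -> torch.Tensor:
        # Residual refinement: predict the feature content missing from sparse inputs.
        return sparse_feat + self.refine(sparse_feat)

def densification_loss(sdet_feat: torch.Tensor, ddet_feat: torch.Tensor) -> torch.Tensor:
    # Pull SDet's densified features toward the frozen dense-input teacher's features.
    return F.mse_loss(sdet_feat, ddet_feat.detach())

# Toy training step with random stand-ins for the two backbones' BEV feature maps.
s2d = S2DModule(channels=128)
optimizer = torch.optim.Adam(s2d.parameters(), lr=1e-3)

sparse_feat = torch.randn(2, 128, 64, 64)  # from SDet backbone (regular point cloud)
dense_feat = torch.randn(2, 128, 64, 64)   # from pre-trained DDet backbone (dense point cloud)

optimizer.zero_grad()
loss = densification_loss(s2d(sparse_feat), dense_feat)
loss.backward()
optimizer.step()
```

At inference time only the SDet branch (backbone plus S2D module) would run, so no dense point cloud is required once the feature-mimicking training is done.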
@misc{wang_sparse2dense_2022,
	title = {{Sparse2Dense}: {Learning} to {Densify} {3D} {Features} for {3D} {Object} {Detection}},
	shorttitle = {{Sparse2Dense}},
	url = {http://arxiv.org/abs/2211.13067},
	abstract = {LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors. Yet, small, distant, and incomplete objects with sparse or few points are often hard to detect. We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space. Specifically, we first train a dense point 3D detector (DDet) with a dense point cloud as input and design a sparse point 3D detector (SDet) with a regular point cloud as input. Importantly, we formulate the lightweight plug-in S2D module and the point cloud reconstruction module in SDet to densify 3D features and train SDet to produce 3D features, following the dense 3D features in DDet. So, in inference, SDet can simulate dense 3D features from regular (sparse) point cloud inputs without requiring dense inputs. We evaluate our method on the large-scale Waymo Open Dataset and the Waymo Domain Adaptation Dataset, showing its high performance and efficiency over the state of the arts.},
	language = {en},
	urldate = {2023-08-07},
	publisher = {arXiv},
	author = {Wang, Tianyu and Hu, Xiaowei and Liu, Zhengzhe and Fu, Chi-Wing},
	month = nov,
	year = {2022},
	note = {arXiv:2211.13067 [cs]},
	keywords = {Computer Science - Computer Vision and Pattern Recognition},
}
