Dense Depth Posterior (DDP) From Single Image and Sparse Range. Yang, Y., Wong, A., & Soatto, S. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3348–3357, June 2019. ISSN: 2575-7075. doi: 10.1109/CVPR.2019.00347.

Abstract: We present a deep learning system to infer the posterior distribution of a dense depth map associated with an image, by exploiting sparse range measurements, for instance from a lidar. While the lidar may provide a depth value for a small percentage of the pixels, we exploit regularities reflected in the training set to complete the map so as to have a probability over depth for each pixel in the image. We exploit a Conditional Prior Network, that allows associating a probability to each depth value given an image, and combine it with a likelihood term that uses the sparse measurements. Optionally we can also exploit the availability of stereo during training, but in any case only require a single image and a sparse point cloud at run-time. We test our approach on both unsupervised and supervised depth completion using the KITTI benchmark, and improve the state-of-the-art in both.
@inproceedings{yang_dense_2019,
title = {Dense {Depth} {Posterior} ({DDP}) {From} {Single} {Image} and {Sparse} {Range}},
doi = {10.1109/CVPR.2019.00347},
abstract = {We present a deep learning system to infer the posterior distribution of a dense depth map associated with an image, by exploiting sparse range measurements, for instance from a lidar. While the lidar may provide a depth value for a small percentage of the pixels, we exploit regularities reflected in the training set to complete the map so as to have a probability over depth for each pixel in the image. We exploit a Conditional Prior Network, that allows associating a probability to each depth value given an image, and combine it with a likelihood term that uses the sparse measurements. Optionally we can also exploit the availability of stereo during training, but in any case only require a single image and a sparse point cloud at run-time. We test our approach on both unsupervised and supervised depth completion using the KITTI benchmark, and improve the state-of-the-art in both.},
language = {en},
booktitle = {2019 {IEEE}/{CVF} {Conference} on {Computer} {Vision} and {Pattern} {Recognition} ({CVPR})},
author = {Yang, Yanchao and Wong, Alex and Soatto, Stefano},
month = jun,
year = {2019},
note = {ISSN: 2575-7075},
keywords = {\#CVPR{\textgreater}19, \#PointCloud, \#Sparse, /unread, 3D from Multiview and Sensors, 3D from Single Image, Benchmark testing, Codes, Computer vision, Deep learning, Laser radar, Point cloud compression, Robotics + Driving, Scene Analysis and Understanding, Training, ⭐⭐⭐⭐⭐},
pages = {3348--3357},
}