BlendedMVS: A Large-scale Dataset for Generalized Multi-view Stereo Networks. Yao, Y., Luo, Z., Li, S., Zhang, J., Ren, Y., Zhou, L., Fang, T., & Quan, L. 2019. arXiv:1911.10127. Accepted to CVPR 2020.
Abstract: While deep learning has recently achieved great success on multi-view stereo (MVS), limited training data makes the trained model hard to be generalized to unseen scenarios. Compared with other computer vision tasks, it is rather difficult to collect a large-scale MVS dataset as it requires expensive active scanners and labor-intensive process to obtain ground truth 3D structures. In this paper, we introduce BlendedMVS, a novel large-scale dataset, to provide sufficient training ground truth for learning-based MVS. To create the dataset, we apply a 3D reconstruction pipeline to recover high-quality textured meshes from images of well-selected scenes. Then, we render these mesh models to color images and depth maps. To introduce the ambient lighting information during training, the rendered color images are further blended with the input images to generate the training input. Our dataset contains over 17k high-resolution images covering a variety of scenes, including cities, architectures, sculptures and small objects. Extensive experiments demonstrate that BlendedMVS endows the trained model with significantly better generalization ability compared with other MVS datasets. The dataset and pretrained models are available at https://github.com/YoYo000/BlendedMVS.
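The abstract does not give the exact blending formulation, but the stated goal — keep the rendered image's detail (which is consistent with the rendered ground-truth depth) while borrowing the captured image's ambient lighting — suggests a frequency-domain mix. The sketch below is one plausible, hypothetical scheme: the function names `low_pass`/`blend` and the box-filter size `k` are illustrative choices, not the paper's implementation.

```python
import numpy as np

def low_pass(img, k=9):
    """Simple per-channel box blur as a stand-in low-pass filter."""
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=np.float64)
    h, w = img.shape[:2]
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)

def blend(rendered, captured, k=9):
    """Hypothetical blend: high frequencies (detail, aligned with the
    rendered depth) from the rendered image, low frequencies (ambient
    lighting) from the captured input image."""
    rendered = rendered.astype(np.float64)
    captured = captured.astype(np.float64)
    high = rendered - low_pass(rendered, k)   # rendered detail
    ambient = low_pass(captured, k)           # captured lighting
    return np.clip(high + ambient, 0, 255)
```

With two constant-colored images, the result simply takes the captured image's level, since a constant rendered image has no high-frequency content; on real photographs the rendered geometry's edges survive while the captured exposure and lighting dominate the smooth regions.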
@misc{yao2019blendedmvs,
abstract = {While deep learning has recently achieved great success on multi-view stereo
(MVS), limited training data makes the trained model hard to be generalized to
unseen scenarios. Compared with other computer vision tasks, it is rather
difficult to collect a large-scale MVS dataset as it requires expensive active
scanners and labor-intensive process to obtain ground truth 3D structures. In
this paper, we introduce BlendedMVS, a novel large-scale dataset, to provide
sufficient training ground truth for learning-based MVS. To create the dataset,
we apply a 3D reconstruction pipeline to recover high-quality textured meshes
from images of well-selected scenes. Then, we render these mesh models to color
images and depth maps. To introduce the ambient lighting information during
training, the rendered color images are further blended with the input images
to generate the training input. Our dataset contains over 17k high-resolution
images covering a variety of scenes, including cities, architectures,
sculptures and small objects. Extensive experiments demonstrate that BlendedMVS
endows the trained model with significantly better generalization ability
compared with other MVS datasets. The dataset and pretrained models are
available at \url{https://github.com/YoYo000/BlendedMVS}.},
added-at = {2021-11-24T14:02:13.000+0100},
author = {Yao, Yao and Luo, Zixin and Li, Shiwei and Zhang, Jingyang and Ren, Yufan and Zhou, Lei and Fang, Tian and Quan, Long},
biburl = {https://www.bibsonomy.org/bibtex/2ba96cdf44a6fc9da0ed2497f4d846011/shuncheng.wu},
interhash = {2670198d8a35912c8ac2d6d40ee78838},
intrahash = {ba96cdf44a6fc9da0ed2497f4d846011},
keywords = {dataset deeplearning mvs 3d_reconstruction neural_reconstruction},
note = {arXiv:1911.10127. Accepted to CVPR 2020},
timestamp = {2021-11-24T14:02:13.000+0100},
title = {BlendedMVS: A Large-scale Dataset for Generalized Multi-view Stereo
Networks},
url = {http://arxiv.org/abs/1911.10127},
year = 2019
}