DTAM: Dense Tracking and Mapping in Real-Time. Newcombe, R. A., Lovegrove, S. J., & Davison, A. J. In IEEE International Conference on Computer Vision, ICCV 2011, Barcelona, Spain, November 6-13, 2011, pages 2320--2327, 2011.
DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.
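For orientation, the spatially regularised energy mentioned above takes roughly the following form in the paper's notation (a sketch; details such as the exact weighting may differ: \xi is the keyframe inverse-depth map, C the photometric cost volume averaged over many frames, g a per-pixel weight computed from image gradients, the \epsilon-subscripted norm the Huber norm, and \lambda the data-term weight):

E_\xi = \int_\Omega \big( g(\mathbf{u}) \, \lVert \nabla \xi(\mathbf{u}) \rVert_\epsilon + \lambda \, C(\mathbf{u}, \xi(\mathbf{u})) \big) \, d\mathbf{u}

The Huber-regularised gradient term favours smooth, edge-preserving depth, while the data term scores each candidate inverse depth by photometric consistency across the overlapping video frames.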
@inproceedings{newcombe2011dtam,
 title = {DTAM: Dense Tracking and Mapping in Real-Time},
 year = {2011},
 pages = {2320--2327},
 abstract = {DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame-rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.},
 author = {Newcombe, Richard A and Lovegrove, Steven J and Davison, Andrew J},
 booktitle = {IEEE International Conference on Computer Vision, ICCV 2011, Barcelona, Spain, November 6-13, 2011}
}
