Real-time Rendering of Virtual-viewpoint Images from Multi-view Camera Images Using Projection Mapping. TAKENAKA, F., FUJIMOTO, T., HALABI, O., & CHIBA, N. The Journal of the Society for Art and Science, 10(4):263–275, 2011.
This paper proposes a method, based on volume intersection with projection mapping, for reconstructing and rendering the 3D shape of an object in real time from images captured by multiple video cameras. The proposed method imitates a voxel-based volume intersection method on a pseudo-voxel space, constructed by placing three orthogonal sets of parallel, equally spaced planes in 3D space. At each frame, the image from each camera is first processed into a projection texture whose alpha values differ between foreground (object) pixels and background pixels. The projection texture of every camera is then projected onto the parallel planes using the projective texture-mapping functions of a graphics library. Finally, the visual hull of the object is efficiently reconstructed as the intersection of all cameras' projection regions in 3D space by the alpha operations of the graphics library, and is rendered using color values. The proposed method is more efficient than a conventional voxel-based volume intersection method and can generate virtual-viewpoint images of a moving object in real time.
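For reference, the sketch below illustrates the classic voxel-based volume intersection (visual hull) that the abstract says the paper's projection-mapping method accelerates. It is a minimal CPU illustration, not the authors' GPU implementation: the 3x4 projection matrices, binary silhouette masks, grid bounds, and the function name are all assumed for the example.

import numpy as np

def visual_hull(masks, proj_mats, grid_min, grid_max, resolution):
    """Classic voxel-based volume intersection (visual hull).

    masks        : list of HxW boolean foreground silhouettes, one per camera
    proj_mats    : list of 3x4 world-to-pixel projection matrices
    grid_min/max : length-3 corners of the axis-aligned reconstruction volume
    resolution   : number of voxels along each axis

    Returns a boolean (resolution, resolution, resolution) occupancy grid.
    """
    # Voxel centres on a regular grid, in homogeneous coordinates (N, 4).
    axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    pts = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)

    occupied = np.ones(pts.shape[0], dtype=bool)
    for mask, P in zip(masks, proj_mats):
        h, w = mask.shape
        # Project all voxel centres into this camera's image plane.
        uvw = pts @ P.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        fg = np.zeros(pts.shape[0], dtype=bool)
        fg[inside] = mask[v[inside], u[inside]]
        # A voxel survives only if every camera sees it as foreground.
        occupied &= fg

    return occupied.reshape(resolution, resolution, resolution)

In the paper's approach, this per-voxel test is replaced by projecting each camera's alpha-coded texture onto three orthogonal stacks of planes and intersecting the projections with alpha operations on the GPU, which avoids iterating over voxels on the CPU.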
@article{TAKENAKAFumioHALABIOsamaCHIBANorishige2011,
abstract = {This paper proposes a method, which is based on a volume intersection method by utilizing projection mappings, to reconstruct and render the 3D shape of an object in real time from images captured by multiple video cameras. The proposed method imitates a voxel-based volume intersection method on a pseudo-voxel space, which is constructed by placing orthogonally three sets of parallel equally-spaced planes in the 3D space. In each frame time, first, the frame image of each camera is processed to generate a projection texture having different alpha values on foreground (object) pixels and background ones. Then, the projection texture of every camera is projected onto the parallel planes by projection mapping functions of a graphic library. Finally, the visual hull of the object is efficiently reconstructed as the intersection of all cameras' projection regions in the 3D space by alpha operations of the graphic library, and is rendered using color values. The proposed method is more efficient than a conventional voxel-based volume intersection method, and is able to generate virtual viewpoint images with a moving object in real time.},
author = {TAKENAKA, Fumio and FUJIMOTO, Tadahiro and HALABI, Osama and CHIBA, Norishige},
file = {:C\:/Users/ohala/Dropbox/_osama/Documents/MyData/Work/MendeleyPapers//TAKENAKA et al. - 2011 - Real-time Rendering of Virtual-viewpoint Images from Multi-view Camera Images Using Projection Mapping.pdf:pdf},
journal = {The Journal of the Society for Art and Science},
keywords = {My Journal},
mendeley-tags = {My Journal},
number = {4},
pages = {263--275},
title = {{Real-time Rendering of Virtual-viewpoint Images from Multi-view Camera Images Using Projection Mapping}},
volume = {10},
year = {2011}
}
