2-Step Sparse-View CT Reconstruction with a Domain-Specific Perceptual Network. Wei, H., Schiffers, F., Würfl, T., Shen, D., Kim, D., Katsaggelos, A. K., & Cossairt, O. arXiv preprint arXiv:2012.04743, December 2020.
Computed tomography is widely used to examine internal structures in a non-destructive manner. To obtain high-quality reconstructions, one typically has to acquire a densely sampled trajectory to avoid angular undersampling. However, many scenarios require a sparse-view measurement leading to streak-artifacts if unaccounted for. Current methods do not make full use of the domain-specific information, and hence fail to provide reliable reconstructions for highly undersampled data. We present a novel framework for sparse-view tomography by decoupling the reconstruction into two steps: First, we overcome its ill-posedness using a super-resolution network, SIN, trained on the sparse projections. The intermediate result allows for a closed-form tomographic reconstruction with preserved details and highly reduced streak-artifacts. Second, a refinement network, PRN, trained on the reconstructions reduces any remaining artifacts. We further propose a light-weight variant of the perceptual-loss that enhances domain-specific information, boosting restoration accuracy. Our experiments demonstrate an improvement over current solutions by 4 dB.
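The abstract describes a two-step pipeline: angular super-resolution of the sparse sinogram (SIN), a closed-form reconstruction, and a learned refinement of the resulting image (PRN). The sketch below is a minimal illustration of that decoupling, not the paper's implementation: the `SinogramSRNet` and `RefineNet` classes, their architectures, and the use of scikit-image's `iradon` for the closed-form step are all assumptions made for clarity; the paper's perceptual-loss variant is not shown.

```python
# Hedged sketch of the 2-step sparse-view pipeline described in the abstract.
# Assumes PyTorch and scikit-image; networks are placeholders, not the paper's SIN/PRN.
import numpy as np
import torch
import torch.nn as nn
from skimage.transform import iradon


class SinogramSRNet(nn.Module):
    """Placeholder for SIN: upsamples a sparse-view sinogram along the angular axis."""
    def __init__(self, upscale=8):
        super().__init__()
        self.upscale = upscale
        self.conv = nn.Conv2d(1, 1, kernel_size=3, padding=1)

    def forward(self, sino):                         # sino: (B, 1, n_views, n_bins)
        dense = nn.functional.interpolate(
            sino, scale_factor=(self.upscale, 1),
            mode="bilinear", align_corners=False)
        return dense + self.conv(dense)              # learned residual on top of interpolation


class RefineNet(nn.Module):
    """Placeholder for PRN: removes residual artifacts from the reconstruction."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, img):
        return img + self.body(img)                  # residual refinement


def reconstruct(sparse_sino, sin_net, prn_net, dense_angles):
    """sparse_sino: (n_views, n_bins) array; dense_angles: angles (deg) of the
    super-resolved sinogram, i.e. len(dense_angles) == n_views * upscale."""
    with torch.no_grad():
        x = torch.from_numpy(sparse_sino).float()[None, None]
        dense_sino = sin_net(x)[0, 0].numpy()        # step 1: angular super-resolution
    # step 2: closed-form reconstruction (FBP); iradon expects shape (n_bins, n_angles)
    fbp = iradon(dense_sino.T, theta=dense_angles, filter_name="ramp")
    with torch.no_grad():
        y = torch.from_numpy(fbp).float()[None, None]
        refined = prn_net(y)[0, 0].numpy()           # step 3: learned artifact refinement
    return refined
```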
@article{Haoyu2020,
abstract = {Computed tomography is widely used to examine internal structures in a non-destructive manner. To obtain high-quality reconstructions, one typically has to acquire a densely sampled trajectory to avoid angular undersampling. However, many scenarios require a sparse-view measurement leading to streak-artifacts if unaccounted for. Current methods do not make full use of the domain-specific information, and hence fail to provide reliable reconstructions for highly undersampled data. We present a novel framework for sparse-view tomography by decoupling the reconstruction into two steps: First, we overcome its ill-posedness using a super-resolution network, SIN, trained on the sparse projections. The intermediate result allows for a closed-form tomographic reconstruction with preserved details and highly reduced streak-artifacts. Second, a refinement network, PRN, trained on the reconstructions reduces any remaining artifacts. We further propose a light-weight variant of the perceptual-loss that enhances domain-specific information, boosting restoration accuracy. Our experiments demonstrate an improvement over current solutions by 4 dB.},
archivePrefix = {arXiv},
arxivId = {2012.04743},
author = {Wei, Haoyu and Schiffers, Florian and W{\"{u}}rfl, Tobias and Shen, Daming and Kim, Daniel and Katsaggelos, Aggelos K. and Cossairt, Oliver},
eprint = {2012.04743},
journal = {arXiv preprint arXiv:2012.04743},
month = dec,
title = {{2-Step Sparse-View CT Reconstruction with a Domain-Specific Perceptual Network}},
url = {http://arxiv.org/abs/2012.04743},
year = {2020}
}
