Image Super-Resolution Using Deep Convolutional Networks. Dong, C., Loy, C. C., He, K., & Tang, X. arXiv:1501.00092 [cs], December, 2014.
We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.
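A minimal sketch (in PyTorch, not the authors' released code) of the end-to-end mapping the abstract describes: a small CNN that takes a low-resolution image already upsampled to the target size and outputs the high-resolution estimate. The 9-1-5 filter sizes and 64/32 channel counts follow the commonly cited SRCNN configuration and should be read as assumptions, not a faithful reproduction.

import torch
import torch.nn as nn

class SRCNNSketch(nn.Module):
    # Three-layer CNN: patch extraction -> non-linear mapping -> reconstruction.
    def __init__(self, channels: int = 1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, kernel_size=9, padding=4),  # patch extraction and representation
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, kernel_size=1),                   # non-linear mapping
            nn.ReLU(inplace=True),
            nn.Conv2d(32, channels, kernel_size=5, padding=2),  # reconstruction
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: bicubic-upscaled low-resolution image, shape (N, channels, H, W)
        return self.body(x)

# Example usage; set channels=3 to process the three color channels jointly,
# as in the extension mentioned at the end of the abstract.
sr = SRCNNSketch(channels=1)
y = sr(torch.rand(1, 1, 33, 33))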
@article{dong_image_2014,
  title = {Image {Super}-{Resolution} {Using} {Deep} {Convolutional} {Networks}},
  url = {http://arxiv.org/abs/1501.00092},
  abstract = {We propose a deep learning method for single image super-resolution (SR). Our method directly learns an end-to-end mapping between the low/high-resolution images. The mapping is represented as a deep convolutional neural network (CNN) that takes the low-resolution image as the input and outputs the high-resolution one. We further show that traditional sparse-coding-based SR methods can also be viewed as a deep convolutional network. But unlike traditional methods that handle each component separately, our method jointly optimizes all layers. Our deep CNN has a lightweight structure, yet demonstrates state-of-the-art restoration quality, and achieves fast speed for practical on-line usage. We explore different network structures and parameter settings to achieve trade-offs between performance and speed. Moreover, we extend our network to cope with three color channels simultaneously, and show better overall reconstruction quality.},
  urldate = {2015-01-14},
  journal = {arXiv:1501.00092 [cs]},
  author = {Dong, Chao and Loy, Chen Change and He, Kaiming and Tang, Xiaoou},
  month = {December},
  year = {2014},
  note = {arXiv: 1501.00092},
  keywords = {deep learning, reading}
}
