Spectral edge: gradient-preserving spectral mapping for image fusion. Connah, D., Drew, M. S., & Finlayson, G. D. Journal of the Optical Society of America A, 32(12):2384–2396, December, 2015.
This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.
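For intuition about the structure-tensor mapping described in the abstract, the following is a minimal Python sketch of the general idea, not the authors' implementation. It assumes per-pixel Di Zenzo structure tensors, uses an illustrative closed-form symmetric positive-definite map A solving A Z_C A = Z_H (one standard choice; whether it matches the paper's exact theorem is not verified here), keeps an explicit pixel loop for clarity, and omits the final reintegration of the modified gradient field (e.g., a Poisson solve). The names spectral_edge_gradients and EPS are illustrative, not from the paper.

```python
# Sketch only: map the 2x2 structure tensor of an N-channel gradient field
# onto an RGB gradient field, pixel by pixel. Not the authors' code.
import numpy as np

EPS = 1e-8  # assumed regularization for near-flat regions


def _sqrtm_2x2_spd(M):
    """Matrix square root of a symmetric positive semi-definite 2x2 matrix."""
    w, V = np.linalg.eigh(M)
    w = np.clip(w, 0.0, None)
    return (V * np.sqrt(w)) @ V.T


def channel_gradients(img):
    """Stack per-channel x/y gradients: (H, W, C) -> (H, W, C, 2)."""
    gy, gx = np.gradient(img.astype(float), axis=(0, 1))
    return np.stack([gx, gy], axis=-1)


def spectral_edge_gradients(high_dim, rgb):
    """Return RGB gradients whose structure tensor matches the N-D input's.

    high_dim : (H, W, N) input stack (e.g. hyperspectral, RGB + NIR)
    rgb      : (H, W, 3) initial RGB rendering providing color constraints
    returns  : (H, W, 3, 2) modified RGB gradient field
    """
    J_H = channel_gradients(high_dim)  # (H, W, N, 2)
    J_C = channel_gradients(rgb)       # (H, W, 3, 2)
    H, W = rgb.shape[:2]
    J_out = np.zeros_like(J_C)
    for i in range(H):
        for j in range(W):
            Z_H = J_H[i, j].T @ J_H[i, j]              # high-D structure tensor
            Z_C = J_C[i, j].T @ J_C[i, j] + EPS * np.eye(2)
            S = _sqrtm_2x2_spd(Z_C)
            S_inv = np.linalg.inv(S)
            # SPD solution A of A Z_C A = Z_H, so (J_C A)^T (J_C A) = Z_H
            A = S_inv @ _sqrtm_2x2_spd(S @ Z_H @ S) @ S_inv
            J_out[i, j] = J_C[i, j] @ A                # adjusted RGB gradients
    return J_out
```

In a full pipeline, J_out would then be reintegrated into a displayable RGB image, which is the step the paper handles with its constrained, closed-form formulation; the sketch stops at the gradient field.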
@article{uea56308,
          volume = {32},
          number = {12},
           month = {December},
          author = {David Connah and Mark S. Drew and Graham D. Finlayson},
           title = {Spectral edge: gradient-preserving spectral mapping for image fusion},
            year = {2015},
         journal = {Journal of the Optical Society of America A},
             doi = {10.1364/JOSAA.32.002384},
           pages = {2384--2396},
        keywords = {image processing,digital image processing,image analysis,image enhancement,infrared imaging,multispectral imaging,hyperspectral imaging},
             url = {https://ueaeprints.uea.ac.uk/id/eprint/56308/},
         abstract = {This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.}
}