Multi-modality registration via multi-scale textural and spectral embedding representations. Li, L., Rusu, M., Viswanath, S., Penzias, G., Pahwa, S., Gollamudi, J., & Madabhushi, A. In Progress in Biomedical Optics and Imaging - Proceedings of SPIE, volume 9784, 2016.
Intensity-based similarity measures assume that the original signal intensity of different modality images can provide statistically consistent information regarding the two modalities to be co-registered. In multi-modal registration problems, however, intensity-based similarity measures are often inadequate to identify an optimal transformation. Texture features can improve the performance of multi-modal co-registration by providing appearance representations of the two images to be co-registered that are more similar than the signal intensity representations. Furthermore, texture features extracted at different length scales (neighborhood sizes) can reveal similar underlying structural attributes between the images to be co-registered, similarities that may not be discernible on the signal intensity representation alone. However, one limitation of using texture features is that a number of them may be redundant and mutually dependent, and hence there is a need to identify non-redundant representations. Additionally, it is not clear which features at which specific scales reveal similar attributes across the images to be co-registered. To address this problem, we introduce a novel approach for multi-modal co-registration that employs new multi-scale image representations. Our approach comprises four distinct steps: (1) texture feature extraction at each length scale within both the target and template images, (2) independent component analysis (ICA) at each texture feature length scale, (3) spectral embedding (SE) of the ICA components (ICs) obtained for the texture features at each length scale, and finally (4) identifying and combining the optimal length scales at which to perform the co-registration. To combine and co-register across different length scales, α-mutual information (α-MI) was applied in the high-dimensional space of spectral embedding vectors to facilitate co-registration.
To validate our multi-scale co-registration approach, we aligned 45 pairs of prostate MRI and histology images, with the objective of mapping the extent of prostate cancer annotated by a pathologist on the histology onto the pre-operative MRI. The registration results showed higher correlation between template and target images, with an average correlation ratio of 0.927 compared to 0.914 for intensity-based registration. Additionally, an improvement in the Dice similarity coefficient (DSC) of 13.6% was observed for the multi-scale registration compared to intensity-based registration, and a 1.26% DSC improvement compared to registration involving the best individual scale.
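The four-step pipeline in the abstract can be sketched in Python. This is a minimal illustration under stated assumptions: the texture descriptors, function names, and parameters below are illustrative, and the histogram plug-in α-MI estimator is a simplified scalar stand-in for the graph-based estimator the paper applies in the high-dimensional embedding space.

```python
# Hedged sketch of the multi-scale representation pipeline: texture features
# at one length scale -> ICA -> spectral embedding, plus simple stand-ins for
# the similarity (alpha-MI) and evaluation (Dice) measures. All names and
# feature choices here are assumptions, not the paper's implementation.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import FastICA
from sklearn.manifold import SpectralEmbedding

def texture_features(img, scale):
    """Step (1): toy texture descriptors at one length scale (window size):
    local mean, local variance, and smoothed gradient magnitude."""
    mean = uniform_filter(img, size=scale)
    var = uniform_filter(img ** 2, size=scale) - mean ** 2
    gy, gx = np.gradient(img)
    grad = uniform_filter(np.hypot(gx, gy), size=scale)
    return np.stack([mean, var, grad], axis=-1)  # H x W x 3

def embed_scale(img, scale, n_ics=2, n_dims=1, n_samples=200, seed=0):
    """Steps (2)-(3): ICA to obtain non-redundant components from the texture
    features at one scale, then spectral embedding of the independent
    components (subsampled so the affinity graph stays small)."""
    feats = texture_features(img, scale).reshape(-1, 3)
    ics = FastICA(n_components=n_ics, random_state=seed).fit_transform(feats)
    idx = np.random.default_rng(seed).choice(len(ics), n_samples, replace=False)
    return SpectralEmbedding(n_components=n_dims).fit_transform(ics[idx])

def alpha_mi(x, y, alpha=0.99, bins=16):
    """Renyi alpha-mutual information via a histogram plug-in estimate
    (a simplified 1-D stand-in for the paper's graph-based alpha-MI)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1, keepdims=True), pxy.sum(axis=0, keepdims=True)
    m = pxy > 0
    return np.log(np.sum(pxy[m] ** alpha * (px @ py)[m] ** (1 - alpha))) / (alpha - 1)

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```

In a full registration loop, the per-scale embeddings for the selected scales would be combined and α-MI maximized over the transformation parameters; the sketch above only builds the representations and the measures.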
@inproceedings{li2016multimodality,
 title = {Multi-modality registration via multi-scale textural and spectral embedding representations},
 type = {inproceedings},
 year = {2016},
 keywords = {Co-registration,ICA,Spectral embedding,α MI},
 volume = {9784},
 abstract = {Intensity-based similarity measures assume that the original signal intensity of different modality images can provide statistically consistent information regarding the two modalities to be co-registered. In multi-modal registration problems, however, intensity-based similarity measures are often inadequate to identify an optimal transformation. Texture features can improve the performance of multi-modal co-registration by providing appearance representations of the two images to be co-registered that are more similar than the signal intensity representations. Furthermore, texture features extracted at different length scales (neighborhood sizes) can reveal similar underlying structural attributes between the images to be co-registered, similarities that may not be discernible on the signal intensity representation alone. However, one limitation of using texture features is that a number of them may be redundant and mutually dependent, and hence there is a need to identify non-redundant representations. Additionally, it is not clear which features at which specific scales reveal similar attributes across the images to be co-registered. To address this problem, we introduce a novel approach for multi-modal co-registration that employs new multi-scale image representations. Our approach comprises four distinct steps: (1) texture feature extraction at each length scale within both the target and template images, (2) independent component analysis (ICA) at each texture feature length scale, (3) spectral embedding (SE) of the ICA components (ICs) obtained for the texture features at each length scale, and finally (4) identifying and combining the optimal length scales at which to perform the co-registration. To combine and co-register across different length scales, α-mutual information (α-MI) was applied in the high-dimensional space of spectral embedding vectors to facilitate co-registration.
To validate our multi-scale co-registration approach, we aligned 45 pairs of prostate MRI and histology images, with the objective of mapping the extent of prostate cancer annotated by a pathologist on the histology onto the pre-operative MRI. The registration results showed higher correlation between template and target images, with an average correlation ratio of 0.927 compared to 0.914 for intensity-based registration. Additionally, an improvement in the Dice similarity coefficient (DSC) of 13.6% was observed for the multi-scale registration compared to intensity-based registration, and a 1.26% DSC improvement compared to registration involving the best individual scale.},
 author = {Li, L. and Rusu, M. and Viswanath, S. and Penzias, G. and Pahwa, S. and Gollamudi, J. and Madabhushi, A.},
 doi = {10.1117/12.2217639},
 booktitle = {Progress in Biomedical Optics and Imaging - Proceedings of SPIE}
}
