Improving supervised classification accuracy using non-rigid multimodal image registration: Detecting prostate cancer. Chappelow, J., Viswanath, S., Monaco, J., Rosen, M., Tomaszewski, J., Feldman, M., & Madabhushi, A. In Progress in Biomedical Optics and Imaging - Proceedings of SPIE, volume 6915, 2008.
Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source of these labels is expert segmentation. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error-prone, time-consuming, and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to manual CaP annotation by expert radiologists. Five supervised CAD classifiers were trained using labels for CaP extent on MRI obtained by the expert and by four different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI), and the other a method we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by following the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against seven ground truth surrogates obtained from different combinations of the expert and registration segmentations. For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve than expert annotation. These results suggest that, given additional multimodal image information, one can obtain more accurate object annotations than are achievable via expert delineation, despite the vast differences between modalities that hinder image registration.
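The affine schemes above register histology to MRI by maximizing mutual information (MI) between the two images. As an illustrative sketch only (not the authors' implementation), a histogram-based MI estimate can be written in Python as follows:

import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Estimate MI between two equally sized grayscale images."""
    # Joint histogram of corresponding pixel intensities.
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability
    px = pxy.sum(axis=1, keepdims=True)  # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)  # marginal of img_b
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A registration scheme would search over affine transform parameters
# for the transform that maximizes this score between the two modalities;
# COFEMI additionally computes MI over an ensemble of statistical features
# rather than raw intensities alone.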
@inproceedings{chappelow2008improving,
 title = {Improving supervised classification accuracy using non-rigid multimodal image registration: Detecting prostate cancer},
 year = {2008},
 keywords = {Bayesian classifier,CAD,COFEMI,Dimensionality reduction,Histology,Independent component analysis,MRI,Multimodal,Mutual information,Non-rigid,Prostate cancer,Registration,Thin plate splines},
 volume = {6915},
 abstract = {Computer-aided diagnosis (CAD) systems for the detection of cancer in medical images require precise labeling of training data. For magnetic resonance imaging (MRI) of the prostate, training labels define the spatial extent of prostate cancer (CaP); the most common source of these labels is expert segmentation. When ancillary data such as whole mount histology (WMH) sections, which provide the gold standard for cancer ground truth, are available, the manual labeling of CaP can be improved by referencing WMH. However, manual segmentation is error-prone, time-consuming, and not reproducible. Therefore, we present the use of multimodal image registration to automatically and accurately transcribe CaP from histology onto MRI following alignment of the two modalities, in order to improve the quality of training data and hence classifier performance. We quantitatively demonstrate the superiority of this registration-based methodology by comparing its results to manual CaP annotation by expert radiologists. Five supervised CAD classifiers were trained using labels for CaP extent on MRI obtained by the expert and by four different registration techniques. Two of the registration methods were affine schemes: one based on maximization of mutual information (MI), and the other a method we previously developed, Combined Feature Ensemble Mutual Information (COFEMI), which incorporates high-order statistical features for robust multimodal registration. Two non-rigid schemes were obtained by following the two affine registration methods with an elastic deformation step using thin-plate splines (TPS). In the absence of definitive ground truth for CaP extent on MRI, classifier accuracy was evaluated against seven ground truth surrogates obtained from different combinations of the expert and registration segmentations. For 26 multimodal MRI-WMH image pairs, all four registration methods produced a higher area under the receiver operating characteristic curve than expert annotation. These results suggest that, given additional multimodal image information, one can obtain more accurate object annotations than are achievable via expert delineation, despite the vast differences between modalities that hinder image registration.},
 author = {Chappelow, J. and Viswanath, S. and Monaco, J. and Rosen, M. and Tomaszewski, J. and Feldman, M. and Madabhushi, A.},
 doi = {10.1117/12.770703},
 booktitle = {Progress in Biomedical Optics and Imaging - Proceedings of SPIE}
}
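The non-rigid step described in the abstract refines the affine alignment with thin-plate splines. A minimal sketch, assuming hypothetical matched control points on the two modalities and using scipy's thin-plate-spline RBF interpolator in place of whatever TPS solver the authors used:

import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical corresponding landmarks (row, col) on the affine-aligned
# histology section and on the target MRI slice.
pts_histology = np.array([[10., 12.], [40., 80.], [90., 30.], [60., 60.], [20., 75.]])
pts_mri       = np.array([[11., 10.], [42., 78.], [88., 33.], [61., 58.], [22., 74.]])

# Fit a TPS mapping from histology coordinates into MRI coordinates.
tps = RBFInterpolator(pts_histology, pts_mri, kernel="thin_plate_spline")

# Transcribe a CaP annotation: map each labeled histology pixel onto MRI,
# which is how registration-derived training labels would be produced.
mask = np.zeros((100, 100), dtype=bool)
mask[45:55, 45:55] = True                      # toy CaP region on histology
warped = tps(np.argwhere(mask).astype(float))  # CaP coordinates in MRI space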
