2019 (3)

\n \n\n \n \n \n \n \n \n Semantic segmentation of multisensor remote sensing imagery with deep ConvNets and higher-order conditional random fields.\n \n \n \n \n\n\n \n Liu, Y.; Piramanayagam, S.; Monteiro, S. T.; and Saber, E.\n\n\n \n\n\n\n Journal of Applied Remote Sensing, 13(1): 1 – 23. 2019.\n \n\n\n\n
@article{10.1117/1.JRS.13.016501,
  author = {Yansong Liu and Sankaranarayanan Piramanayagam and Sildomar T. Monteiro and Eli Saber},
  title = {{Semantic segmentation of multisensor remote sensing imagery with deep ConvNets and higher-order conditional random fields}},
  volume = {13},
  journal = {Journal of Applied Remote Sensing},
  number = {1},
  publisher = {SPIE},
  pages = {1 -- 23},
  abstract = {Aerial images acquired by multiple sensors provide comprehensive and diverse information of materials and objects within a surveyed area. The current use of pretrained deep convolutional neural networks (DCNNs) is usually constrained to three-band images (i.e., RGB) obtained from a single optical sensor. Additional spectral bands from a multiple sensor setup introduce challenges for the use of DCNN. We fuse the RGB feature information obtained from a deep learning framework with light detection and ranging (LiDAR) features to obtain semantic labeling. Specifically, we propose a decision-level multisensor fusion technique for semantic labeling of the very-high-resolution optical imagery and LiDAR data. Our approach first obtains initial probabilistic predictions from two different sources: one from a pretrained neural network fine-tuned on a three-band optical image, and another from a probabilistic classifier trained on LiDAR data. These two predictions are then combined as the unary potential using a higher-order conditional random field (CRF) framework, which resolves fusion ambiguities by exploiting the spatial–contextual information. We utilize graph cut to efficiently infer the final semantic labeling for our proposed higher-order CRF framework. Experiments performed on three benchmarking multisensor datasets demonstrate the performance advantages of our proposed method.},
  keywords = {semantic segmentation, multisensor remote sensing, light detection and ranging, deep convolutional neural networks, conditional random fields},
  year = {2019},
  doi = {10.1117/1.JRS.13.016501},
  URL = {https://doi.org/10.1117/1.JRS.13.016501}
}

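The entry above combines per-pixel class probabilities from an RGB ConvNet and a LiDAR classifier as the unary potential of a higher-order CRF solved with graph cuts. The following is a minimal, hypothetical sketch of the decision-level fusion step only; the fusion weight, class count, and the simple ICM smoothing pass (standing in for the paper's higher-order CRF and graph-cut inference) are illustrative assumptions, not the authors' implementation.

    # Hypothetical decision-level fusion into CRF unary energies (not the
    # authors' code). ICM smoothing below stands in for graph-cut inference.
    import numpy as np

    def fused_unary_energy(p_rgb, p_lidar, w=0.5, eps=1e-12):
        """Combine two (H, W, C) class-probability maps into unary energies."""
        log_p = w * np.log(p_rgb + eps) + (1.0 - w) * np.log(p_lidar + eps)
        return -log_p  # lower energy = more likely label

    def icm_smooth(unary, beta=1.0, iters=5):
        """Tiny pairwise smoothing pass (ICM), a stand-in for CRF inference."""
        labels = unary.argmin(axis=-1)
        H, W, C = unary.shape
        for _ in range(iters):
            for i in range(H):
                for j in range(W):
                    cost = unary[i, j].copy()
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            cost += beta * (np.arange(C) != labels[ni, nj])
                    labels[i, j] = cost.argmin()
        return labels

    # toy usage with random "predictions" from the two sensors
    rng = np.random.default_rng(0)
    p_rgb = rng.dirichlet(np.ones(6), size=(32, 32))
    p_lidar = rng.dirichlet(np.ones(6), size=(32, 32))
    labels = icm_smooth(fused_unary_energy(p_rgb, p_lidar))
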
Spectral Super-Resolution with Optimized Bands. Gewali, U. B.; Monteiro, S. T.; and Saber, E. Remote Sensing, 11(14). 2019.

@article{rs11141648,
  author = {Gewali, Utsav B. and Monteiro, Sildomar T. and Saber, Eli},
  title = {Spectral Super-Resolution with Optimized Bands},
  journal = {Remote Sensing},
  volume = {11},
  year = {2019},
  number = {14},
  article-number = {1648},
  url = {https://www.mdpi.com/2072-4292/11/14/1648},
  issn = {2072-4292},
  abstract = {Hyperspectral (HS) sensors sample reflectance spectrum in very high resolution, which allows us to examine material properties in very fine details. However, their widespread adoption has been hindered because they are very expensive. Reflectance spectra of real materials are high dimensional but sparse signals. By utilizing prior information about the statistics of real HS spectra, many previous studies have reconstructed HS spectra from multispectral (MS) signals (which can be obtained from cheaper, lower spectral resolution sensors). However, most of these techniques assume that the MS bands are known apriori and do not optimize the MS bands to produce more accurate reconstructions. In this paper, we propose a new end-to-end fully convolutional residual neural network architecture that simultaneously learns both the MS bands and the transformation to reconstruct HS spectra from MS signals by analyzing large quantity of HS data. The learned band can be implemented in hardware to obtain an MS sensor that collects data that is best to reconstruct HS spectra using the learned transformation. Using a diverse set of real-world datasets, we show how the proposed approach of optimizing MS bands along with the transformation can drastically increase the reconstruction accuracy. Additionally, we also investigate the prospects of using reconstructed HS spectra for land cover classification.},
  doi = {10.3390/rs11141648}
}

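The abstract describes an end-to-end network that learns both the multispectral (MS) band responses and the mapping that reconstructs hyperspectral (HS) spectra from the simulated MS measurements. Below is a hedged PyTorch sketch of that general idea; the layer sizes, the softmax constraint on the learned bands, and the small residual refinement are guesses for illustration and do not reproduce the architecture in the paper.

    # Hypothetical PyTorch sketch: jointly learn MS band responses and an
    # HS reconstruction network. Sizes and constraints are illustrative only.
    import torch
    import torch.nn as nn

    class BandsAndReconstruction(nn.Module):
        def __init__(self, n_hs=200, n_ms=8, hidden=64):
            super().__init__()
            # each learned MS band is a softmax-normalized response over HS bands
            self.band_logits = nn.Parameter(torch.randn(n_ms, n_hs))
            self.expand = nn.Linear(n_ms, n_hs)  # MS -> coarse HS estimate
            self.refine = nn.Sequential(
                nn.Conv1d(1, hidden, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv1d(hidden, 1, kernel_size=3, padding=1),
            )

        def forward(self, hs):                        # hs: (batch, n_hs)
            bands = torch.softmax(self.band_logits, dim=1)
            ms = hs @ bands.t()                       # simulated MS measurement
            coarse = self.expand(ms).unsqueeze(1)     # (batch, 1, n_hs)
            return (coarse + self.refine(coarse)).squeeze(1)  # residual refinement

    model = BandsAndReconstruction()
    hs = torch.rand(4, 200)                           # toy reflectance spectra
    loss = nn.functional.mse_loss(model(hs), hs)      # train to reproduce HS
    loss.backward()
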
Gaussian Processes for Vegetation Parameter Estimation from Hyperspectral Data with Limited Ground Truth. Gewali, U. B.; Monteiro, S. T.; and Saber, E. Remote Sensing, 11(13). 2019.

@article{rs11131614,
  author = {Gewali, Utsav B. and Monteiro, Sildomar T. and Saber, Eli},
  title = {Gaussian Processes for Vegetation Parameter Estimation from Hyperspectral Data with Limited Ground Truth},
  journal = {Remote Sensing},
  volume = {11},
  year = {2019},
  number = {13},
  article-number = {1614},
  url = {https://www.mdpi.com/2072-4292/11/13/1614},
  issn = {2072-4292},
  abstract = {An important application of airborne- and satellite-based hyperspectral imaging is the mapping of the spatial distribution of vegetation biophysical and biochemical parameters in an environment. Statistical models, such as Gaussian processes, have been very successful for modeling vegetation parameters from captured spectra, however their performance is highly dependent on the amount of available ground truth. This is a problem because it is generally expensive to obtain ground truth information due to difficulties and costs associated with sample collection and analysis. In this paper, we present two Gaussian processes based approaches for improving the accuracy of vegetation parameter retrieval when ground truth is limited. The first is the adoption of covariance functions based on well-established metrics, such as, spectral angle and spectral correlation, which are known to be better measures of similarity for spectral data owing to their resilience to spectral variabilities. The second is the joint modeling of related vegetation parameters by multitask Gaussian processes so that the prediction accuracy of the vegetation parameter of interest can be improved with the aid of related vegetation parameters for which a larger set of ground truth is available. We experimentally demonstrate the efficacy of the proposed methods against existing approaches on three real-world hyperspectral datasets and one synthetic dataset.},
  doi = {10.3390/rs11131614}
}

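One of the two approaches above is a covariance function built on spectral similarity measures such as spectral correlation. The snippet below sketches one plausible form of such a kernel and a bare-bones GP predictive mean; the exact functional form, hyperparameters, and noise level are assumptions, not the paper's definitions.

    # Illustrative spectral-correlation covariance and GP predictive mean;
    # the functional form and hyperparameters are assumptions.
    import numpy as np

    def spectral_correlation_kernel(X, Y, sigma=1.0, length=0.5):
        """K[i, j] = sigma^2 * exp(-(1 - corr(x_i, y_j)) / length)."""
        Xc = X - X.mean(axis=1, keepdims=True)
        Yc = Y - Y.mean(axis=1, keepdims=True)
        num = Xc @ Yc.T
        den = np.linalg.norm(Xc, axis=1)[:, None] * np.linalg.norm(Yc, axis=1)[None, :]
        corr = np.clip(num / den, -1.0, 1.0)
        return sigma**2 * np.exp(-(1.0 - corr) / length)

    X_train = np.random.rand(20, 100)                 # 20 training spectra
    y_train = np.random.rand(20)                      # e.g. a leaf biochemical content
    X_test = np.random.rand(5, 100)
    K = spectral_correlation_kernel(X_train, X_train) + 1e-3 * np.eye(20)
    K_star = spectral_correlation_kernel(X_test, X_train)
    y_pred = K_star @ np.linalg.solve(K, y_train)     # GP posterior mean
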
2018 (3)

Dual-Channel Densenet for Hyperspectral Image Classification. Yang, G.; Gewali, U. B.; Ientilucci, E.; Gartley, M.; and Monteiro, S. T. In IEEE International Geoscience and Remote Sensing Symposium, pages 2595–2598, July 2018.

@inproceedings{8517520,
  author = {G. {Yang} and U. B. {Gewali} and E. {Ientilucci} and M. {Gartley} and S. T. {Monteiro}},
  booktitle = {IEEE International Geoscience and Remote Sensing Symposium},
  title = {Dual-Channel Densenet for Hyperspectral Image Classification},
  year = {2018},
  pages = {2595-2598},
  abstract = {Deep neural networks provide deep extracted features for image classification. As a high dimension data, hyperspectral image (HSI) feature extraction is unlike an RGB image whose feature representation could not be simply generated in the spatial domain. To take full advantage of HSI, a dual-channel convolutional neural network (CNN) is applied, 1D convolution for the spectral domain and 2D convolution for spatial domain. For pixel-wise classification of HSI, in our network model, one-dimensional customized DenseNet is for extracting the hierarchical spectral features and another customized DenseNet is applied to extract the hierarchical spatial-related feature. Furthermore, we experimentally tuned the several widen factors and dense-net growth rates to evaluate the impact of hyper-parameter. To compare our proposed method with HSI classification methods, we test other three DNNs based method in two real-world HSI dataset. The result demonstrated our approach outperformed the state-of-art method.},
  keywords = {convolution;feature extraction;feedforward neural nets;geophysical image processing;hyperspectral imaging;image classification;image representation;remote sensing;DNN;dual-channel Densenet;hierarchical spectral feature extraction;HSI pixel-wise classification;HSI feature extraction;feature representation;dense-net growth rates;hierarchical spatial-related feature;one-dimensional customized DenseNet;network model;spectral domain;dual-channel convolutional neural network;spatial domain;high dimension data;deep neural networks;hyperspectral image classification;Feature extraction;Hyperspectral imaging;Neural networks;Computer architecture;Training;Two dimensional displays;Hyperspectral image classification;Dual-channel DenseNet;spatial-spectral},
  doi = {10.1109/IGARSS.2018.8517520},
  issn = {2153-7003},
  month = {July}
}

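The paper pairs a 1D (spectral) branch with a 2D (spatial) branch and fuses them for per-pixel classification. The sketch below uses plain convolutional stacks as stand-ins for the customized DenseNets, with all sizes chosen arbitrarily; it only illustrates the dual-channel idea, not the published architecture.

    # Dual-channel (1D spectral + 2D spatial) sketch for per-pixel HSI
    # classification; plain conv stacks stand in for the customized DenseNets.
    import torch
    import torch.nn as nn

    class DualChannelNet(nn.Module):
        def __init__(self, n_bands=103, n_classes=9):
            super().__init__()
            self.spectral = nn.Sequential(            # 1D convs over the spectrum
                nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
                nn.Conv1d(16, 16, 5, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten())
            self.spatial = nn.Sequential(             # 2D convs over a patch
                nn.Conv2d(n_bands, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
            self.head = nn.Linear(16 + 32, n_classes)

        def forward(self, patch):                     # patch: (B, bands, H, W)
            center = patch[:, :, patch.shape[2] // 2, patch.shape[3] // 2]
            spec = self.spectral(center.unsqueeze(1))      # (B, 16)
            spat = self.spatial(patch)                     # (B, 32)
            return self.head(torch.cat([spec, spat], dim=1))

    logits = DualChannelNet()(torch.rand(4, 103, 9, 9))    # (4, 9) class scores
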
A tutorial on modelling and inference in undirected graphical models for hyperspectral image analysis. Gewali, U. B.; and Monteiro, S. T. International Journal of Remote Sensing, 1–40. 2018.

@article{gewali2018tutorial,
  title = {A tutorial on modelling and inference in undirected graphical models for hyperspectral image analysis},
  author = {Gewali, Utsav B and Monteiro, Sildomar T},
  journal = {International Journal of Remote Sensing},
  doi = {10.1080/01431161.2018.1465614},
  pages = {1--40},
  year = {2018},
  publisher = {Taylor \& Francis},
  abstract = {Undirected graphical models have been successfully used to jointly model the spatial and the spectral dependencies in earth observing hyperspectral images. They produce less noisy, smooth, and spatially coherent land-cover maps and give top accuracies on many datasets. Moreover, they can easily be combined with other state-of-the-art approaches, such as deep learning. This has made them an essential tool for remote-sensing researchers and practitioners. However, graphical models have not been easily accessible to the larger remote-sensing community as they are not discussed in standard remote-sensing textbooks and not included in the popular remote-sensing software and toolboxes. In this tutorial, we provide a theoretical introduction to Markov random fields and conditional random fields-based spatial–spectral classification for land-cover mapping along with a detailed step-by-step practical guide on applying these methods using freely available software. Furthermore, the discussed methods are benchmarked on four public hyperspectral datasets for a fair comparison among themselves and easy comparison with the vast number of methods in literature which use the same datasets. The source code necessary to reproduce all the results in the paper is published on-line to make it easier for the readers to apply these techniques to different remote-sensing problems.},
  url = {https://www.tandfonline.com/doi/full/10.1080/01431161.2018.1465614}
}

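The tutorial above covers grid-structured MRF/CRF models whose energy couples pixel-wise class probabilities with a spatial smoothness term. As a small illustration of the kind of objective such models minimize, the snippet below evaluates a unary-plus-Potts energy for a given labeling; the Potts weight and the toy data are assumptions.

    # Energy of a grid MRF labeling: unary terms from pixel-wise class
    # probabilities plus a Potts smoothness term (beta is an assumption).
    import numpy as np

    def mrf_energy(labels, probs, beta=1.0, eps=1e-12):
        """labels: (H, W) ints; probs: (H, W, C) class probabilities."""
        H, W, _ = probs.shape
        rows, cols = np.indices((H, W))
        unary = -np.log(probs[rows, cols, labels] + eps).sum()
        pairwise = beta * ((labels[1:, :] != labels[:-1, :]).sum()
                           + (labels[:, 1:] != labels[:, :-1]).sum())
        return unary + pairwise

    probs = np.random.dirichlet(np.ones(4), size=(16, 16))
    pixelwise = probs.argmax(axis=-1)          # labeling with no spatial term
    print(mrf_energy(pixelwise, probs))
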
Machine learning based hyperspectral image analysis: a survey. Gewali, U. B.; Monteiro, S. T.; and Saber, E. arXiv preprint arXiv:1802.08701. 2018.

@article{gewali2018machine,
  title = {Machine learning based hyperspectral image analysis: a survey},
  author = {Gewali, Utsav B and Monteiro, Sildomar T and Saber, Eli},
  journal = {arXiv preprint arXiv:1802.08701},
  year = {2018},
  url = {https://arxiv.org/abs/1802.08701}
}

2017 (3)

Dense Semantic Labeling of Very-High-Resolution Aerial Imagery and LiDAR with Fully-Convolutional Neural Networks and Higher-Order CRFs. Liu, Y.; Monteiro, S. T.; Piramanayagam, S.; and Saber, E. In CVPR Workshop on Large Scale Computer Vision for Remote Sensing Imagery (EARTHVISION), 2017.

@inproceedings{Liu2017,
  author = {Liu, Yansong and Monteiro, Sildomar T. and Piramanayagam, Sankaranarayanan and Saber, Eli},
  title = {{Dense Semantic Labeling of Very-High-Resolution Aerial Imagery and LiDAR with Fully-Convolutional Neural Networks and Higher-Order CRFs}},
  booktitle = {CVPR Workshop on Large Scale Computer Vision for Remote Sensing Imagery (EARTHVISION)},
  year = {2017},
  abstract = {The increasing availability of very-high-resolution (VHR) aerial optical images as well as coregistered LiDAR data opens great opportunities for improving object-level dense semantic labeling of airborne remote sensing imagery. As a result, efficient and effective multisensor fusion techniques are demanded in order to fully exploit the two complementary data modalities. Recent efforts have been mostly devoted to exploring how to properly combine both sensor data using pre-trained deep convolutional neural networks (DCNNs) at the feature level. In this paper, we propose a decision-level fusion approach with a simpler architecture for the task of dense semantic labeling. Our proposed method first obtains two initial probabilistic labeling results from a fully-convolutional neural network and a simple classifier, e.g. logistic regression exploiting spectral channels and LiDAR data, respectively. These two outcomes are then combined within a higher-order conditional random field (CRF). The CRF inference will estimate the final dense semantic labeling results. Higher-order CRFs modeling helps to resolve the fusion ambiguities by explicitly using the spatial contexture information, which can be learned from the data itself. Based on the experiments on the ISPRS 2D semantic labeling Potsdam dataset, our proposed approach compares favorably or outperforms the state-of-the-art baseline methods that utilize feature level fusion.},
}

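In the decision-level fusion described above, one of the two probability sources is a simple classifier on LiDAR-derived features. The sketch below shows that half of the pipeline with a scikit-learn logistic regression on hypothetical per-pixel LiDAR features; the feature choice, shapes, and labels are placeholders, and the CRF fusion itself is omitted.

    # The "simple classifier on LiDAR" half of the fusion: logistic regression
    # producing a per-pixel probability map (features and labels are placeholders).
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    H, W = 64, 64
    lidar_feats = np.random.rand(H * W, 2)     # e.g. height above ground, intensity
    labels = np.random.randint(0, 6, size=H * W)

    clf = LogisticRegression(max_iter=1000).fit(lidar_feats, labels)

    # (H, W, C) probabilities; these would be combined with the FCN output
    # as the unary potential of the higher-order CRF
    p_lidar = clf.predict_proba(lidar_feats).reshape(H, W, -1)
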
Semantic segmentation of remote sensing data using Gaussian processes and higher-order CRFs. Liu, Y.; Monteiro, S. T.; and Saber, E. In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2017.

@inproceedings{Liu2017a,
  author = {Liu, Yansong and Monteiro, Sildomar T. and Saber, Eli},
  title = {{Semantic segmentation of remote sensing data using Gaussian processes and higher-order CRFs}},
  booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
  year = {2017},
  abstract = {Automatic recognition for complex scenes from aerial images and other sensors data (e.g. LiDAR) has become a central interest in the remote sensing community. In this paper, we proposed a novel framework that utilizes higher order CRFs (HCRFs) to capture the spatial context for the RGB aerial images along with their co-registered LiDAR point clouds (DSMs). Our proposed CRFs framework exploits the spatial context in two levels. The first level encourages harmonic label co-existence within one segment generated by an unsupervised super-pixel algorithm. The second level takes into account the local object co-occurrence among neighboring segments. We then show how to apply the move making graph cuts algorithm to perform effective inference for our proposed CRFs framework. Based on the experiments on a set of challenging images, our proposed higher order CRFs framework generated state-of-the-art semantic segmentation results for the aerial images.},
}

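The first level of spatial context in the abstract encourages label consistency within unsupervised superpixels. Below is an illustrative robust label-consistency energy over segments; the penalty weight, truncation, and the block "segments" used in the demo are assumptions rather than the paper's exact higher-order potential.

    # Robust label-consistency energy over superpixels; gamma, the truncation
    # and the block "segments" are assumptions for illustration.
    import numpy as np

    def superpixel_consistency_energy(labels, segments, gamma=2.0, trunc=0.3):
        """Penalize pixels disagreeing with their segment's dominant label."""
        energy = 0.0
        for s in np.unique(segments):
            seg = labels[segments == s]
            frac_disagree = 1.0 - np.bincount(seg).max() / seg.size
            energy += gamma * min(frac_disagree, trunc) * seg.size
        return energy

    labels = np.random.randint(0, 5, size=(32, 32))
    segments = (np.arange(32)[:, None] // 8) * 4 + np.arange(32)[None, :] // 8
    print(superpixel_consistency_energy(labels, segments))
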
Using Bayesian optimization to jointly tune the classifier and the random field for spatial-spectral hyperspectral classification. Gewali, U. B.; and Monteiro, S. T. In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 2017.

@inproceedings{Gewali2017,
  author = {Gewali, Utsav B. and Monteiro, Sildomar T.},
  title = {{Using Bayesian optimization to jointly tune the classifier and the random field for spatial-spectral hyperspectral classification}},
  booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
  year = {2017},
  abstract = {The framework consisting of a pixel-wise classification followed by a Markov random field has been very successful for spatial-spectral hyperspectral classification. While training such frameworks, the classifier and the Markov random field are generally tuned greedily one after another. However, better results could be obtained by tuning both of the components simultaneously with the objective of producing the best result at the end. This paper investigates the joint optimization of the hyperparameters of the classifier and the random field using Bayesian optimization. Experimental evaluation on the model comprising of a support vector machine classifier and a grid-structured Markov random field is provided. The results of the experiments, conducted on two independent datasets, suggest that the jointly tuned models can provide better accuracy.},
}

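The key idea above is to score the classifier and the random-field hyperparameters jointly, with a single held-out objective evaluated end to end. The toy sketch below shows that structure; plain random search stands in for Bayesian optimization, and the probability-averaging "smoothing" step is only a placeholder for real MRF inference.

    # Joint scoring of SVM hyperparameters and the smoothing strength on
    # held-out pixels. Random search stands in for Bayesian optimization and
    # probability averaging stands in for MRF inference.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    H, W, D = 20, 20, 5
    X = rng.normal(size=(H, W, D))                   # toy "spectral" features
    y = rng.integers(0, 3, size=(H, W))              # toy labels
    train = rng.random((H, W)) < 0.3                 # 30% labeled pixels

    def smooth(probs, beta):
        """Blend each pixel's probabilities with its 4-neighborhood mean."""
        pad = np.pad(probs, ((1, 1), (1, 1), (0, 0)), mode="edge")
        neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4
        return (1 - beta) * probs + beta * neigh

    def objective(C, gamma, beta):
        svm = SVC(C=C, gamma=gamma, probability=True).fit(X[train], y[train])
        probs = svm.predict_proba(X.reshape(-1, D)).reshape(H, W, -1)
        pred = svm.classes_[smooth(probs, beta).argmax(-1)]
        return (pred[~train] == y[~train]).mean()    # held-out accuracy

    best_score, best_params = -1.0, None
    for _ in range(20):                              # random-search stand-in for BO
        C, g, b = 10 ** rng.uniform(-1, 3), 10 ** rng.uniform(-3, 0), rng.uniform(0, 1)
        score = objective(C, g, b)
        if score > best_score:
            best_score, best_params = score, (C, g, b)
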
2016 (7)

Vehicle detection from aerial color imagery and airborne LiDAR data. Liu, Y.; Monteiro, S. T.; and Saber, E. In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 1384–1387, Beijing, China, July 2016. IEEE.

@inproceedings{Liu2016,
  author = {Liu, Yansong and Monteiro, Sildomar T. and Saber, Eli},
  title = {{Vehicle detection from aerial color imagery and airborne LiDAR data}},
  booktitle = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},
  year = {2016},
  publisher = {IEEE},
  month = {jul},
  isbn = {978-1-5090-3332-4},
  pages = {1384--1387},
  doi = {10.1109/IGARSS.2016.7729354},
  url = {http://ieeexplore.ieee.org/document/7729354/},
  abstract = {Vehicle detection and recognition from aerial imagery provides useful information for local vehicle volume estimation and traffic monitoring. In this paper, we propose a method that accurately detects vehicles in urban environment using a probabilistic classification method followed by a refinement based on object segments. Both classification and segmentation methods make use of coregistered aerial RGB images and airborne LiDAR data. Pixel-wise vehicle probability estimation is achieved using Gaussian process (GP) classification and object segments are obtained by applying a gradient based segmentation algorithm (GSEG). The vehicle is then detected by refining the initial probability estimation with the following constraints: car size, statistical significance and 3D surface shape. Experimental results show our method achieves 90.8{\%} precision and 93.7{\%} recall, which outperforms the ones that only use size constraints.},
  address = {Beijing, China},
}

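The refinement stage described above filters an initial per-pixel vehicle probability map using object segments and car-size constraints. A hedged sketch of just the thresholding and size-filtering step is shown below; the thresholds and areas are made up, and the statistical-significance and 3D-shape checks are omitted.

    # Threshold the vehicle-probability map and keep connected components with
    # a plausible car-sized area (thresholds assumed; other checks omitted).
    import numpy as np
    from scipy import ndimage

    def detect_vehicles(prob_map, p_thresh=0.5, min_area=40, max_area=400):
        mask = prob_map > p_thresh
        components, n = ndimage.label(mask)
        keep = np.zeros_like(mask)
        for idx in range(1, n + 1):
            area = (components == idx).sum()
            if min_area <= area <= max_area:
                keep |= components == idx
        return keep

    prob_map = np.random.rand(128, 128)        # stand-in for the GP output
    detections = detect_vehicles(prob_map)
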
Super Pixel Based Classification Using Conditional Random Fields for Hyperspectral Images. In IEEE International Conference on Image Processing (ICIP), pages 2202–2205, Phoenix, AZ, September 2016. IEEE.

Classification plays a significant role in analyzing remotely sensed imagery. In order to obtain an optimized classifier, the following aspects are rather challenging: 1) complexity in dealing with the overwhelming amount of data information from an advanced high resolution hyperspectral imaging sensor; 2) difficulty in leveraging spectral and spatial information across the sensed wavelengths; 3) struggles in obtaining an adequate dataset in the same modalities with labeled ground truth in the training process. Therefore, we propose a novel classification approach to tackle these issues by utilizing a probabilistic graphical model on super-pixel segmentation. This method is capable of compacting hyperspectral information efficiently, which decreases computing complexity. Moreover, the employment of probabilistic graphical models that weigh the strong dependency in spatial and spectral neighbors improves accuracy. One of the most successful probabilistic graphical models is Conditional Random Fields (CRFs). Conventional methods utilize all spectral bands and assign the corresponding raw intensity values into the feature functions in CRFs and build the grid graph. These methods, however, require significant computational efforts and yield an ambiguous summary from the data. To mitigate these problems, we incorporate a non-linear kernel based classifier to provide meaningful probability features for CRFs and learn the non-grid graph from super-pixel segmentation.

A novel covariance function for predicting vegetation biochemistry from hyperspectral imagery with Gaussian processes. Gewali, U.; and Monteiro, S. T. In IEEE International Conference on Image Processing (ICIP), pages 2216–2220, Phoenix, AZ, September 2016. IEEE.

@inproceedings{Gewali2016,
  author = {Gewali, Utsav and Monteiro, Sildomar T.},
  title = {{A novel covariance function for predicting vegetation biochemistry from hyperspectral imagery with Gaussian processes}},
  booktitle = {IEEE International Conference on Image Processing (ICIP)},
  year = {2016},
  publisher = {IEEE},
  month = {sep},
  isbn = {978-1-4673-9961-6},
  pages = {2216--2220},
  doi = {10.1109/ICIP.2016.7532752},
  url = {http://ieeexplore.ieee.org/document/7532752/},
  abstract = {Remotely extracting information about the biochemical properties of the materials in an environment from airborne- or satellite-based hyperspectral sensor has a variety of applications in forestry, agriculture, mining, environmental monitoring and space exploration. In this paper, we propose a new non-stationary covariance function, called exponential spectral angle mapper (ESAM) for predicting the biochemistry of vegetation from hyperspectral imagery using Gaussian processes. The proposed covariance function is based on the angle between the spectra, which is known to be a better measure of similarity for hyperspectral data due to its robustness to illumination variations. We demonstrate the efficacy of the proposed method with experiments on a real-world hyperspectral dataset.},
  address = {Phoenix, AZ},
}

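The ESAM covariance named in the abstract is built from the spectral angle between two spectra. A minimal sketch of a kernel of that form is below; the exact parameterization and hyperparameters used in the paper may differ.

    # Exponential spectral angle mapper (ESAM) style covariance:
    # k(x, x') = s^2 * exp(-angle(x, x') / l); parameterization assumed.
    import numpy as np

    def esam_kernel(X, Y, s=1.0, l=0.1):
        Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
        Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
        angle = np.arccos(np.clip(Xn @ Yn.T, -1.0, 1.0))  # spectral angle (rad)
        return s**2 * np.exp(-angle / l)

    X = np.random.rand(10, 200)                # toy reflectance spectra
    K = esam_kernel(X, X)                      # (10, 10) Gram matrix
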
Multitask learning of vegetation biochemistry from hyperspectral data. Gewali, U. B.; and Monteiro, S. T. In IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, October 2016.

@inproceedings{Gewali2016a,
  author = {Gewali, Utsav B. and Monteiro, Sildomar T.},
  title = {{Multitask learning of vegetation biochemistry from hyperspectral data}},
  booktitle = {IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)},
  year = {2016},
  month = {oct},
  eprint = {1610.06987},
  url = {http://arxiv.org/abs/1610.06987},
  abstract = {Statistical models have been successful in accurately estimating the biochemical contents of vegetation from the reflectance spectra. However, their performance deteriorates when there is a scarcity of sizable amount of ground truth data for modeling the complex non-linear relationship occurring between the spectrum and the biochemical quantity. We propose a novel Gaussian process based multitask learning method for improving the prediction of a biochemical through the transfer of knowledge from the learned models for predicting related biochemicals. This method is most advantageous when there are few ground truth data for the biochemical of interest, but plenty of ground truth data for related biochemicals. The proposed multitask Gaussian process hypothesizes that the inter-relationship between the biochemical quantities is better modeled by using a combination of two or more covariance functions and inter-task correlation matrices. In the experiments, our method outperformed the current methods on two real-world datasets.},
  address = {Los Angeles, CA},
  archiveprefix = {arXiv},
  arxivid = {1610.06987},
}

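The multitask GP above models several related biochemicals jointly through inter-task correlation matrices combined with covariance functions over the spectra. The sketch below shows one common way to write such a joint covariance (a Kronecker product of a task matrix and a data kernel); the task matrix, kernel, and noise level are illustrative assumptions, not the paper's model.

    # Multitask-GP style joint covariance: Kronecker product of an inter-task
    # correlation matrix B and a kernel over spectra (both assumed here).
    import numpy as np

    def rbf(X, Y, l=1.0):
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * l**2))

    n_tasks, n_samples = 3, 15
    X = np.random.rand(n_samples, 50)                 # toy spectra
    B = np.array([[1.0, 0.8, 0.3],                    # inter-task correlations
                  [0.8, 1.0, 0.4],
                  [0.3, 0.4, 1.0]])
    K = np.kron(B, rbf(X, X))                         # joint (45, 45) covariance

    y = np.random.rand(n_tasks * n_samples)           # stacked targets, all tasks
    alpha = np.linalg.solve(K + 1e-3 * np.eye(K.shape[0]), y)  # GP weights
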
Spectral angle based unary energy functions for spatial-spectral hyperspectral classification using Markov random fields. Gewali, U. B.; and Monteiro, S. T. In IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, October 2016.

@inproceedings{Gewali2016b,
  author = {Gewali, Utsav B. and Monteiro, Sildomar T.},
  title = {{Spectral angle based unary energy functions for spatial-spectral hyperspectral classification using Markov random fields}},
  booktitle = {IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)},
  year = {2016},
  month = {oct},
  eprint = {1610.06985},
  url = {http://arxiv.org/abs/1610.06985},
  abstract = {In this paper, we propose and compare two spectral angle based approaches for spatial-spectral classification. Our methods use the spectral angle to generate unary energies in a grid-structured Markov random field defined over the pixel labels of a hyperspectral image. The first approach is to use the exponential spectral angle mapper (ESAM) kernel/covariance function, a spectral angle based function, with the support vector machine and the Gaussian process classifier. The second approach is to directly use the minimum spectral angle between the test pixel and the training pixels as the unary energy. We compare the proposed methods with the state-of-the-art Markov random field methods that use support vector machines and Gaussian processes with squared exponential kernel/covariance function. In our experiments with two datasets, it is seen that using minimum spectral angle as unary energy produces better or comparable results to the existing methods at a smaller running time.},
  address = {Los Angeles, CA},
  archiveprefix = {arXiv},
  arxivid = {1610.06985},
}

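The second approach in the abstract uses the minimum spectral angle between a test pixel and the training pixels of each class directly as the unary energy. The snippet below computes exactly that table of unary energies; the MRF pairwise term and inference are left out, and the toy data are assumptions.

    # Unary energy of class c at a test pixel = minimum spectral angle to the
    # training pixels of class c (pairwise term and MRF inference omitted).
    import numpy as np

    def spectral_angles(A, B):
        An = A / np.linalg.norm(A, axis=1, keepdims=True)
        Bn = B / np.linalg.norm(B, axis=1, keepdims=True)
        return np.arccos(np.clip(An @ Bn.T, -1.0, 1.0))

    def min_angle_unary(test, train, train_labels):
        classes = np.unique(train_labels)
        unary = np.empty((test.shape[0], classes.size))
        for k, c in enumerate(classes):
            unary[:, k] = spectral_angles(test, train[train_labels == c]).min(axis=1)
        return unary                           # lower energy = better match

    train = np.random.rand(30, 100)            # toy training spectra
    train_labels = np.repeat([0, 1, 2], 10)
    U = min_angle_unary(np.random.rand(8, 100), train, train_labels)   # (8, 3)
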
Gaussian Processes for Object Detection in High Resolution Remote Sensing Images. Liang, Y.; Monteiro, S. T.; and Saber, E. S. In 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 998–1003, December 2016. IEEE.

@inproceedings{Liang2016,
  author = {Liang, Yilong and Monteiro, Sildomar T. and Saber, Eli S.},
  title = {Gaussian Processes for Object Detection in High Resolution Remote Sensing Images},
  booktitle = {2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA)},
  year = {2016},
  publisher = {IEEE},
  month = {dec},
  isbn = {978-1-5090-6167-9},
  pages = {998--1003},
  doi = {10.1109/ICMLA.2016.0180},
  url = {http://ieeexplore.ieee.org/document/7838284/},
  abstract = {Object detection in high resolution remote sensing images is a crucial yet challenging problem for many applications. With the development of satellite and sensor technologies, remote sensing images attain very high spatial resolution, giving rise to the employment of many computer vision algorithms. Therefore, the object detection is usually formalized as a supervised classification task. In this paper, we propose to apply the Gaussian process (GP) classification algorithm for our detection problem. Among different classifiers, the GP classifier is a Bayesian classification method that is able to make estimations in a probabilistic way. To demonstrate the performance of the proposed approach, we experiment the proposed framework with different feature extraction schemes and classification methods. We carry out a cross-validation experiment over an image dataset that consists of objects and non-objects to train an object detector, and apply the trained detector in an unobserved image scene to search for the objects of interest. Our results show that the GP classifier is competitive to support vector machines (SVM), which is considered state-of-the-art.},
}

Transfer learning for high resolution aerial image classification. Liang, Y.; Monteiro, S. T.; and Saber, E. In Applied Imagery Pattern Recognition Annual Workshop (AIPR), Washington DC, 2016.

@inproceedings{Liang2016a,
  author = {Liang, Yilong and Monteiro, Sildomar T. and Saber, Eli},
  title = {Transfer learning for high resolution aerial image classification},
  booktitle = {Applied Imagery Pattern Recognition Annual Workshop (AIPR)},
  year = {2016},
  abstract = {With rapid developments in satellite and sensor technologies, increasing amount of high spatial resolution aerial images have become available. Classification of these images are important for many remote sensing image understanding tasks, such as image retrieval and object detection. Meanwhile, image classification in the computer vision field is revolutionized with recent popularity of the convolutional neural networks (CNN), based on which the state-of-the-art classification results are achieved. Therefore, the idea of applying the CNN for high resolution aerial image classification is straightforward. However, it is not trivial mainly because the amount of labeled images in remote sensing for training a deep neural network is limited. As a result, transfer learning techniques were adopted for this problem, where the CNN used for the classification problem is pre-trained on a larger dataset beforehand. In this paper, we propose a specific fine-tuning strategy that results in better CNN models for aerial image classification. Extensive experiments were carried out using the proposed approach with different CNN architectures. Our proposed method shows competitive results compared to the existing approaches, indicating the superiority of the proposed fine-tuning algorithm.},
  address = {Washington DC},
}

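The paper proposes a specific fine-tuning strategy whose details are not reproduced here. As a generic, hedged illustration of transfer learning for aerial scene classification, the sketch below loads an ImageNet-pretrained backbone, replaces the classification head, and uses a smaller learning rate for the pretrained layers; all names and numbers are common defaults, not the paper's settings.

    # Generic fine-tuning recipe (not the paper's strategy): pretrained backbone,
    # new classification head, smaller learning rate for the pretrained layers.
    import torch
    import torch.nn as nn
    from torchvision import models

    model = models.resnet18(weights="IMAGENET1K_V1")   # older torchvision: pretrained=True
    n_classes = 21                                     # assumed aerial-scene classes
    model.fc = nn.Linear(model.fc.in_features, n_classes)

    optimizer = torch.optim.SGD(
        [{"params": [p for n, p in model.named_parameters() if not n.startswith("fc")],
          "lr": 1e-4},                                 # gentle updates for the backbone
         {"params": model.fc.parameters(), "lr": 1e-2}],  # faster for the new head
        lr=1e-3, momentum=0.9)

    images = torch.rand(8, 3, 224, 224)
    targets = torch.randint(0, n_classes, (8,))
    loss = nn.functional.cross_entropy(model(images), targets)
    loss.backward()
    optimizer.step()
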
2015 (7)

Adaptive sampling applied to blast-hole drilling in surface mining. Ahsan, N.; Scheding, S.; Monteiro, S. T.; Leung, R.; McHugh, C.; and Robinson, D. International Journal of Rock Mechanics and Mining Sciences, 75(0): 244–255. March 2015.

@article{Ahsan2015,
  title = {Adaptive sampling applied to blast-hole drilling in surface mining},
  author = {Nasir Ahsan and Steven Scheding and Sildomar T. Monteiro and Raymond Leung and Charles McHugh and Danielle Robinson},
  journal = {International Journal of Rock Mechanics and Mining Sciences},
  year = {2015},
  month = {Mar.},
  number = {0},
  pages = {244--255},
  volume = {75},
  abstract = {This paper describes an application of adaptive sampling to geology modeling with a view of improving the operational cost and efficiency in certain surface mining applications. The objectives are to minimize the number of blast holes drilled into, and the accidental penetrations of, the geological boundary of interest. These objectives are driven by economic considerations as the cost is, firstly, directly proportional to the number of holes drilled and secondly, related to the efficiency of target material recovery associated with excavation and blast damage. The problem formulation is therefore motivated by the incentive to learn more about the lithology and drill less. The principal challenge with building an accurate surface model is that the sedimentary rock mass is coarsely sampled by drilling exploration holes which are typically a long distance apart. Thus, interpolation does not capture adequately local changes in the underlying geology. With the recent advent of consistent and reliable real-time identification of geological boundaries under field conditions using measure-while-drilling data, we pose the local model estimation problem in an adaptive sampling framework. The proposed sampling strategy consists of two phases. First, blast-holes are drilled to the geological boundary of interest, and their locations are adaptively selected to maximize utility in terms of the incremental improvement that can be made to the evolving spatial model. The second phase relies on the predicted geology and drills to an expert based pre-specified standoff distance from the geological boundary of interest, to optimize blasting and minimize its damage. Using data acquired from a coal mine survey bench in Australia, we demonstrate that adaptively choosing blast-holes in Phase 1 can minimize the total number of holes drilled to the top of the coal seam, as opposed to random hole selection, whilst optimizing blasting by maintaining a reasonable compromise in the error in the stopping distances from the seam. We also show that adaptive sampling requires, for accurate estimation, only a fraction of the holes that were initially drilled for this particular dataset.},
  doi = {10.1016/j.ijrmms.2015.01.009},
  gsid = {F1b5ZUV5XREC},
  issn = {1365-1609},
  keywords = {Measure-while-drilling, Surface mining, Adaptive sampling, Blast-hole design optimization, Geological boundary detection, Mine automation},
  owner = {stmeee},
  timestamp = {2015.04.29},
  url = {http://www.sciencedirect.com/science/article/pii/S1365160915000167}
}

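Phase 1 above adaptively selects the next blast-hole where it most improves the evolving spatial model of the geological boundary. The toy sketch below uses a GP over bench coordinates and picks the candidate with the largest predictive standard deviation; the synthetic surface, the acquisition rule, and all constants are assumptions made for illustration, not the paper's method.

    # Toy Phase-1 loop: fit a GP to the drilled depths and place the next hole
    # where predictive uncertainty is largest (surface and constants assumed).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(1)

    def boundary_depth(xy):                    # unknown surface being drilled to
        return 10 + np.sin(xy[:, 0]) + 0.5 * np.cos(2 * xy[:, 1])

    drilled = rng.uniform(0, 5, size=(5, 2))   # a few initial exploration holes
    depths = boundary_depth(drilled)
    candidates = rng.uniform(0, 5, size=(200, 2))

    for _ in range(10):                        # adaptively add 10 blast-holes
        gp = GaussianProcessRegressor(normalize_y=True).fit(drilled, depths)
        _, std = gp.predict(candidates, return_std=True)
        nxt = candidates[np.argmax(std)][None, :]
        drilled = np.vstack([drilled, nxt])
        depths = np.append(depths, boundary_depth(nxt))
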
\n \n\n \n \n \n \n \n \n Classification of hyperspectral images based on conditional random fields.\n \n \n \n \n\n\n \n Hu, Y.; Saber, E.; Monteiro, S. T.; Cahill, N. D.; and Messinger, D. W.\n\n\n \n\n\n\n In IS&T/SPIE Electronic Imaging, volume 9405, pages 940510–940518, San Francisco, CA, Feb. 2015. SPIE\n \n\n\n\n
\n\n\n\n \n \n \"ClassificationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{Hu2015,\n  Title                    = {Classification of hyperspectral images based on conditional random fields},\n  Author                   = {Hu, Yang and Saber, Eli and Monteiro, Sildomar T. and Cahill, Nathan D. and Messinger, David W.},\n  Booktitle                = {IS\\&T/SPIE Electronic Imaging},\n  Year                     = {2015},\n\n  Address                  = {San Francisco, CA},\n  Month                    = {Feb.},\n  Organization             = {SPIE},\n  Pages                    = {940510--940518},\n  Volume                   = {9405},\n\n  Abstract                 = {A significant increase in the availability of high resolution hyperspectral images has led to the need for developing pertinent techniques in image analysis, such as classification. Hyperspectral images that are correlated spatially and spectrally provide ample information across the bands to benefit this purpose. Conditional Random Fields (CRFs) are discriminative models that carry several advantages over conventional techniques: no requirement of the independence assumption for observations, flexibility in defining local and pairwise potentials, and an independence between the modules of feature selection and parameter leaning. In this paper we present a framework for classifying remotely sensed imagery based on CRFs. We apply a Support Vector Machine (SVM) classifier to raw remotely sensed imagery data in order to generate more meaningful feature potentials to the CRFs model. This approach produces promising results when tested with publicly available AVIRIS Indian Pine imagery.},\n  Doi                      = {10.1117/12.2083374},\n  Gsid                     = {LgRImbQfgY4C},\n  Journal                  = {Proc. SPIE},\n  Owner                    = {stmeee},\n  Timestamp                = {2015.04.29},\n  Url                      = {http://proceedings.spiedigitallibrary.org/proceeding.aspx?articleid=2191108}\n}\n\n
\n
\n\n\n
\n A significant increase in the availability of high resolution hyperspectral images has led to the need for developing pertinent techniques in image analysis, such as classification. Hyperspectral images that are correlated spatially and spectrally provide ample information across the bands to benefit this purpose. Conditional Random Fields (CRFs) are discriminative models that carry several advantages over conventional techniques: no requirement of the independence assumption for observations, flexibility in defining local and pairwise potentials, and an independence between the modules of feature selection and parameter learning. In this paper we present a framework for classifying remotely sensed imagery based on CRFs. We apply a Support Vector Machine (SVM) classifier to raw remotely sensed imagery data in order to generate more meaningful feature potentials for the CRF model. This approach produces promising results when tested with the publicly available AVIRIS Indian Pines imagery.\n
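A rough sketch, assuming scikit-learn, of how per-pixel SVM class probabilities can be turned into unary potentials and smoothed with a simple Potts-style neighbourhood term; the full CRF training and inference used in the paper are not reproduced, and the array shapes, smoothing weight, and ICM-style update are assumptions:

import numpy as np
from sklearn.svm import SVC

def svm_unary_potentials(X_train, y_train, X_all):
    """X_*: (n_pixels, n_bands) spectra (assumed shapes). Returns unary energies."""
    svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)
    proba = svm.predict_proba(X_all)                 # (n_pixels, n_classes)
    return -np.log(proba + 1e-9)                     # unary energy = negative log-probability

def potts_smooth(unary, height, width, weight=1.0, iters=5):
    """Very simple ICM-style smoothing with a Potts pairwise term (illustration only)."""
    n_classes = unary.shape[1]
    labels = unary.argmin(axis=1).reshape(height, width)
    U = unary.reshape(height, width, n_classes)
    for _ in range(iters):
        for i in range(height):
            for j in range(width):
                cost = U[i, j].copy()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < height and 0 <= nj < width:
                        cost += weight * (np.arange(n_classes) != labels[ni, nj])
                labels[i, j] = cost.argmin()
    return labels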
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Editorial: Special issue on Alternative Sensing Techniques for Robot Perception.\n \n \n \n \n\n\n \n Peynot, T.; Monteiro, S. T.; Kelly, A.; and Devy, M.\n\n\n \n\n\n\n Journal of Field Robotics, 32(1): 1–2. Jan. 2015.\n \n\n\n\n
\n\n\n\n \n \n \"Editorial:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@Article{Peynot2015,\n  Title                    = {Editorial: Special issue on Alternative Sensing Techniques for Robot Perception},\n  Author                   = {Peynot, Thierry and Monteiro, Sildomar T. and Kelly, Alonzo and Devy, Michel},\n  Journal                  = {Journal of Field Robotics},\n  Year                     = {2015},\n\n  Month                    = {Jan.},\n  Number                   = {1},\n  Pages                    = {1--2},\n  Volume                   = {32},\n\n  Abstract                 = {Perception based on the most commonly used sensors such as visual cameras and laser range finders (or LIDARs) has enabled major achievements in field robotics. Stanley and Boss won DARPA Grand Challenges using almost exclusively cameras and LIDARs (albeit many of them). The Opportunity rover, using only cameras for navigation, recently broke the off-Earth driving record, with a distance of more than 40km. However, the capabilities of state-of-the-art robotic systems have remained largely restricted by the intrinsic limitations of these sensors. Boss' conquest of the DARPA Urban Challenge was temporarily compromised by the detection of a dust cloud by one of its LIDARs. Opportunity's glorious run came close to a premature ending when its cameras proved insufficient to predict that its wheels would get stuck in a sand dune that looked identical to many traversed thus far. These traditional sensors have often become insufficient when developing autonomous platforms that should operate for extended periods of time and in ever more challenging environments and conditions, e.g. limited visibility due to dust or fog. Complex and varied robotic applications may also require richer environment models than can be generated with only these traditional sensors. All these considerations have led to increased interest in alternative sensing modalities in recent years. To sense the environment, alternative sensors use physical principles that are distinct from those used by traditional robotic sensors, and may operate at various electromagnetic frequencies outside the visible spectrum. Examples include radars of various types, thermal cameras, hyperspectral and multispectral cameras, and sonars. The use of these sensors for robotic perception has allowed robots to operate in conditions that were previously infeasible and had to be avoided. Robots can navigate through smoke or at night with infrared imaging, or track roads in a dust storm using millimeter-wave radars. This sensory capability has also opened up a number of new robotic applications (e.g., automatic geological analysis using hyperspectral cameras, sonar mapping of oil rigs, eddy current inspection of nuclear boilers), in which alternative sensors are often combined with traditional robotic sensors to provide enhanced discrimination power and higher perception integrity. This special issue presents outstanding results on novel robotic perception concepts motivated by the use of alternative sensing in challenging applications and environments.},\n  Doi                      = {10.1002/rob.21564},\n  Gsid                     = {Ak0FvsSvgGUC},\n  ISSN                     = {1556-4967},\n  Owner                    = {stmeee},\n  Timestamp                = {2015.04.29},\n  Url                      = {http://onlinelibrary.wiley.com/doi/10.1002/rob.21564/epdf}\n}\n\n
\n
\n\n\n
\n Perception based on the most commonly used sensors such as visual cameras and laser range finders (or LIDARs) has enabled major achievements in field robotics. Stanley and Boss won DARPA Grand Challenges using almost exclusively cameras and LIDARs (albeit many of them). The Opportunity rover, using only cameras for navigation, recently broke the off-Earth driving record, with a distance of more than 40km. However, the capabilities of state-of-the-art robotic systems have remained largely restricted by the intrinsic limitations of these sensors. Boss' conquest of the DARPA Urban Challenge was temporarily compromised by the detection of a dust cloud by one of its LIDARs. Opportunity's glorious run came close to a premature ending when its cameras proved insufficient to predict that its wheels would get stuck in a sand dune that looked identical to many traversed thus far. These traditional sensors have often become insufficient when developing autonomous platforms that should operate for extended periods of time and in ever more challenging environments and conditions, e.g. limited visibility due to dust or fog. Complex and varied robotic applications may also require richer environment models than can be generated with only these traditional sensors. All these considerations have led to increased interest in alternative sensing modalities in recent years. To sense the environment, alternative sensors use physical principles that are distinct from those used by traditional robotic sensors, and may operate at various electromagnetic frequencies outside the visible spectrum. Examples include radars of various types, thermal cameras, hyperspectral and multispectral cameras, and sonars. The use of these sensors for robotic perception has allowed robots to operate in conditions that were previously infeasible and had to be avoided. Robots can navigate through smoke or at night with infrared imaging, or track roads in a dust storm using millimeter-wave radars. This sensory capability has also opened up a number of new robotic applications (e.g., automatic geological analysis using hyperspectral cameras, sonar mapping of oil rigs, eddy current inspection of nuclear boilers), in which alternative sensors are often combined with traditional robotic sensors to provide enhanced discrimination power and higher perception integrity. This special issue presents outstanding results on novel robotic perception concepts motivated by the use of alternative sensing in challenging applications and environments.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-dimensional representations of hyperspectral data for use in CRF-based classification.\n \n \n \n \n\n\n \n Hu, Y.; Cahill, N. D.; Monteiro, S. T.; Saber, E.; and Messinger, D. W.\n\n\n \n\n\n\n In Bruzzone, L., editor(s), Image and Signal Processing for Remote Sensing XXI, oct 2015. SPIE-Intl Soc Optical Eng\n \n\n\n\n
\n\n\n\n \n \n \"Low-dimensionalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{HuCahillMonteiroEtAl2015,\n  author    = {Yang Hu and Nathan D. Cahill and Sildomar T. Monteiro and Eli Saber and David W. Messinger},\n  title     = {Low-dimensional representations of hyperspectral data for use in {CRF}-based classification},\n  booktitle = {Image and Signal Processing for Remote Sensing {XXI}},\n  year      = {2015},\n  editor    = {Lorenzo Bruzzone},\n  publisher = {{SPIE}-Intl Soc Optical Eng},\n  month     = {oct},\n  doi       = {10.1117/12.2195229},\n  url       = {http://dx.doi.org/10.1117/12.2195229},\n  abstract  = {Probabilistic graphical models have strong potential for use in hyperspectral image classification. One important class of probabilisitic graphical models is the Conditional Random Field (CRF), which has distinct advantages over traditional Markov Random Fields (MRF), including: no independence assumption is made over the observation, and local and pairwise potential features can be defined with flexibility. Conventional methods for hyperspectral image classification utilize all spectral bands and assign the corresponding raw intensity values into the feature functions in CRFs. These methods, however, require significant computational efforts and yield an ambiguous summary from the data. To mitigate these problems, we propose a novel processing method for hyperspectral image classification by incorporating a lower dimensional representation into the CRFs. In this paper, we use representations based on three types of graph-based dimensionality reduction algorithms: Laplacian Eigemaps (LE), Spatial-Spectral Schroedinger Eigenmaps (SSSE), and Local Linear Embedding (LLE), and we investigate the impact of choice of representation on the subsequent CRF-based classifications.},\n  gsid      = {4X0JR2_MtJMC},\n}\n\n
\n
\n\n\n
\n Probabilistic graphical models have strong potential for use in hyperspectral image classification. One important class of probabilistic graphical models is the Conditional Random Field (CRF), which has distinct advantages over traditional Markov Random Fields (MRF), including: no independence assumption is made over the observation, and local and pairwise potential features can be defined with flexibility. Conventional methods for hyperspectral image classification utilize all spectral bands and assign the corresponding raw intensity values into the feature functions in CRFs. These methods, however, require significant computational effort and yield an ambiguous summary of the data. To mitigate these problems, we propose a novel processing method for hyperspectral image classification by incorporating a lower dimensional representation into the CRFs. In this paper, we use representations based on three types of graph-based dimensionality reduction algorithms: Laplacian Eigenmaps (LE), Spatial-Spectral Schroedinger Eigenmaps (SSSE), and Locally Linear Embedding (LLE), and we investigate the impact of the choice of representation on the subsequent CRF-based classifications.\n
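For orientation, two of the three embeddings named above are available in scikit-learn (Laplacian Eigenmaps via SpectralEmbedding, and LLE); SSSE has no off-the-shelf implementation and is omitted here. A minimal sketch with assumed data shapes:

import numpy as np
from sklearn.manifold import SpectralEmbedding, LocallyLinearEmbedding

# pixels: (n_pixels, n_bands) hyperspectral spectra (placeholder values).
pixels = np.random.rand(500, 200)

le = SpectralEmbedding(n_components=10, n_neighbors=20).fit_transform(pixels)    # Laplacian Eigenmaps
lle = LocallyLinearEmbedding(n_components=10, n_neighbors=20).fit_transform(pixels)
# Either low-dimensional representation can replace the raw bands as CRF feature input.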
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An approach for combining airborne LiDAR and high-resolution aerial color imagery using Gaussian processes.\n \n \n \n \n\n\n \n Liu, Y.; Monteiro, S. T.; and Saber, E.\n\n\n \n\n\n\n In Bruzzone, L., editor(s), Image and Signal Processing for Remote Sensing XXI, oct 2015. SPIE-Intl Soc Optical Eng\n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{LiuMonteiroSaber2015,\n  author    = {Yansong Liu and Sildomar T. Monteiro and Eli Saber},\n  title     = {An approach for combining airborne {LiDAR} and high-resolution aerial color imagery using {Gauss}ian processes},\n  booktitle = {Image and Signal Processing for Remote Sensing {XXI}},\n  year      = {2015},\n  editor    = {Lorenzo Bruzzone},\n  publisher = {{SPIE}-Intl Soc Optical Eng},\n  month     = {oct},\n  doi       = {10.1117/12.2194096},\n  url       = {http://dx.doi.org/10.1117/12.2194096},\n  abstract  = {Changes in vegetation cover, building construction, road network and traffic conditions caused by urban expansion affect the human habitat as well as the natural environment in rapidly developing cities. It is crucial to assess these changes and respond accordingly by identifying man-made and natural structures with accurate classification algorithms. With the increase in use of multi-sensor remote sensing systems, researchers are able to obtain a more complete description of the scene of interest. By utilizing multi-sensor data, the accuracy of classification algorithms can be improved. In this paper, we propose a method for combining 3D LiDAR point clouds and high-resolution color images to classify urban areas using Gaussian processes (GP). GP classification is a powerful non-parametric classification method that yields probabilistic classification results. It makes predictions in a way that addresses the uncertainty of real world. In this paper, we attempt to identify man-made and natural objects in urban areas including buildings, roads, trees, grass, water and vehicles. LiDAR features are derived from the 3D point clouds and the spatial and color features are extracted from RGB images. For classification, we use the Laplacian approximation for GP binary classification on the new combined feature space. The multiclass classification has been implemented by using one-vs-all binary classification strategy. The result of applying support vector machines (SVMs) and logistic regression (LR) classifier is also provided for comparison. Our experiments show a clear improvement of classification results by using the two sensors combined instead of each sensor separately. Also we found the advantage of applying GP approach to handle the uncertainty in classification result without compromising accuracy compared to SVM, which is considered as the state-of-the-art classification method.},\n  gsid      = {sJsF-0ZLhtgC},\n}\n\n
\n
\n\n\n
\n Changes in vegetation cover, building construction, road network and traffic conditions caused by urban expansion affect the human habitat as well as the natural environment in rapidly developing cities. It is crucial to assess these changes and respond accordingly by identifying man-made and natural structures with accurate classification algorithms. With the increase in use of multi-sensor remote sensing systems, researchers are able to obtain a more complete description of the scene of interest. By utilizing multi-sensor data, the accuracy of classification algorithms can be improved. In this paper, we propose a method for combining 3D LiDAR point clouds and high-resolution color images to classify urban areas using Gaussian processes (GP). GP classification is a powerful non-parametric classification method that yields probabilistic classification results. It makes predictions in a way that addresses the uncertainty of the real world. In this paper, we attempt to identify man-made and natural objects in urban areas including buildings, roads, trees, grass, water and vehicles. LiDAR features are derived from the 3D point clouds, and the spatial and color features are extracted from RGB images. For classification, we use the Laplace approximation for GP binary classification on the new combined feature space. The multiclass classification is implemented using a one-vs-all binary classification strategy. The results of applying support vector machine (SVM) and logistic regression (LR) classifiers are also provided for comparison. Our experiments show a clear improvement of classification results by using the two sensors combined instead of each sensor separately. We also found that the GP approach handles the uncertainty in the classification result without compromising accuracy compared to the SVM, which is considered a state-of-the-art classification method.\n
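A minimal sketch of the classification strategy described above using scikit-learn's GaussianProcessClassifier, which applies the Laplace approximation for binary GP classification and a one-vs-rest scheme for multiclass; the feature arrays, kernel, and class count below are placeholders:

import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

# Placeholder per-pixel features: RGB/spatial descriptors and LiDAR-derived descriptors (assumed shapes).
rgb_feats = np.random.rand(300, 5)
lidar_feats = np.random.rand(300, 3)
labels = np.random.randint(0, 6, 300)                # e.g. buildings, roads, trees, grass, water, vehicles

X = np.hstack([rgb_feats, lidar_feats])              # combined feature space
gpc = GaussianProcessClassifier(kernel=RBF(1.0), multi_class="one_vs_rest")
gpc.fit(X, labels)
probs = gpc.predict_proba(X)                         # probabilistic class predictions per pixel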
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust stock value prediction using support vector machines with particle swarm optimization.\n \n \n \n \n\n\n \n \n\n\n \n\n\n\n In 2015 IEEE Congress on Evolutionary Computation (CEC), may 2015. Institute of Electrical & Electronics Engineers (IEEE)\n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
\n
\n\n\n
\n Attempting to understand and characterize trends in the stock market has been the goal of numerous market analysts, but these patterns are often difficult to detect until after they have been firmly established. Recently, attempts have been made by both large companies and individual investors to utilize intelligent analysis and trading algorithms to identify potential trends before they occur in the market environment, effectively predicting future stock values and outlooks. In this paper, three different classification algorithms will be compared for the purposes of maximizing capital while minimizing risk to the investor. The main contribution of this work is a demonstrated improvement over other prediction methods using machine learning; the results show that tuning support vector machine parameters with particle swarm optimization leads to highly accurate (approximately 95%) and robust stock forecasting for historical datasets.\n
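A compact sketch of the tuning loop described above, assuming a particle swarm that searches the (C, gamma) space of an RBF SVM and scores each particle by cross-validation; the swarm size, inertia and acceleration constants, and synthetic data are assumptions, not the paper's setup:

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=10, random_state=0)  # stand-in for stock features

def fitness(params):                                  # params = (log10 C, log10 gamma)
    svc = SVC(C=10 ** params[0], gamma=10 ** params[1], kernel="rbf")
    return cross_val_score(svc, X, y, cv=5).mean()

rng = np.random.default_rng(0)
n_particles, dims = 12, 2
pos = rng.uniform(-3, 3, (n_particles, dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()]

for _ in range(15):                                   # PSO iterations
    r1, r2 = rng.random((2, n_particles, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -3, 3)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()]

print("best C=%.3g, gamma=%.3g" % (10 ** gbest[0], 10 ** gbest[1]))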
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Comparing inference methods for conditional random fields for hyperspectral image classification.\n \n \n \n\n\n \n Hu, Y.; Monteiro, S. T.; and Saber, E.\n\n\n \n\n\n\n In Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), pages 1–4, 2015. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{Hu2015a,\n  author    = {Hu, Yang and Monteiro, Sildomar T. and Saber, Eli},\n  title     = {{Comparing inference methods for conditional random fields for hyperspectral image classification}},\n  booktitle = {Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)},\n  year      = {2015},\n  pages     = {1--4},\n  abstract  = {Classification of hyperspectral images is an important method for various object-based-analysis applications in remote sensing. We propose a two-level learning algorithm combining Support Vector Machines (SVMs) and Conditional Random Fields (CRFs) to achieve accurate classification of hyperspectral images. The hyperspectral data is initially processed by SVMs into a local, pixel-based classification which serves as the observations in the CRF model for generating unary and pairwise potentials. Three inference algorithms are compared in the CRF inference procedure: mean field, tree-reweighted belief propagation, and loopy belief propagation. This two-step algorithm is tested with the publicly available AVIRIS Indian Pines data set, and results from the three inference methods are discussed.},\n}\n\n
\n
\n\n\n
\n Classification of hyperspectral images is an important method for various object-based-analysis applications in remote sensing. We propose a two-level learning algorithm combining Support Vector Machines (SVMs) and Conditional Random Fields (CRFs) to achieve accurate classification of hyperspectral images. The hyperspectral data is initially processed by SVMs into a local, pixel-based classification which serves as the observations in the CRF model for generating unary and pairwise potentials. Three inference algorithms are compared in the CRF inference procedure: mean field, tree-reweighted belief propagation, and loopy belief propagation. This two-step algorithm is tested with the publicly available AVIRIS Indian Pines data set, and results from the three inference methods are discussed.\n
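As a reference point for one of the three inference schemes compared above, a naive mean-field update for a 4-connected Potts CRF can be written in a few lines; the pairwise weight and neighbourhood structure here are assumptions:

import numpy as np

def mean_field_potts(unary, height, width, weight=1.0, iters=10):
    """Naive mean-field updates for a 4-connected Potts CRF (illustration only).

    unary: (height*width, n_classes) energies, e.g. negative log SVM probabilities.
    Returns per-pixel marginal beliefs q of shape (height, width, n_classes).
    """
    n_classes = unary.shape[1]
    U = unary.reshape(height, width, n_classes)
    q = np.exp(-U)
    q /= q.sum(axis=2, keepdims=True)
    for _ in range(iters):
        # Expected number of disagreeing neighbours under the current beliefs.
        msg = np.zeros_like(q)
        msg[1:, :] += 1.0 - q[:-1, :]
        msg[:-1, :] += 1.0 - q[1:, :]
        msg[:, 1:] += 1.0 - q[:, :-1]
        msg[:, :-1] += 1.0 - q[:, 1:]
        q = np.exp(-(U + weight * msg))
        q /= q.sum(axis=2, keepdims=True)
    return q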
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2014\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Consistency of Measurements of Wavelength Position From Hyperspectral Imagery: Use of the Ferric Iron Crystal Field Absorption at 900 nm as an Indicator of Mineralogy.\n \n \n \n \n\n\n \n Murphy, R. J.; Schneider, S.; and Monteiro, S. T.\n\n\n \n\n\n\n IEEE Transactions on Geoscience and Remote Sensing, 52(5): 2843–2857. May 2014.\n \n\n\n\n
\n\n\n\n \n \n \"ConsistencyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@Article{Murphy2013,\n  Title                    = {Consistency of Measurements of Wavelength Position From Hyperspectral Imagery: Use of the Ferric Iron Crystal Field Absorption at 900 nm as an Indicator of Mineralogy},\n  Author                   = {Murphy, Richard J. and Schneider, Sven and Monteiro, Sildomar T.},\n  Journal                  = {IEEE Transactions on Geoscience and Remote Sensing},\n  Year                     = {2014},\n\n  Month                    = {May},\n  Number                   = {5},\n  Pages                    = {2843--2857},\n  Volume                   = {52},\n\n  Abstract                 = {Several environmental and sensor effects make the determination of the wavelength position of absorption features in the visible near infrared (VNIR) (400-1200 nm) from hyperspectral imagery more difficult than from nonimaging spectrometers. To evaluate this, we focus on the ferric iron crystal field absorption, located at about 900 nm (F900), because it is impacted by both environmental and sensor effects. The consistency with which the wavelength position of F900 can be determined from imagery acquired in laboratory and field settings is evaluated under artificial and natural illumination, respectively. The wavelength position of F900, determined from laboratory imagery, is also evaluated as an indicator of the proportion of goethite in mixtures of crushed rock. Results are compared with those from a high-resolution field spectrometer. Images describing the wavelength position of F900 showed large amounts of spatial variability and contained an artifact-a consistent shift in the wavelength position of F900 to longer wavelengths. These effects were greatly reduced or removed when wavelength position was determined from a polynomial fit to the data, enabling wavelength position to be used to map hematite and goethite in samples of ore and on a vertical surface (a mine face). The wavelength position of F900 from a polynomial fit was strongly positively correlated with the proportion of goethite (R2=0.97). Taken together, these findings indicate that the wavelength position of absorption features from VNIR imagery should be determined from a polynomial (or equivalent) fit to the original data and not from the original data themselves.},\n  Doi                      = {10.1109/TGRS.2013.2266672},\n  Gsid                     = {L8Ckcad2t8MC},\n  Keywords                 = {Geology, hyperspectral sensors, image classification, infrared spectroscopy, minerals, mining industry, polynomials, remote sensing, signal processing, spectral analysis, terrain mapping},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Murphy_TGRS_2013.pdf}\n}\n\n
\n
\n\n\n
\n Several environmental and sensor effects make the determination of the wavelength position of absorption features in the visible near infrared (VNIR) (400-1200 nm) from hyperspectral imagery more difficult than from nonimaging spectrometers. To evaluate this, we focus on the ferric iron crystal field absorption, located at about 900 nm (F900), because it is impacted by both environmental and sensor effects. The consistency with which the wavelength position of F900 can be determined from imagery acquired in laboratory and field settings is evaluated under artificial and natural illumination, respectively. The wavelength position of F900, determined from laboratory imagery, is also evaluated as an indicator of the proportion of goethite in mixtures of crushed rock. Results are compared with those from a high-resolution field spectrometer. Images describing the wavelength position of F900 showed large amounts of spatial variability and contained an artifact: a consistent shift in the wavelength position of F900 to longer wavelengths. These effects were greatly reduced or removed when wavelength position was determined from a polynomial fit to the data, enabling wavelength position to be used to map hematite and goethite in samples of ore and on a vertical surface (a mine face). The wavelength position of F900 from a polynomial fit was strongly positively correlated with the proportion of goethite (R2=0.97). Taken together, these findings indicate that the wavelength position of absorption features from VNIR imagery should be determined from a polynomial (or equivalent) fit to the original data and not from the original data themselves.\n
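A minimal sketch of reading a wavelength position off a polynomial fitted around the 900 nm feature, assuming a synthetic spectrum, a fourth-order polynomial, and an 800-1000 nm fitting window (all illustrative choices, not the paper's exact settings):

import numpy as np

# Synthetic reflectance spectrum with an absorption trough near 900 nm (illustration only).
wl = np.arange(700, 1101, 5).astype(float)
refl = 0.4 - 0.08 * np.exp(-((wl - 905.0) ** 2) / (2 * 40.0 ** 2)) + np.random.normal(0, 0.002, wl.size)

# Fit a polynomial over the feature and take its minimum as the wavelength position.
mask = (wl >= 800) & (wl <= 1000)
coef = np.polyfit(wl[mask], refl[mask], deg=4)
fine = np.linspace(800, 1000, 2001)
position = fine[np.polyval(coef, fine).argmin()]
print("F900 wavelength position: %.1f nm" % position)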
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Mapping Layers of Clay in a Vertical Geological Surface Using Hyperspectral Imagery: Variability in Parameters of SWIR Absorption Features under Different Conditions of Illumination.\n \n \n \n\n\n \n \n\n\n \n\n\n\n Remote Sensing, 6(9): 9104–9129. Sep. 2014.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
\n
\n\n\n
\n Hyperspectral imagery of a vertical mine face acquired from a field-based platform is used to evaluate the effects of different conditions of illumination on the absorption feature parameters: wavelength position, depth, and width. Imagery was acquired at different times of the day under direct solar illumination and under diffuse illumination imposed by cloud cover. Imagery acquired under direct solar illumination did not show large amounts of variability in any absorption feature parameter; however, imagery acquired under cloud caused changes in absorption feature parameters. These included the introduction of a spurious absorption feature at wavelengths > 2250 nm and a shifting of the wavelength position of specific clay absorption features to longer or shorter wavelengths. Absorption feature depth increased. The spatial patterns of clay absorption in imagery acquired under similar conditions of direct illumination were preserved, but not in imagery acquired under cloud. Kaolinite, ferruginous smectite and nontronite were identified and mapped on the mine face. Results were validated by comparing them with predictions from X-ray diffraction and laboratory hyperspectral imagery of samples acquired from the mine face. These results have implications for the collection of hyperspectral data from field-based platforms.\n
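One common way to compute the three feature parameters mentioned above (wavelength position, depth, and width) is continuum removal between the feature shoulders; the abstract does not spell out the exact procedure, so the shoulder wavelengths and synthetic spectrum below are assumptions:

import numpy as np

# Synthetic SWIR spectrum with a clay absorption near 2200 nm (illustration only).
wl = np.arange(2100, 2301, 2).astype(float)
refl = 0.5 - 0.12 * np.exp(-((wl - 2205.0) ** 2) / (2 * 20.0 ** 2))

# Straight-line continuum between assumed shoulders, then continuum removal.
left, right = 2120.0, 2280.0
i0, i1 = np.searchsorted(wl, [left, right])
cont = np.interp(wl, [wl[i0], wl[i1]], [refl[i0], refl[i1]])
cr = refl / cont

position = wl[cr.argmin()]                        # wavelength position
depth = 1.0 - cr.min()                            # feature depth
half = 1.0 - depth / 2.0
width = wl[cr <= half][-1] - wl[cr <= half][0]    # full width at half depth
print(position, depth, width)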
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2013\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Evaluating techniques for learning a feedback controller for low-cost manipulators.\n \n \n \n \n\n\n \n Cliff, O. M.; and Monteiro, S. T.\n\n\n \n\n\n\n In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages 704–709, Tokyo, Japan, Nov. 2013. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"EvaluatingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{Cliff2013,\n  Title                    = {Evaluating techniques for learning a feedback controller for low-cost manipulators},\n  Author                   = {Cliff, Oliver M. and Monteiro, Sildomar T.},\n  Booktitle                = {IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},\n  Year                     = {2013},\n\n  Address                  = {Tokyo, Japan},\n  Month                    = {Nov.},\n  Organization             = {IEEE},\n  Pages                    = {704--709},\n\n  Abstract                 = {Robust manipulation with tractability in unstructured environments is a prominent hurdle in robotics. Learning algorithms to control robotic arms have introduced elegant solutions to the complexities faced in such systems. A novel method of Reinforcement Learning (RL), Gaussian Process Dynamic Programming (GPDP), yields promising results for closed-loop control of a low-cost manipulator however research surrounding most RL techniques lack breadth of comparable experiments into the viability of particular learning techniques on equivalent environments. We introduce several model-based learning agents as mechanisms to control a noisy, low-cost robotic system. The agents were tested in a simulated domain for learning closed-loop policies of a simple task with no prior information. Then, the fidelity of the simulations is confirmed by application of GPDP to a physical system.},\n  Doi                      = {10.1109/IROS.2013.6696428},\n  Gsid                     = {BJbdYPG6LGMC},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Oliver_IROS_2013.pdf}\n}\n\n
\n
\n\n\n
\n Robust manipulation with tractability in unstructured environments is a prominent hurdle in robotics. Learning algorithms to control robotic arms have introduced elegant solutions to the complexities faced in such systems. A novel method of Reinforcement Learning (RL), Gaussian Process Dynamic Programming (GPDP), yields promising results for closed-loop control of a low-cost manipulator; however, research surrounding most RL techniques lacks breadth of comparable experiments into the viability of particular learning techniques on equivalent environments. We introduce several model-based learning agents as mechanisms to control a noisy, low-cost robotic system. The agents were tested in a simulated domain for learning closed-loop policies of a simple task with no prior information. Then, the fidelity of the simulations is confirmed by application of GPDP to a physical system.\n
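The sketch below is not GPDP itself; it only illustrates the general model-based pattern evaluated above (learn a probabilistic dynamics model from transitions, then choose actions by querying that model) on a toy one-dimensional system, with every quantity an illustrative assumption:

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

# Toy 1D "joint" whose true dynamics are unknown to the agent: x' = x + 0.1*a + noise (illustration only).
def step(x, a):
    return x + 0.1 * a + np.random.normal(0, 0.01)

rng = np.random.default_rng(0)
X, Y = [], []
x = 0.0
for _ in range(100):                               # collect random transitions
    a = rng.uniform(-1, 1)
    x_next = step(x, a)
    X.append([x, a]); Y.append(x_next)
    x = x_next

model = GaussianProcessRegressor().fit(np.array(X), np.array(Y))   # learned GP dynamics model

def greedy_action(x, target=1.0, candidates=np.linspace(-1, 1, 21)):
    """Pick the action whose predicted next state is closest to the target."""
    preds = model.predict(np.column_stack([np.full(candidates.size, x), candidates]))
    return candidates[np.argmin((preds - target) ** 2)]

x = 0.0
for _ in range(30):                                # roll out the learned controller
    x = step(x, greedy_action(x))
print("final state:", x)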
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n W-band maritime target classification using high resolution range profiles.\n \n \n \n \n\n\n \n Jasinski, T.; Antipov, I.; Monteiro, S. T.; and Brooker, G.\n\n\n \n\n\n\n In International Conference on Radar (RADAR), pages 356–361, Adelaide, Australia, Sep. 2013. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"W-bandPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@InProceedings{jasinski2013w,\n  Title                    = {W-band maritime target classification using high resolution range profiles},\n  Author                   = {Jasinski, Tomasz and Antipov, Irina and Monteiro, Sildomar T. and Brooker, Graham},\n  Booktitle                = {International Conference on Radar (RADAR)},\n  Year                     = {2013},\n\n  Address                  = {Adelaide, Australia},\n  Month                    = {Sep.},\n  Organization             = {IEEE},\n  Pages                    = {356--361},\n\n  Abstract                 = {A W-band radar model was developed and tested using six point-scatterer maritime targets to assess the performance of four classifiers: traditional maximum correlation, naive Bayes, polynomial kernel support vector machine (SVM) and radial basis function (RBF) SVM. W-band is characterized by scintillations caused by subtle aspect and range changes, making classification potentially problematic. High resolution range profiles (HRRPs) were used as the feature vectors with minimal pre-processing. Training and test datasets were generated by rotating the targets in the azimuth plane. A receiver operating characteristic (ROC) analysis was conducted, as well as precision, recall, and accuracy measures derived, and confusion matrices obtained for each classifier under a specific operating point. It was found that the traditional correlation approach performed best under the given circumstances, closely followed by the two SVM approaches and naive Bayes. It was also found that different classifiers were better suited to classifying particular targets.},\n  Doi                      = {10.1109/RADAR.2013.6652013},\n  Gsid                     = {qUcmZB5y_30C},\n  Keywords                 = {W-band, radar, classification, ATR, maritime},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Jasinski_RARDAR_2013.pdf}\n}\n\n
\n
\n\n\n
\n A W-band radar model was developed and tested using six point-scatterer maritime targets to assess the performance of four classifiers: traditional maximum correlation, naive Bayes, polynomial kernel support vector machine (SVM) and radial basis function (RBF) SVM. W-band is characterized by scintillations caused by subtle aspect and range changes, making classification potentially problematic. High resolution range profiles (HRRPs) were used as the feature vectors with minimal pre-processing. Training and test datasets were generated by rotating the targets in the azimuth plane. A receiver operating characteristic (ROC) analysis was conducted, as well as precision, recall, and accuracy measures derived, and confusion matrices obtained for each classifier under a specific operating point. It was found that the traditional correlation approach performed best under the given circumstances, closely followed by the two SVM approaches and naive Bayes. It was also found that different classifiers were better suited to classifying particular targets.\n
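A minimal sketch of the baseline maximum-correlation classifier: correlate a test range profile against stored class templates and return the best-matching class; template construction and array shapes are assumptions:

import numpy as np

def max_correlation_classify(test_profile, templates):
    """templates: dict class_name -> (n_templates, n_range_bins) array of training HRRPs."""
    x = (test_profile - test_profile.mean()) / (test_profile.std() + 1e-12)
    best_class, best_corr = None, -np.inf
    for cls, profs in templates.items():
        p = (profs - profs.mean(axis=1, keepdims=True)) / (profs.std(axis=1, keepdims=True) + 1e-12)
        corr = (p @ x) / x.size                     # normalised correlation at zero lag
        if corr.max() > best_corr:
            best_class, best_corr = cls, corr.max()
    return best_class, best_corr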
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Submodular volume simplex analysis: A greedy algorithm for hyperspectral unmixing.\n \n \n \n \n\n\n \n Lee, S. H.; Monteiro, S. T.; and Scheding, S. J.\n\n\n \n\n\n\n In IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), pages 1–4, Gainesville, FL, Jun. 2013. \n \n\n\n\n
\n\n\n\n \n \n \"SubmodularPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@InProceedings{lee2013submodular,\n  Title                    = {Submodular volume simplex analysis: A greedy algorithm for hyperspectral unmixing},\n  Author                   = {Lee, Seong H. and Monteiro, Sildomar T. and Scheding, Steven J.},\n  Booktitle                = {IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)},\n  Year                     = {2013},\n\n  Address                  = {Gainesville, FL},\n  Month                    = {Jun.},\n  Pages                    = {1--4},\n\n  Abstract                 = {The hyperspectral unmixing problem can be formulated as a combinatorial optimization which selects the spectral vectors that maximize the volume of a simplex, with the assumptions that the dataset contain pure pixels and the mixture is linear. Submodularity presents an intuitive diminishing returns property which arises naturally in discrete and combinatorial optimization problems. Submodular functions enable the application of fast, greedy algorithms which possess near optimal approximation guarantees. This paper proposes a submodular greedy-based approach for solving the spectral unmixing problem by modifying the objective function to become a non-decreasing submodular function. Theoretical and experimental results are presented to demonstrate the feasibility of the method.},\n  Gsid                     = {IWHjjKOFINEC},\n  Keywords                 = {Submodular optimization, greedy algorithm, hyperspectral unmixing, endmember extraction},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Lee_WHISPERS_2013.pdf}\n}\n\n
\n
\n\n\n
\n The hyperspectral unmixing problem can be formulated as a combinatorial optimization which selects the spectral vectors that maximize the volume of a simplex, with the assumptions that the dataset contains pure pixels and that the mixture is linear. Submodularity presents an intuitive diminishing returns property which arises naturally in discrete and combinatorial optimization problems. Submodular functions enable the application of fast, greedy algorithms which possess near-optimal approximation guarantees. This paper proposes a submodular greedy-based approach for solving the spectral unmixing problem by modifying the objective function to become a non-decreasing submodular function. Theoretical and experimental results are presented to demonstrate the feasibility of the method.\n
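A rough sketch of the generic greedy scheme underlying this approach: at each step, add the pixel spectrum that most increases a Gram-determinant proxy for simplex volume. This is the plain greedy selection, not the paper's exact submodular surrogate objective:

import numpy as np

def simplex_volume(E):
    """Squared-volume proxy for the simplex spanned by the endmember columns of E."""
    if E.shape[1] < 2:
        return 0.0
    D = E[:, 1:] - E[:, [0]]                         # edge vectors from the first vertex
    return float(np.linalg.det(D.T @ D))             # Gram determinant (proportional to squared volume)

def greedy_endmembers(pixels, k):
    """pixels: (n_bands, n_pixels); returns indices of k greedily chosen endmembers."""
    chosen = [int(np.argmax(np.linalg.norm(pixels, axis=0)))]   # seed: largest-norm pixel
    while len(chosen) < k:
        best_i, best_v = None, -np.inf
        for i in range(pixels.shape[1]):
            if i in chosen:
                continue
            v = simplex_volume(pixels[:, chosen + [i]])
            if v > best_v:
                best_i, best_v = i, v
        chosen.append(best_i)
    return chosen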
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Combining strong features for registration of hyperspectral and lidar data from field-based platforms.\n \n \n \n \n\n\n \n Monteiro, S. T.; Nieto, J.; Murphy, R.; Ramakrishnan, R.; and Taylor, Z.\n\n\n \n\n\n\n In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 1210–1213, Melbourne, Australia, Jul. 2013. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"CombiningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@InProceedings{monteiro2013combining,\n  Title                    = {Combining strong features for registration of hyperspectral and lidar data from field-based platforms},\n  Author                   = {Monteiro, Sildomar T. and Nieto, Juan and Murphy, Richard and Ramakrishnan, Rishi and Taylor, Zachary},\n  Booktitle                = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},\n  Year                     = {2013},\n\n  Address                  = {Melbourne, Australia},\n  Month                    = {Jul.},\n  Organization             = {IEEE},\n  Pages                    = {1210--1213},\n\n  Abstract                 = {This paper presents an approach to automatically register hyperspectral images with lidar point clouds using a combination of SIFT and SURF feature descriptors. The aim is to generate 3D terrain maps of the environment combining spectral and geometrical information. The datasets are acquired from field-based platforms which, due to the lack of georeferencing, cannot be simply fused and require a registration processing step. In addition, some applications, such as in mining, cannot rely on reliable GPS signal. The proposed method is validated using experimental data acquired from vertical mine walls.},\n  Doi                      = {10.1109/IGARSS.2013.6722997},\n  Gsid                     = {ZeXyd9-uunAC},\n  Keywords                 = {Hyperspectral imaging, feature extraction, image registration, sensor fusion, terrain mapping},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_IGARSS_2013.pdf}\n}\n\n
\n
\n\n\n
\n This paper presents an approach to automatically register hyperspectral images with lidar point clouds using a combination of SIFT and SURF feature descriptors. The aim is to generate 3D terrain maps of the environment combining spectral and geometrical information. The datasets are acquired from field-based platforms which, due to the lack of georeferencing, cannot be simply fused and require a registration processing step. In addition, some applications, such as mining, cannot rely on a reliable GPS signal. The proposed method is validated using experimental data acquired from vertical mine walls.\n
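A minimal sketch of feature-based registration with OpenCV, assuming a single hyperspectral band and a lidar intensity image rendered as 8-bit files; only SIFT is used here (SURF is not bundled with stock OpenCV builds), and the file names and ratio-test threshold are assumptions:

import cv2
import numpy as np

# Assumed inputs: one hyperspectral band and a lidar intensity image saved as grayscale images.
hyper = cv2.imread("hyperspectral_band.png", cv2.IMREAD_GRAYSCALE)
lidar = cv2.imread("lidar_intensity.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
k1, d1 = sift.detectAndCompute(hyper, None)
k2, d2 = sift.detectAndCompute(lidar, None)

matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test

src = np.float32([k1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([k2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)            # registration transform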
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Mapping the distribution of ferric iron minerals on a vertical mine face using derivative analysis of hyperspectral imagery (430-970 nm).\n \n \n \n \n\n\n \n Murphy, R. J.; and Monteiro, S. T.\n\n\n \n\n\n\n ISPRS Journal of Photogrammetry and Remote Sensing, 75(0): 29–39. Jan. 2013.\n \n\n\n\n
\n\n\n\n \n \n \"MappingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@Article{Murphy2013a,\n  Title                    = {Mapping the distribution of ferric iron minerals on a vertical mine face using derivative analysis of hyperspectral imagery (430-970 nm)},\n  Author                   = {Richard J. Murphy and Sildomar T. Monteiro},\n  Journal                  = {{ISPRS} Journal of Photogrammetry and Remote Sensing},\n  Year                     = {2013},\n\n  Month                    = {Jan.},\n  Number                   = {0},\n  Pages                    = {29--39},\n  Volume                   = {75},\n\n  Abstract                 = {Hyperspectral imagery is used to map the distribution of iron and separate iron ore from shale (a waste product) on a vertical mine face in an open-pit mine in the Pilbara, Western Australia. Vertical mine faces have complex surface geometries which cause large spatial variations in the amount of incident and reflected light. Methods used to analyse imagery must minimise these effects whilst preserving any spectral variations between rock types and minerals. Derivative analysis of spectra to the 1st-, 2nd- and 4th-order is used to do this. To quantify the relative amounts and distribution of iron, the derivative spectrum is integrated across the visible and near infrared spectral range (430-970 nm) and over those wavelength regions containing individual peaks and troughs associated with specific iron absorption features. As a test of this methodology, results from laboratory spectra acquired from representative rock samples were compared with total amounts of iron minerals from X-ray diffraction (XRD) analysis. Relationships between derivatives integrated over the visible near-infrared range and total amounts (% weight) of iron minerals were strongest for the 4th- and 2nd-derivative (R2= 0.77 and 0.74, respectively) and weakest for the 1st-derivative (R2= 0.56). Integrated values of individual peaks and troughs showed moderate to strong relationships in 2nd- (R2= 0.68-0.78) and 4th-derivative (R2= 0.49-0.78) spectra. The weakest relationships were found for peaks or troughs towards longer wavelengths. The same derivative methods were then applied to imagery to quantify relative amounts of iron minerals on a mine face. Before analyses, predictions were made about the relative abundances of iron in the different geological zones on the mine face, as mapped from field surveys. Integration of the whole spectral curve (430-970 nm) from the 2nd- and 4th-derivative gave results which were entirely consistent with predictions. Conversely, integration of the 1st-derivative gave results that did not fit with predictions nor distinguish between zones with very large and small amounts of iron oxide. Classified maps of ore and shale were created using a simple level-slice of the 1st-derivative reflectance at 702, 765 and 809 nm. Pixels classified as shale showed a similar distribution to kaolinite (an indicator of shales in the region), as mapped by the depth of the diagnostic kaolinite absorption feature at 2196 nm. Standard statistical measures of classification performance (accuracy, precision, recall and the Kappa coefficient of agreement) indicated that nearly all of the pixels were classified correctly using 1st-derivative reflectance at 765 and 809 nm. 
These results indicate that data from the VNIR (430-970 nm) can be used to quantify, without a priori knowledge, the total amount of iron minerals and to distinguish ore from shale on vertical mine faces.},\n  Doi                      = {10.1016/j.isprsjprs.2012.09.014},\n  Gsid                     = {7PzlFSSx8tAC},\n  ISSN                     = {0924-2716},\n  Keywords                 = {Mining, Iron ore, Remote sensing, Hyperspectral, Derivative analysis, Banded iron formation},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Murphy_JISPRS_2013.pdf}\n}\n\n
\n
\n\n\n
\n Hyperspectral imagery is used to map the distribution of iron and separate iron ore from shale (a waste product) on a vertical mine face in an open-pit mine in the Pilbara, Western Australia. Vertical mine faces have complex surface geometries which cause large spatial variations in the amount of incident and reflected light. Methods used to analyse imagery must minimise these effects whilst preserving any spectral variations between rock types and minerals. Derivative analysis of spectra to the 1st-, 2nd- and 4th-order is used to do this. To quantify the relative amounts and distribution of iron, the derivative spectrum is integrated across the visible and near infrared spectral range (430-970 nm) and over those wavelength regions containing individual peaks and troughs associated with specific iron absorption features. As a test of this methodology, results from laboratory spectra acquired from representative rock samples were compared with total amounts of iron minerals from X-ray diffraction (XRD) analysis. Relationships between derivatives integrated over the visible near-infrared range and total amounts (% weight) of iron minerals were strongest for the 4th- and 2nd-derivative (R2= 0.77 and 0.74, respectively) and weakest for the 1st-derivative (R2= 0.56). Integrated values of individual peaks and troughs showed moderate to strong relationships in 2nd- (R2= 0.68-0.78) and 4th-derivative (R2= 0.49-0.78) spectra. The weakest relationships were found for peaks or troughs towards longer wavelengths. The same derivative methods were then applied to imagery to quantify relative amounts of iron minerals on a mine face. Before analyses, predictions were made about the relative abundances of iron in the different geological zones on the mine face, as mapped from field surveys. Integration of the whole spectral curve (430-970 nm) from the 2nd- and 4th-derivative gave results which were entirely consistent with predictions. Conversely, integration of the 1st-derivative gave results that did not fit with predictions nor distinguish between zones with very large and small amounts of iron oxide. Classified maps of ore and shale were created using a simple level-slice of the 1st-derivative reflectance at 702, 765 and 809 nm. Pixels classified as shale showed a similar distribution to kaolinite (an indicator of shales in the region), as mapped by the depth of the diagnostic kaolinite absorption feature at 2196 nm. Standard statistical measures of classification performance (accuracy, precision, recall and the Kappa coefficient of agreement) indicated that nearly all of the pixels were classified correctly using 1st-derivative reflectance at 765 and 809 nm. These results indicate that data from the VNIR (430-970 nm) can be used to quantify, without a priori knowledge, the total amount of iron minerals and to distinguish ore from shale on vertical mine faces.\n
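A minimal sketch of the derivative-and-integrate idea, assuming a Savitzky-Golay filter for the 2nd and 4th derivatives and trapezoidal integration over 430-970 nm; the placeholder spectrum, filter windows, and the use of absolute derivative values are assumptions:

import numpy as np
from scipy.signal import savgol_filter
from scipy.integrate import trapezoid

# Placeholder spectrum sampled on a regular 2 nm grid over 430-970 nm (assumed).
wl = np.arange(430, 971, 2).astype(float)
refl = 0.3 + 0.1 * np.sin(wl / 90.0)

step = wl[1] - wl[0]
d2 = savgol_filter(refl, window_length=15, polyorder=4, deriv=2, delta=step)  # 2nd derivative
d4 = savgol_filter(refl, window_length=21, polyorder=6, deriv=4, delta=step)  # 4th derivative

# Integrate the (absolute) derivative over the full VNIR range as a relative iron-mineral index.
iron_index_2nd = trapezoid(np.abs(d2), wl)
iron_index_4th = trapezoid(np.abs(d4), wl)
print(iron_index_2nd, iron_index_4th)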
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2012\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Evaluating Classification Techniques for Mapping Vertical Geology Using Field-Based Hyperspectral Sensors.\n \n \n \n \n\n\n \n Murphy, R. J.; Monteiro, S. T.; and Schneider, S.\n\n\n \n\n\n\n IEEE Transactions on Geoscience and Remote Sensing, 50(8): 3066-3080. Aug. 2012.\n \n\n\n\n
\n\n\n\n \n \n \"EvaluatingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@Article{Murphy2012,\n  Title                    = {Evaluating Classification Techniques for Mapping Vertical Geology Using Field-Based Hyperspectral Sensors},\n  Author                   = {Murphy, Richard J. and Monteiro, Sildomar T. and Schneider, Sven},\n  Journal                  = {IEEE Transactions on Geoscience and Remote Sensing},\n  Year                     = {2012},\n\n  Month                    = {Aug.},\n  Number                   = {8},\n  Pages                    = {3066-3080},\n  Volume                   = {50},\n\n  Abstract                 = {Hyperspectral data acquired from field-based platforms present new challenges for their analysis, particularly for complex vertical surfaces exposed to large changes in the geometry and intensity of illumination. The use of hyperspectral data to map rock types on a vertical mine face is demonstrated, with a view to providing real-time information for automated mining applications. The performance of two classification techniques, namely, spectral angle mapper (SAM) and support vector machines (SVMs), is compared rigorously using a spectral library acquired under various conditions of illumination. SAM and SVM are then applied to a mine face, and results are compared with geological boundaries mapped in the field. Effects of changing conditions of illumination, including shadow, were investigated by applying SAM and SVM to imagery acquired at different times of the day. As expected, classification of the spectral libraries showed that, on average, SVM gave superior results for SAM, although SAM performed better where spectra were acquired under conditions of shadow. In contrast, when applied to hypserspectral imagery of a mine face, SVM did not perform as well as SAM. Shadow, through its impact upon spectral curve shape and albedo, had a profound impact on classification using SAM and SVM.},\n  Doi                      = {10.1109/TGRS.2011.2178419},\n  Gsid                     = {aqlVkmm33-oC},\n  ISSN                     = {0196-2892},\n  Keywords                 = {Geology, hyperspectral imaging, minerals, mining industry, spectral analysis, support vector machines},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_TGRS_2011.pdf}\n}\n\n
\n
\n\n\n
\n Hyperspectral data acquired from field-based platforms present new challenges for their analysis, particularly for complex vertical surfaces exposed to large changes in the geometry and intensity of illumination. The use of hyperspectral data to map rock types on a vertical mine face is demonstrated, with a view to providing real-time information for automated mining applications. The performance of two classification techniques, namely, spectral angle mapper (SAM) and support vector machines (SVMs), is compared rigorously using a spectral library acquired under various conditions of illumination. SAM and SVM are then applied to a mine face, and results are compared with geological boundaries mapped in the field. Effects of changing conditions of illumination, including shadow, were investigated by applying SAM and SVM to imagery acquired at different times of the day. As expected, classification of the spectral libraries showed that, on average, SVM gave superior results to SAM, although SAM performed better where spectra were acquired under conditions of shadow. In contrast, when applied to hyperspectral imagery of a mine face, SVM did not perform as well as SAM. Shadow, through its impact upon spectral curve shape and albedo, had a profound impact on classification using SAM and SVM.\n
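For reference, the SAM decision rule is only a few lines: compute the spectral angle between each pixel and each library spectrum and assign the class with the smallest angle (array shapes assumed):

import numpy as np

def spectral_angle_mapper(pixels, library):
    """pixels: (n_pixels, n_bands); library: (n_classes, n_bands) reference spectra.

    Returns (labels, angles): the index of the closest reference spectrum per pixel
    and the corresponding spectral angle in radians.
    """
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = library / np.linalg.norm(library, axis=1, keepdims=True)
    angles = np.arccos(np.clip(p @ r.T, -1.0, 1.0))  # (n_pixels, n_classes)
    return angles.argmin(axis=1), angles.min(axis=1)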
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic rock recognition from drilling performance data.\n \n \n \n \n\n\n \n Zhou, H.; Hatherly, P.; Monteiro, S. T; Ramos, F.; Oppolzer, F.; Nettleton, E.; and Scheding, S.\n\n\n \n\n\n\n In IEEE International Conference on Robotics and Automation (ICRA), pages 3407–3412, Saint Paul, MN, May 2012. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{zhou2012automatic,\n  Title                    = {Automatic rock recognition from drilling performance data},\n  Author                   = {Zhou, Hang and Hatherly, Peter and Monteiro, Sildomar T and Ramos, Fabio and Oppolzer, Florian and Nettleton, Eric and Scheding, Steve},\n  Booktitle                = {IEEE International Conference on Robotics and Automation (ICRA)},\n  Year                     = {2012},\n\n  Address                  = {Saint Paul, MN},\n  Month                    = {May},\n  Organization             = {IEEE},\n  Pages                    = {3407--3412},\n\n  Abstract                 = {Automated rock recognition is a key step for building a fully autonomous mine. When characterizing rock types from drill performance data, the main challenge is that there is not an obvious one-to-one correspondence between the two. In this paper, a hybrid rock recognition approach is proposed which combines Gaussian Process (GP) regression with clustering. Drill performance data is also known as Measurement While Drilling (MWD) data and a rock hardness measure - Adjusted Penetration Rate (APR) is extracted using the raw data in discrete drill holes. GP regression is then applied to create a more dense APR distribution, followed by clustering which produces discrete class labels. No initial labelling is needed. Comparisons are made with alternative measures of rock hardness from MWD data as well as state-of-the-art GP classification. Experimental results from an actual mine site show the effectiveness of our proposed approach.},\n  Doi                      = {10.1109/ICRA.2012.6224745},\n  Gsid                     = {Wp0gIr-vW9MC},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Zhou_ICRA_2012.pdf}\n}\n\n
\n
\n\n\n
\n Automated rock recognition is a key step for building a fully autonomous mine. When characterizing rock types from drill performance data, the main challenge is that there is not an obvious one-to-one correspondence between the two. In this paper, a hybrid rock recognition approach is proposed which combines Gaussian Process (GP) regression with clustering. Drill performance data is also known as Measurement While Drilling (MWD) data, and a rock hardness measure, the Adjusted Penetration Rate (APR), is extracted from the raw data in discrete drill holes. GP regression is then applied to create a denser APR distribution, followed by clustering, which produces discrete class labels. No initial labelling is needed. Comparisons are made with alternative measures of rock hardness from MWD data as well as with state-of-the-art GP classification. Experimental results from an actual mine site show the effectiveness of our proposed approach.\n
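A rough sketch of the two-stage idea described above, assuming a single drill hole: densify a hardness measure along depth with GP regression, then cluster the densified profile into discrete rock classes. The APR values, kernel, and number of clusters are placeholders:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Assumed inputs: sparse depth/APR samples from one drill hole (two rock hardnesses, synthetic).
depth = np.linspace(0, 30, 40).reshape(-1, 1)
apr = np.where(depth.ravel() < 15, 1.0, 2.5) + np.random.normal(0, 0.1, 40)

gp = GaussianProcessRegressor(kernel=RBF(2.0) + WhiteKernel(0.05), normalize_y=True)
gp.fit(depth, apr)

dense_depth = np.linspace(0, 30, 600).reshape(-1, 1)
dense_apr = gp.predict(dense_depth)                  # densified hardness profile

labels = KMeans(n_clusters=2, n_init=10).fit_predict(dense_apr.reshape(-1, 1))  # discrete rock classes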
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2011\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Embedded feature selection of hyperspectral bands with boosted decision trees.\n \n \n \n \n\n\n \n Monteiro, S. T.; and Murphy, R. J.\n\n\n \n\n\n\n In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 2361–2364, Vancouver, BC, Jul. 2011. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"EmbeddedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@InProceedings{monteiro2011embedded,\n  Title                    = {Embedded feature selection of hyperspectral bands with boosted decision trees},\n  Author                   = {Monteiro, Sildomar T. and Murphy, Richard J.},\n  Booktitle                = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},\n  Year                     = {2011},\n\n  Address                  = {Vancouver, BC},\n  Month                    = {Jul.},\n  Organization             = {IEEE},\n  Pages                    = {2361--2364},\n\n  Abstract                 = {Feature selection is an important step in hyperspectral analysis using machine learning for many applications, in particular to avoid the curse of dimensionality when there is limited available ground truth. This paper presents an approach to select hyperspectral bands using boosting. Boosting decision trees is an efficient and accurate classification technique that has been applied successfully to process hyperspectral data. The learned structure of the trees can provide insight about which bands are more relevant for the classification. We develop a method that takes into account the improvement obtained by each split of the tree ensemble and calculates a relative importance measure of the input features. The method was evaluated using hyperspectral data of rock samples from an iron ore mine in Australia. We show that by retaining only the most relevant features it is possible to reduce the computational load while retaining classification performance.},\n  Doi                      = {10.1109/IGARSS.2011.6049684},\n  Gsid                     = {ULOm3_A8WrAC},\n  Keywords                 = {Boosting, decision trees, feature selection, hyperspectral data},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_IGARSS_2011.pdf}\n}\n\n
\n
\n\n\n
\n Feature selection is an important step in hyperspectral analysis using machine learning for many applications, in particular to avoid the curse of dimensionality when there is limited available ground truth. This paper presents an approach to select hyperspectral bands using boosting. Boosting decision trees is an efficient and accurate classification technique that has been applied successfully to process hyperspectral data. The learned structure of the trees can provide insight about which bands are more relevant for the classification. We develop a method that takes into account the improvement obtained by each split of the tree ensemble and calculates a relative importance measure of the input features. The method was evaluated using hyperspectral data of rock samples from an iron ore mine in Australia. We show that by retaining only the most relevant features it is possible to reduce the computational load while retaining classification performance.\n
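The band-ranking idea can be sketched with scikit-learn's gradient boosting, whose `feature_importances_` attribute aggregates the improvement contributed at each split much as the abstract describes; the spectra below are synthetic stand-ins for real hyperspectral bands.

```python
# Illustrative sketch: rank bands by their aggregated split improvement in a
# boosted tree ensemble, then keep only the top-ranked bands.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_bands = 400, 120
X = rng.normal(size=(n_samples, n_bands))
# Make a handful of bands actually informative about the (synthetic) class.
informative = [10, 35, 60, 90]
y = (X[:, informative].sum(axis=1) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbt = GradientBoostingClassifier(n_estimators=200, max_depth=2, random_state=0)
gbt.fit(X_tr, y_tr)

# feature_importances_ is the embedded relevance measure: the improvement
# obtained at every split, summed per input feature.
ranking = np.argsort(gbt.feature_importances_)[::-1]
top_bands = np.sort(ranking[:10])
print("selected bands:", top_bands)

# Retrain on the reduced band set and compare accuracy.
gbt_small = GradientBoostingClassifier(n_estimators=200, max_depth=2, random_state=0)
gbt_small.fit(X_tr[:, top_bands], y_tr)
print("all bands :", gbt.score(X_te, y_te))
print("top bands :", gbt_small.score(X_te[:, top_bands], y_te))
```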
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Learning 3D geological structure from drill-rig sensors for automated mining.\n \n \n \n \n\n\n \n Monteiro, S. T.; Van De Ven, J.; Ramos, F.; and Hatherly, P.\n\n\n \n\n\n\n In 22nd International Joint Conference on Artificial Intelligence (IJCAI), volume 3, pages 2500–2506, Barcelona, Spain, Jul. 2011. AAAI Press\n [Video]\n\n\n\n
\n\n\n\n \n \n \"LearningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{monteiro2011learning,\n  Title                    = {Learning 3D geological structure from drill-rig sensors for automated mining},\n  Author                   = {Monteiro, Sildomar T. and Van De Ven, Joop and Ramos, Fabio and Hatherly, Peter},\n  Booktitle                = {22nd International Joint Conference on Artificial Intelligence (IJCAI)},\n  Year                     = {2011},\n\n  Address                  = {Barcelona, Spain},\n  Month                    = {Jul.},\n  Note                     = {<a href="http://ijcai-11.iiia.csic.es/video/29" target="_blank">[Video]</a>},\n  Organization             = {AAAI Press},\n  Pages                    = {2500--2506},\n  Volume                   = {3},\n\n  Abstract                 = {This paper addresses one of the key components of the mining process: the geological prediction of natural resources from spatially distributed measurements. We present a novel approach combining undirected graphical models with ensemble classifiers to provide 3D geological models from multiple sensors installed in an autonomous drill rig. Drill sensor measurements used for drilling automation, known as measurement-while-drilling (MWD) data, have the potential to provide an estimate of the geological properties of the rocks being drilled. The proposed method maps MWD parameters to rock types while considering spatial relationships, i.e., associating measurements obtained from neighboring regions. We use a conditional random field with local information provided by boosted decision trees to jointly reason about the rock categories of neighboring measurements. To validate the approach, MWD data was collected from a drill rig operating at an iron ore mine. Graphical models of the 3D structure present in real data sets possess a high number of nodes, edges and cycles, making them intractable for exact inference. We provide a comparison of three approximate inference methods to calculate the most probable distribution of class labels. The empirical results demonstrate the benefits of spatial modeling through graphical models to improve classification performance.},\n  Comment                  = {Acceptance rate: 17%},\n  Doi                      = {10.5591/978-1-57735-516-8/IJCAI11-416},\n  Gsid                     = {0EnyYjriUFMC},\n  Url                      = {http://ijcai.org/papers11/Papers/IJCAI11-416.pdf}\n}\n\n
\n
\n\n\n
\n This paper addresses one of the key components of the mining process: the geological prediction of natural resources from spatially distributed measurements. We present a novel approach combining undirected graphical models with ensemble classifiers to provide 3D geological models from multiple sensors installed in an autonomous drill rig. Drill sensor measurements used for drilling automation, known as measurement-while-drilling (MWD) data, have the potential to provide an estimate of the geological properties of the rocks being drilled. The proposed method maps MWD parameters to rock types while considering spatial relationships, i.e., associating measurements obtained from neighboring regions. We use a conditional random field with local information provided by boosted decision trees to jointly reason about the rock categories of neighboring measurements. To validate the approach, MWD data was collected from a drill rig operating at an iron ore mine. Graphical models of the 3D structure present in real data sets possess a high number of nodes, edges and cycles, making them intractable for exact inference. We provide a comparison of three approximate inference methods to calculate the most probable distribution of class labels. The empirical results demonstrate the benefits of spatial modeling through graphical models to improve classification performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Feature selection with PSO and kernel methods for hyperspectral classification.\n \n \n \n \n\n\n \n Tjiong, A. S.; and Monteiro, S. T\n\n\n \n\n\n\n In IEEE Congress on Evolutionary Computation (CEC), pages 1762–1769, New Orleans, LA, Jun. 2011. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"FeaturePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{tjiong2011feature,\n  Title                    = {Feature selection with PSO and kernel methods for hyperspectral classification},\n  Author                   = {Tjiong, Anthony SJ and Monteiro, Sildomar T},\n  Booktitle                = {IEEE Congress on Evolutionary Computation (CEC)},\n  Year                     = {2011},\n\n  Address                  = {New Orleans, LA},\n  Month                    = {Jun.},\n  Organization             = {IEEE},\n  Pages                    = {1762--1769},\n\n  Abstract                 = {Hyperspectral image data has great potential to identify and classify the chemical composition of materials remotely. Factors limiting the use of hyperspectral sensors in practical land-based applications, such as robotics and mining, are the complexity and cost of data acquisition, and the processing time required for the subsequent analysis. This is mainly due to the high dimensional and high volume nature of hyperspectral image data. In this paper, we propose to combine a feature selection method, based on particle swarm optimization (PSO), with a kernel method, support vector machines (SVM), to reduce the dimensionality of hyperspectral data for classification. We evaluate several different kernels, including some optimized for hyperspectral analysis. In particular, a recent kernel called observation angle dependent (OAD) kernel, originally designed for Gaussian Process regression, was extended for SVM classification. The SVM with the optimized kernel was then applied to induce the feature selection of a binary version of PSO. We validate the method using hyperspectral data sets acquired of rock samples from Western Australia. The empirical results demonstrate that our method is able to efficiently reduce the number of features while keeping, or even improving, the performance of the SVM classifier.},\n  Doi                      = {10.1109/CEC.2011.5949828},\n  Gsid                     = {5nxA0vEk-isC},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Tjiong_CEC_2011.pdf}\n}\n\n
\n
\n\n\n
\n Hyperspectral image data has great potential to identify and classify the chemical composition of materials remotely. Factors limiting the use of hyperspectral sensors in practical land-based applications, such as robotics and mining, are the complexity and cost of data acquisition, and the processing time required for the subsequent analysis. This is mainly due to the high dimensional and high volume nature of hyperspectral image data. In this paper, we propose to combine a feature selection method, based on particle swarm optimization (PSO), with a kernel method, support vector machines (SVM), to reduce the dimensionality of hyperspectral data for classification. We evaluate several different kernels, including some optimized for hyperspectral analysis. In particular, a recent kernel called observation angle dependent (OAD) kernel, originally designed for Gaussian Process regression, was extended for SVM classification. The SVM with the optimized kernel was then applied to induce the feature selection of a binary version of PSO. We validate the method using hyperspectral data sets acquired of rock samples from Western Australia. The empirical results demonstrate that our method is able to efficiently reduce the number of features while keeping, or even improving, the performance of the SVM classifier.\n
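A minimal sketch of the wrapper idea, assuming scikit-learn, synthetic spectra, and a plain RBF kernel in place of the OAD kernel used in the paper: a sigmoid-transfer binary PSO searches band masks and scores each mask with a cross-validated SVM.

```python
# Illustrative sketch, not the paper's implementation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_bands = 200, 60
X = rng.normal(size=(n_samples, n_bands))
y = (X[:, 5] + X[:, 20] - X[:, 40] > 0).astype(int)   # only three bands matter

def fitness(mask):
    """Cross-validated SVM accuracy, lightly penalised by subset size."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(SVC(kernel="rbf", C=1.0),
                          X[:, mask.astype(bool)], y, cv=3).mean()
    return acc - 0.002 * mask.sum()

n_particles, n_iter = 20, 30
pos = (rng.random((n_particles, n_bands)) < 0.2).astype(float)   # band masks
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    prob = 1.0 / (1.0 + np.exp(-vel))          # sigmoid transfer (binary PSO)
    pos = (rng.random(pos.shape) < prob).astype(float)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("selected bands:", np.flatnonzero(gbest))
print("fitness:", fitness(gbest))
```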
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Hybrid GP Regression and Clustering Approach for Characterizing Rock Properties from Drilling Data.\n \n \n \n \n\n\n \n Zhou, H; Hatherly, P; Monteiro, S. T.; Ramos, F; Oppolzer, F; and Nettleton, E\n\n\n \n\n\n\n Technical Report Tech. Report ACFR-TR-2011-001, 2011.\n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@TechReport{Zhou2011,\n  Title                    = {A Hybrid GP Regression and Clustering Approach for Characterizing Rock Properties from Drilling Data},\n  Author                   = {Zhou, H and Hatherly, P and Monteiro, Sildomar T. and Ramos, F and Oppolzer, F and Nettleton, E},\n  Institution              = {Tech. Report ACFR-TR-2011-001},\n  Year                     = {2011},\n\n  Url                      = {http://www.acfr.usyd.edu.au/techreports/2011-001.shtml}\n}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2010\n \n \n (5)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Rock Recognition From MWD Data: A Comparative Study of Boosting, Neural Networks, and Fuzzy Logic.\n \n \n \n \n\n\n \n Kadkhodaie-Ilkhchi, A.; Monteiro, S. T.; Ramos, F.; and Hatherly, P.\n\n\n \n\n\n\n IEEE Geoscience and Remote Sensing Letters, 7(4): 680-684. Oct. 2010.\n \n\n\n\n
\n\n\n\n \n \n \"RockPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@Article{kadkhodaie2010rock,\n  Title                    = {Rock Recognition From MWD Data: A Comparative Study of Boosting, Neural Networks, and Fuzzy Logic},\n  Author                   = {Kadkhodaie-Ilkhchi, A. and Monteiro, Sildomar T. and Ramos, F. and Hatherly, P.},\n  Journal                  = {IEEE Geoscience and Remote Sensing Letters},\n  Year                     = {2010},\n\n  Month                    = {Oct.},\n  Number                   = {4},\n  Pages                    = {680-684},\n  Volume                   = {7},\n\n  Abstract                 = {Measurement-while-drilling (MWD) data recorded from drill rigs can provide a valuable estimation of the type and strength of the rocks being drilled. Typical MWD sensors include bit pressure, rotation pressure, pull-down pressure, pull-down rate, and head speed. This letter presents an empirical comparison of the statistical performance, ease of implementation, and computational efficiency associated with three machine-learning techniques. A recently proposed method, boosting, is compared with two well-established methods, neural networks and fuzzy logic, used as benchmarks. MWD data were acquired from blast holes at an iron ore mine in Western Australia. The boreholes intersected a number of rock types including shale, iron ore, and banded iron formation. Boosting and neural networks presented the best performance overall. However, from the viewpoint of implementation simplicity and computational load, boosting outperformed the other two methods.},\n  Doi                      = {10.1109/LGRS.2010.2046312},\n  Gsid                     = {3fE2CSJIrl8C},\n  ISSN                     = {1545-598X},\n  Keywords                 = {Pattern recognition, Boosting, Neural networks, Fuzzy logic, Measurement-while-drilling, Geological modeling},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_GRSL_2010.pdf}\n}\n\n
\n
\n\n\n
\n Measurement-while-drilling (MWD) data recorded from drill rigs can provide a valuable estimation of the type and strength of the rocks being drilled. Typical MWD sensors include bit pressure, rotation pressure, pull-down pressure, pull-down rate, and head speed. This letter presents an empirical comparison of the statistical performance, ease of implementation, and computational efficiency associated with three machine-learning techniques. A recently proposed method, boosting, is compared with two well-established methods, neural networks and fuzzy logic, used as benchmarks. MWD data were acquired from blast holes at an iron ore mine in Western Australia. The boreholes intersected a number of rock types including shale, iron ore, and banded iron formation. Boosting and neural networks presented the best performance overall. However, from the viewpoint of implementation simplicity and computational load, boosting outperformed the other two methods.\n
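A toy version of the comparison can be run with scikit-learn: gradient boosting and an MLP stand in for the boosting and neural-network methods, the fuzzy-logic benchmark is omitted because it has no standard scikit-learn counterpart, and the MWD channels are simulated rather than taken from blast-hole logs.

```python
# Illustrative comparison harness only; the data and labels are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Columns mimic typical MWD channels: bit pressure, rotation pressure,
# pull-down pressure, pull-down rate, head speed (synthetic values).
X = rng.normal(size=(600, 5))
# Three toy classes derived from the channels (stand-ins for shale,
# iron ore and banded iron formation).
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int) + (X[:, 4] > 0.8).astype(int)

models = {
    "boosting": GradientBoostingClassifier(random_state=0),
    "neural net": make_pipeline(StandardScaler(),
                                MLPClassifier(hidden_layer_sizes=(32,),
                                              max_iter=1000, random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:10s} accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```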
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Automatic Hyperspectral Data Analysis: A machine learning approach to high dimensional feature extraction.\n \n \n \n\n\n \n Monteiro, S. T.\n\n\n \n\n\n\n Technical Report Saarbrucken, Germany: VDM Verlag, 2010.\n [Buy from Amazon]\n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@TechReport{monteiro2010automatic,\n  Title                    = {Automatic Hyperspectral Data Analysis: A machine learning approach to high dimensional feature extraction},\n  Author                   = {Monteiro, Sildomar T.},\n  Institution              = {Saarbrucken, Germany: VDM Verlag},\n  Year                     = {2010},\n  Note                     = {<a href="http://www.amazon.com/Automatic-Hyperspectral-Data-Analysis-dimensional/dp/363925516X/" target="_blank">[Buy from Amazon]</a>},\n\n  Comment                  = {ISBN: 978-3639255164},\n  Publisher                = {VDM Verlag Dr. M{\\"u}ller}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Calibrating probabilities for hyperspectral classification of rock types.\n \n \n \n \n\n\n \n Monteiro, S. T; and Murphy, R. J\n\n\n \n\n\n\n In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 2800–2803, Honolulu, HI, Jul. 2010. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"CalibratingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{monteiro2010calibrating,\n  Title                    = {Calibrating probabilities for hyperspectral classification of rock types},\n  Author                   = {Monteiro, Sildomar T and Murphy, Richard J},\n  Booktitle                = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},\n  Year                     = {2010},\n\n  Address                  = {Honolulu, HI},\n  Month                    = {Jul.},\n  Organization             = {IEEE},\n  Pages                    = {2800--2803},\n\n  Abstract                 = {This paper investigates the performance of machine learning methods for classifying rock types from hyperspectral data. The main objective is to test the impact on classification error rate of calibrating the model's output into class probability estimates. The base classifiers included in this study are: boosted decision trees, support vector machines and logistic regression. The standard algorithm for some of these methods provides a non-probabilistic, hard decision as output. For those methods, posterior class probability estimates were approximated by fitting a sigmoid function to the classifier predictions. To perform multi-class classification, a one-versus-all approach was used. The different methods were compared using hyperspectral data acquired from ore-bearing rocks under different environmental conditions. The calibration of class probabilities improved the overall performance for almost all algorithms tested; an improvement of over 10% was observed in some cases.},\n  Doi                      = {10.1109/IGARSS.2010.5649482},\n  Gsid                     = {KlAtU1dfN6UC},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_IGARSS_2010.pdf}\n}\n
\n
\n\n\n
\n This paper investigates the performance of machine learning methods for classifying rock types from hyperspectral data. The main objective is to test the impact on classification error rate of calibrating the model's output into class probability estimates. The base classifiers included in this study are: boosted decision trees, support vector machines and logistic regression. The standard algorithm for some of these methods provides a non-probabilistic, hard decision as output. For those methods, posterior class probability estimates were approximated by fitting a sigmoid function to the classifier predictions. To perform multi-class classification, a one-versus-all approach was used. The different methods were compared using hyperspectral data acquired from ore-bearing rocks under different environmental conditions. The calibration of class probabilities improved the overall performance for almost all algorithms tested; an improvement of over 10% was observed in some cases.\n
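The calibration step maps naturally onto scikit-learn's `CalibratedClassifierCV` with `method="sigmoid"` (Platt-style sigmoid fitting) around a one-vs-rest SVM; the sketch below uses synthetic data rather than the rock spectra from the paper.

```python
# Illustrative sketch of sigmoid calibration with a one-vs-rest SVM.
from sklearn.calibration import CalibratedClassifierCV
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import log_loss, accuracy_score

X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# method="sigmoid" fits a logistic function to each one-vs-rest SVM's decision
# values, turning hard margins into posterior class probability estimates.
base = OneVsRestClassifier(LinearSVC())
calibrated = CalibratedClassifierCV(base, method="sigmoid", cv=3)
calibrated.fit(X_tr, y_tr)

proba = calibrated.predict_proba(X_te)
print("accuracy:", accuracy_score(y_te, calibrated.predict(X_te)))
print("log loss:", log_loss(y_te, proba))
```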
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n 3D geological modelling using laser and hyperspectral data.\n \n \n \n \n\n\n \n Nieto, J. I.; Monteiro, S. T.; and Viejo, D.\n\n\n \n\n\n\n In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 4568–4571, Honolulu, HI, Jul. 2010. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"3DPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{nieto20103d,\n  Title                    = {3D geological modelling using laser and hyperspectral data},\n  Author                   = {Nieto, Juan I. and Monteiro, Sildomar T. and Viejo, Diego},\n  Booktitle                = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},\n  Year                     = {2010},\n\n  Address                  = {Honolulu, HI},\n  Month                    = {Jul.},\n  Organization             = {IEEE},\n  Pages                    = {4568--4571},\n\n  Abstract                 = {This paper presents a ground based system for mapping the geology and the geometry of the environment remotely. The main objective of this work is to develop a framework for a mobile robotic platform that can build 3D geological maps. We investigate classification and registration algorithms that can work without any manual intervention. The system capabilities are demonstrated with data acquired from a working mine environment. Geological maps are built by applying classification techniques to hyperspectral images of the rocks' surface. The result from the classification is then fused with laser images to form the 3D geological models of the environment.},\n  Doi                      = {10.1109/IGARSS.2010.5651553},\n  Gsid                     = {_kc_bZDykSQC},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Nieto_IGARSS_2010.pdf}\n}\n\n
\n
\n\n\n
\n This paper presents a ground based system for mapping the geology and the geometry of the environment remotely. The main objective of this work is to develop a framework for a mobile robotic platform that can build 3D geological maps. We investigate classification and registration algorithms that can work without any manual intervention. The system capabilities are demonstrated with data acquired from a working mine environment. Geological maps are built by applying classification techniques to hyperspectral images of the rocks' surface. The result from the classification is then fused with laser images to form the 3D geological models of the environment.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automated rock recognition with wavelet feature space projection and Gaussian Process classification.\n \n \n \n \n\n\n \n Zhou, H.; Monteiro, S. T.; Hatherly, P.; Ramos, F.; Nettleton, E.; and Oppolzer, F.\n\n\n \n\n\n\n In IEEE International Conference on Robotics and Automation (ICRA), pages 4444–4450, Anchorage, AK, May 2010. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"AutomatedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{zhou2010automated,\n  Title                    = {Automated rock recognition with wavelet feature space projection and Gaussian Process classification},\n  Author                   = {Zhou, Hang and Monteiro, Sildomar T. and Hatherly, Peter and Ramos, Fabio and Nettleton, Eric and Oppolzer, Florian},\n  Booktitle                = {IEEE International Conference on Robotics and Automation (ICRA)},\n  Year                     = {2010},\n\n  Address                  = {Anchorage, AK},\n  Month                    = {May},\n  Organization             = {IEEE},\n  Pages                    = {4444--4450},\n\n  Abstract                 = {A crucial component of an autonomous mine is the ability to infer rock types from mechanical measurements of a drill rig. The major difficulty lies in that there is not a clear one to one correspondence between the mechanical measurements and the rock type due to the mechanical noise as well as the variety of the rock geology. This paper proposes a novel wavelet feature space projection approach to robustly classify rock types from drilling data with Gaussian Process classification. Instead of applying Gaussian Process classifier directly to the given measurement pieces, a group of wavelet features are extracted from the neighboring region of a specific data point. Gaussian Process classification is then carried out on the new extracted wavelet features. By putting neighboring data points into consideration rather than dealing with each data point individually, the underlying pattern can be better captured and more robust to noise and data variations. Experimental results on synthetic data as well as varied real world drilling data have shown the effectiveness of our approach.},\n  Doi                      = {10.1109/ROBOT.2010.5509605},\n  Gsid                     = {Se3iqnhoufwC},\n  Url                      = {http://db.acfr.usyd.edu.au/content.php/237.html?publicationid=664}\n}\n\n
\n
\n\n\n
\n A crucial component of an autonomous mine is the ability to infer rock types from mechanical measurements of a drill rig. The major difficulty lies in that there is not a clear one to one correspondence between the mechanical measurements and the rock type due to the mechanical noise as well as the variety of the rock geology. This paper proposes a novel wavelet feature space projection approach to robustly classify rock types from drilling data with Gaussian Process classification. Instead of applying Gaussian Process classifier directly to the given measurement pieces, a group of wavelet features are extracted from the neighboring region of a specific data point. Gaussian Process classification is then carried out on the new extracted wavelet features. By putting neighboring data points into consideration rather than dealing with each data point individually, the underlying pattern can be better captured and more robust to noise and data variations. Experimental results on synthetic data as well as varied real world drilling data have shown the effectiveness of our approach.\n
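A rough sketch of the windowed-wavelet idea, assuming PyWavelets, scikit-learn, and a synthetic drilling signal: each sample is described by the wavelet energies of its neighbourhood and those features are classified with a Gaussian Process.

```python
# Illustrative sketch only; wavelet choice, window size and data are assumptions.
import numpy as np
import pywt
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
signal = np.where(np.arange(n) < n // 2, 1.0, 3.0) + 0.4 * rng.standard_normal(n)
labels = (np.arange(n) >= n // 2).astype(int)          # toy "rock type" per sample

def window_features(sig, centre, half_width=16, wavelet="db2", level=2):
    """Energy of each wavelet sub-band over a window centred on one sample."""
    lo, hi = max(0, centre - half_width), min(len(sig), centre + half_width)
    coeffs = pywt.wavedec(sig[lo:hi], wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

idx = np.arange(32, n - 32, 4)                          # skip the borders
X = np.vstack([window_features(signal, i) for i in idx])
y = labels[idx]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gpc = GaussianProcessClassifier(random_state=0)
gpc.fit(X_tr, y_tr)
print("test accuracy:", gpc.score(X_te, y_te))
```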
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2009\n \n \n (6)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Applying boosting for hyperspectral classification of ore-bearing rocks.\n \n \n \n \n\n\n \n Monteiro, S. T.; Murphy, R. J.; Ramos, F.; and Nieto, J.\n\n\n \n\n\n\n In IEEE International Workshop on Machine Learning for Signal Processing (MLSP), pages 1–6, Grenoble, France, Sep. 2009. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"ApplyingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{monteiro2009applying,\n  Title                    = {Applying boosting for hyperspectral classification of ore-bearing rocks},\n  Author                   = {Monteiro, Sildomar T. and Murphy, Richard J. and Ramos, Fabio and Nieto, Juan},\n  Booktitle                = {IEEE International Workshop on Machine Learning for Signal Processing (MLSP)},\n  Year                     = {2009},\n\n  Address                  = {Grenoble, France},\n  Month                    = {Sep.},\n  Organization             = {IEEE},\n  Pages                    = {1--6},\n\n  Abstract                 = {Hyperspectral sensors provide a powerful tool for nondestructive analysis of rocks. While classification of spectrally distinct materials can be performed by traditional methods, identification of different rock types or grades composed of similar materials remains a challenge because spectra are in many cases similar. In this paper, we investigate the application of boosting algorithms to classify hyperspectral data of ore rock samples into multiple discrete categories. Two variants of boosting, GentleBoost and LogitBoost, were implemented and compared with support vector machines as benchmark. Two pre-processing transformations that may improve classification accuracy were investigated: derivative analysis and smoothing, both calculated by the Savitzky-Golay method. To assess the performance of the algorithms over noisy data, white Gaussian noise was added at various levels to the data set. We present experimental results using hyperspectral data collected from rock samples from an iron ore mine.},\n  Doi                      = {10.1109/MLSP.2009.5306219},\n  Gsid                     = {hqOjcs7Dif8C},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_MLSP_2009.pdf}\n}\n\n
\n
\n\n\n
\n Hyperspectral sensors provide a powerful tool for nondestructive analysis of rocks. While classification of spectrally distinct materials can be performed by traditional methods, identification of different rock types or grades composed of similar materials remains a challenge because spectra are in many cases similar. In this paper, we investigate the application of boosting algorithms to classify hyperspectral data of ore rock samples into multiple discrete categories. Two variants of boosting, GentleBoost and LogitBoost, were implemented and compared with support vector machines as benchmark. Two pre-processing transformations that may improve classification accuracy were investigated: derivative analysis and smoothing, both calculated by the Savitzky-Golay method. To assess the performance of the algorithms over noisy data, white Gaussian noise was added at various levels to the data set. We present experimental results using hyperspectral data collected from rock samples from an iron ore mine.\n
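The Savitzky-Golay preprocessing step is straightforward to reproduce with SciPy; the sketch below uses scikit-learn's gradient boosting as a stand-in for the GentleBoost/LogitBoost variants and synthetic spectra in place of the iron-ore samples.

```python
# Illustrative sketch of the preprocessing-plus-classifier comparison.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_bands = 300, 200
wavelengths = np.linspace(400, 2500, n_bands)
y = rng.integers(0, 2, size=n_samples)
# Two synthetic "ore grades" with slightly shifted absorption features plus noise.
centres = np.where(y == 1, 1400.0, 1430.0)[:, None]
X = (np.exp(-((wavelengths - centres) / 60.0) ** 2)
     + 0.05 * rng.standard_normal((n_samples, n_bands)))

transforms = {
    "raw reflectance": X,
    "smoothed": savgol_filter(X, window_length=11, polyorder=2, axis=1),
    "2nd derivative": savgol_filter(X, window_length=11, polyorder=2, deriv=2, axis=1),
}
for name, Xt in transforms.items():
    for clf_name, clf in [("boosting", GradientBoostingClassifier(random_state=0)),
                          ("SVM", SVC(kernel="rbf"))]:
        acc = cross_val_score(clf, Xt, y, cv=3).mean()
        print(f"{name:16s} {clf_name:9s} accuracy: {acc:.3f}")
```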
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Conditional random fields for rock characterization using drill measurements.\n \n \n \n \n\n\n \n Monteiro, S. T; Ramos, F.; and Hatherly, P.\n\n\n \n\n\n\n In International Conference on Machine Learning and Applications (ICMLA), pages 366–371, Miami Beach, FL, Dec. 2009. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"ConditionalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{monteiro2009conditional,\n  Title                    = {Conditional random fields for rock characterization using drill measurements},\n  Author                   = {Monteiro, Sildomar T and Ramos, Fabio and Hatherly, Peter},\n  Booktitle                = {International Conference on Machine Learning and Applications (ICMLA)},\n  Year                     = {2009},\n\n  Address                  = {Miami Beach, FL},\n  Month                    = {Dec.},\n  Organization             = {IEEE},\n  Pages                    = {366--371},\n\n  Abstract                 = {Analysis of drill performance data provides a powerful method for estimating subsurface geology. While there have been studies relating such measurement-while-drilling (MWD) parameters to rock properties, none of them has attempted to model context, that is, to associate local measurements with measurements obtained in neighbouring regions. This paper proposes a novel approach to infer geology from drill measurements by incorporating spatial relationships through a Conditional Random Field (CRF) framework. A boosting algorithm is used as a local classifier mapping drill measurements to corresponding geological categories. The CRF then uses this local information in conjunction with neighbouring measurements to jointly reason about their categories. Model parameters are learned from training data by maximizing the pseudo-likelihood. The probability distribution of classified borehole sections is calculated using belief propagation. We present experimental results of applying the method to MWD data collected from a semi-autonomous drill rig at an iron ore mine in Western Australia.},\n  Doi                      = {10.1109/ICMLA.2009.80},\n  Gsid                     = {UeHWp8X0CEIC},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_ICMLA_2009.pdf}\n}\n\n
\n
\n\n\n
\n Analysis of drill performance data provides a powerful method for estimating subsurface geology. While there have been studies relating such measurement-while-drilling (MWD) parameters to rock properties, none of them has attempted to model context, that is, to associate local measurements with measurements obtained in neighbouring regions. This paper proposes a novel approach to infer geology from drill measurements by incorporating spatial relationships through a Conditional Random Field (CRF) framework. A boosting algorithm is used as a local classifier mapping drill measurements to corresponding geological categories. The CRF then uses this local information in conjunction with neighbouring measurements to jointly reason about their categories. Model parameters are learned from training data by maximizing the pseudo-likelihood. The probability distribution of classified borehole sections is calculated using belief propagation. We present experimental results of applying the method to MWD data collected from a semi-autonomous drill rig at an iron ore mine in Western Australia.\n
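As a toy analogue of the unary-plus-pairwise idea: boosted trees provide per-section class probabilities, a fixed Potts penalty couples neighbouring sections, and the 1-D chain of borehole sections is decoded exactly by dynamic programming. The paper instead learns CRF parameters by maximizing the pseudo-likelihood and infers marginals with belief propagation, so this is a simplified illustration, not the authors' method.

```python
# Illustrative sketch: boosted-tree unaries plus a fixed smoothness term.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n_train, n_sections, n_classes = 500, 120, 3

# Synthetic MWD feature vectors with class-dependent means.
def make_features(labels):
    means = np.array([[0.0, 0.0], [1.5, 0.5], [0.5, 2.0]])
    return means[labels] + 0.8 * rng.standard_normal((len(labels), 2))

y_train = rng.integers(0, n_classes, size=n_train)
X_train = make_features(y_train)

# A borehole whose true geology changes in blocks of depth.
y_hole = np.repeat([0, 1, 2, 1], n_sections // 4)
X_hole = make_features(y_hole)

clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
unary = np.log(clf.predict_proba(X_hole) + 1e-9)        # log p(class | features)

smooth = 1.0                                            # fixed Potts weight
pairwise = smooth * (1 - np.eye(n_classes))             # penalty for label changes

# Viterbi-style decoding of the most probable label sequence along the hole.
score = unary[0].copy()
back = np.zeros((len(y_hole), n_classes), dtype=int)
for t in range(1, len(y_hole)):
    cand = score[:, None] - pairwise                    # score of each transition
    back[t] = cand.argmax(axis=0)
    score = cand.max(axis=0) + unary[t]
labels = np.zeros(len(y_hole), dtype=int)
labels[-1] = score.argmax()
for t in range(len(y_hole) - 2, -1, -1):
    labels[t] = back[t + 1, labels[t + 1]]

print("accuracy, boosted trees alone :", (clf.predict(X_hole) == y_hole).mean())
print("accuracy, with chain smoothing:", (labels == y_hole).mean())
```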
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Learning CRF Models from Drill Rig Sensors for Autonomous Mining.\n \n \n \n \n\n\n \n Monteiro, S. T.; Ramos, F.; and Hatherly, P.\n\n\n \n\n\n\n NIPS Workshop: Learning from Multiple Sources with Applications to Robotics, Dec. 2009.\n [Video]\n\n\n\n
\n\n\n\n \n \n \"LearningPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@Misc{monteiro2009learning,\n  Title                    = {Learning CRF Models from Drill Rig Sensors for Autonomous Mining},\n\n  Author                   = {Monteiro, Sildomar T. and Ramos, Fabio and Hatherly, Peter},\n  HowPublished             = {NIPS Workshop: Learning from Multiple Sources with Applications to Robotics},\n  Month                    = {Dec.},\n  Note                     = {<a href="http://videolectures.net/nipsworkshops09_monteiro_lmdr/" target="_blank">[Video]</a>},\n  Year                     = {2009},\n\n  Address                  = {Whistler, Canada},\n  Url                      = {http://www.dcs.gla.ac.uk/~srogers/lms09/abstracts_final/talks/lmsar_monteiro.pdf}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Iron Ore Rock Recognition Trials.\n \n \n \n \n\n\n \n Ramos, F.; Hatherly, P.; and Monteiro, S. T.\n\n\n \n\n\n\n Technical Report Tech. Report ACFR-TR-2009-001, 2009.\n \n\n\n\n
\n\n\n\n \n \n \"IronPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@TechReport{Ramos2009,\n  Title                    = {Iron Ore Rock Recognition Trials},\n  Author                   = {Ramos, Fabio and Hatherly, Peter and Monteiro, Sildomar T.},\n  Institution              = {Tech. Report ACFR-TR-2009-001},\n  Year                     = {2009},\n\n  Url                      = {http://db.acfr.usyd.edu.au/content.php/237.html?publicationid=516}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the development of a hyperspectral library for autonomous mining.\n \n \n \n \n\n\n \n Schneider, S.; Murphy, R. J.; Monteiro, S. T.; and Nettleton, E.\n\n\n \n\n\n\n In Australasian Conference on Robotics and Automation (ACRA), Sydney, Australia, Dec. 2009. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{schneider2009development,\n  Title                    = {On the development of a hyperspectral library for autonomous mining},\n  Author                   = {Schneider, Sven and Murphy, Richard J. and Monteiro, Sildomar T. and Nettleton, Eric},\n  Booktitle                = {Australasian Conference on Robotics and Automation (ACRA)},\n  Year                     = {2009},\n\n  Address                  = {Sydney, Australia},\n  Month                    = {Dec.},\n\n  Gsid                     = {dhFuZR0502QC},\n  Journal                  = {Australasian Conference on Robotics and Automation},\n  Url                      = {http://www.asdi.com/getmedia/c3d6e806-e693-4a07-a20e-48eabf3ba807/On-the-development-of-a-hyperspectral-library-for-autonomous-mining-systems.pdf}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectral feature selection for automated rock recognition using gaussian process classification.\n \n \n \n \n\n\n \n Zhou, H.; Monteiro, S. T.; Hatherly, P.; Ramos, F.; Nettleton, E.; and Oppolzer, F.\n\n\n \n\n\n\n In Australasian Conference on Robotics and Automation (ACRA), Sydney, Australia, Dec. 2009. \n \n\n\n\n
\n\n\n\n \n \n \"SpectralPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{zhou2009spectral,\n  Title                    = {Spectral feature selection for automated rock recognition using gaussian process classification},\n  Author                   = {Zhou, H. and Monteiro, Sildomar T. and Hatherly, P. and Ramos, F. and Nettleton, E. and Oppolzer, F.},\n  Booktitle                = {Australasian Conference on Robotics and Automation (ACRA)},\n  Year                     = {2009},\n\n  Address                  = {Sydney, Australia},\n  Month                    = {Dec.},\n\n  Gsid                     = {W7OEmFMy1HYC},\n  Journal                  = {Proceedings of Australian Conference on Robotics and Automation},\n  Url                      = {http://www.araa.asn.au/acra/acra2009/papers/pap118s1.pdf}\n}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2008\n \n \n (3)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Studies on human skin extraction from hyperspectral data using particle swarm optimization.\n \n \n \n \n\n\n \n Edanaga, T.; Minekawa, Y.; Monteiro, S. T.; and Kosugi, Y.\n\n\n \n\n\n\n Journal of the Japan Society of Photogrammetry and Remote Sensing, 47(3): 23–36. Mar. 2008.\n \n\n\n\n
\n\n\n\n \n \n \"StudiesPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@Article{edanaga2008studies,\n  Title                    = {Studies on human skin extraction from hyperspectral data using particle swarm optimization},\n  Author                   = {Edanaga, Takayuki and Minekawa, Yohei and Monteiro, Sildomar T. and Kosugi, Yukio},\n  Journal                  = {Journal of the Japan Society of Photogrammetry and Remote Sensing},\n  Year                     = {2008},\n\n  Month                    = {Mar.},\n  Number                   = {3},\n  Pages                    = {23--36},\n  Volume                   = {47},\n\n  Abstract                 = {If the automatic detection of the victims is feasible after large-scale disasters, the search efficiency would be significantly improved compared to the traditional visual search method from aircraft. In order to extract the human skin correctly, we focus on the training of an Artificial Neural Network (ANN) by the hyperspectral data. From the standpoint of avoiding the over-fitting problem generated by multi-channel inputs of the hyper-spectral data, it is necessary to be trained by the optimal bands. In this paper, we propose the coupled search method for the optimal number and combination of bands in order to extract the target from the hyperspectral data. The coupled search method is composed of two search methods. The first is a new search method of the maximum value of the evaluation function which searches the optimal number of bands from the non-training data. The other is the search method using the Particle Swarm Optimization which searched the optimal combination of bands from the training data. The ANN is trained by the selected combination of bands, and the results are evaluated. Moreover, the trained network obtained using the coupled search method is compared with the ANN trained by all the bands and the Normalized Difference Human Index.},\n  Doi                      = {10.4287/jsprs.47.3_23},\n  Gsid                     = {qxL8FJ1GzNcC},\n  Url                      = {https://www.jstage.jst.go.jp/article/jsprs1975/47/3/47_3_23/_pdf}\n}\n\n
\n
\n\n\n
\n If the automatic detection of the victims is feasible after large-scale disasters, the search efficiency would be significantly improved compared to the traditional visual search method from aircraft. In order to extract the human skin correctly, we focus on the training of an Artificial Neural Network (ANN) by the hyperspectral data. From the standpoint of avoiding the over-fitting problem generated by multi-channel inputs of the hyper-spectral data, it is necessary to be trained by the optimal bands. In this paper, we propose the coupled search method for the optimal number and combination of bands in order to extract the target from the hyperspectral data. The coupled search method is composed of two search methods. The first is a new search method of the maximum value of the evaluation function which searches the optimal number of bands from the non-training data. The other is the search method using the Particle Swarm Optimization which searched the optimal combination of bands from the training data. The ANN is trained by the selected combination of bands, and the results are evaluated. Moreover, the trained network obtained using the coupled search method is compared with the ANN trained by all the bands and the Normalized Difference Human Index.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-Altitude Hyperspectral Imaging of Naruko Integrated Field for the Interpretation of High-Altitude Observations.\n \n \n \n \n\n\n \n Kosugi, Y.; Guillaume, D.; Takabayashi, Y.; Monteiro, S. T.; Yamaki, M.; Uto, K.; and Saito, G.\n\n\n \n\n\n\n In 6th International Symposium on Integrated Field Science, volume A-2, pages 135– 136, Sendai, Japan, 2008. \n \n\n\n\n
\n\n\n\n \n \n \"Low-AltitudePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{kosugi2008low,\n  Title                    = {Low-Altitude Hyperspectral Imaging of Naruko Integrated Field for the Interpretation of High-Altitude Observations},\n  Author                   = {Kosugi, Yukio and Guillaume, Desjardins and Takabayashi, Yuji and Monteiro, Sildomar T. and Yamaki, Makoto and Uto, Kuniaki and Saito, Genya},\n  Booktitle                = {6th International Symposium on Integrated Field Science},\n  Year                     = {2008},\n\n  Address                  = {Sendai, Japan},\n  Pages                    = {135-- 136},\n  Volume                   = {A-2},\n\n  Journal                  = {6th Intl. Symposium on Integrated Field Science, p. A-2, Sendai, Japan},\n  Url                      = {http://ir.library.tohoku.ac.jp/re/bitstream/10097/48802/1/AA12005506-2009-6-135.pdf}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hyperspectral image classification of grass species in Northeast Japan.\n \n \n \n \n\n\n \n Monteiro, S. T.; Uto, K.; Kosugi, Y.; Oda, K.; Iino, Y.; and Saito, G.\n\n\n \n\n\n\n In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), volume 4, pages 399–402, Boston, MA, Jul. 2008. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"HyperspectralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@InProceedings{monteiro2008hyperspectral,\n  Title                    = {Hyperspectral image classification of grass species in Northeast Japan},\n  Author                   = {Monteiro, Sildomar T. and Uto, Kuniaki and Kosugi, Yukio and Oda, Kunio and Iino, Yoshiyuki and Saito, Genya},\n  Booktitle                = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},\n  Year                     = {2008},\n\n  Address                  = {Boston, MA},\n  Month                    = {Jul.},\n  Organization             = {IEEE},\n  Pages                    = {399--402},\n  Volume                   = {4},\n\n  Abstract                 = {This paper investigates the application of artificial neural networks for classifying grass species from hyperspectral image data. High-resolution spatial and spectral data of localized fields were collected using a hyperspectral sensor mounted on the tip of a crane. The hyperspectral datasets are processed using normalization and second derivative in order to reduce the effect of variations in the intensity level of reflectance and to improve the classification accuracy and generalization performance of the neural network-based model. An experimental comparison of the pre-processing methods shows that the best classification accuracy is obtained by the second derivative transformed dataset. Normalization, and a combination of both methods, did not improve accuracy of the neural network models of our experimental datasets more than simple raw reflectance.},\n  Doi                      = {10.1109/IGARSS.2008.4779742},\n  Keywords                 = {Hyperspectral, image classification, neural networks, normalization, second derivative},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_IGARSS_2008.pdf}\n}\n\n
\n
\n\n\n
\n This paper investigates the application of artificial neural networks for classifying grass species from hyperspectral image data. High-resolution spatial and spectral data of localized fields were collected using a hyperspectral sensor mounted on the tip of a crane. The hyperspectral datasets are processed using normalization and second derivative in order to reduce the effect of variations in the intensity level of reflectance and to improve the classification accuracy and generalization performance of the neural network-based model. An experimental comparison of the pre-processing methods shows that the best classification accuracy is obtained by the second derivative transformed dataset. Normalization, and a combination of both methods, did not improve accuracy of the neural network models of our experimental datasets more than simple raw reflectance.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2007\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Particle Swarms for Feature Extraction of Hyperspectral Data.\n \n \n \n \n\n\n \n Monteiro, S. T; and Kosugi, Y.\n\n\n \n\n\n\n IEICE Transactions on Information and Systems, E90-D(7): 1038–1046. Jul. 2007.\n [Code]\n\n\n\n
\n\n\n\n \n \n \"ParticlePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@Article{Monteiro2007,\n  Title                    = {Particle Swarms for Feature Extraction of Hyperspectral Data},\n  Author                   = {Monteiro, Sildomar T and Kosugi, Yukio},\n  Journal                  = {IEICE Transactions on Information and Systems},\n  Year                     = {2007},\n\n  Month                    = {Jul.},\n  Note                     = {<a href="files/PSO_feature_selection.zip" target="_blank">[Code]</a>},\n  Number                   = {7},\n  Pages                    = {1038--1046},\n  Volume                   = {E90-D},\n\n  Abstract                 = {This paper presents a novel feature extraction algorithm based on particle swarms for processing hyperspectral imagery data. Particle swarm optimization, originally developed for global optimization over continuous spaces, is extended to deal with the problem of feature extraction. A formulation utilizing two swarms of particles was developed to optimize simultaneously a desired performance criterion and the number of selected features. Candidate feature sets were evaluated on a regression problem. Artificial neural networks were trained to construct linear and nonlinear models of chemical concentration of glucose in soybean crops. Experimental results utilizing real-world hyperspectral datasets demonstrate the viability of the method. The particle swarms-based approach presented superior performance in comparison with conventional feature extraction methods, on both linear and nonlinear models.},\n  Doi                      = {10.1093/ietisy/e90-d.7.1038},\n  Gsid                     = {IjCSPb-OGe4C},\n  Keywords                 = {Feature extraction, Particle swarm optimization, Hyperspectral data, Neural networks, Principal components analysis},\n  Publisher                = {The Institute of Electronics, Information and Communication Engineers},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_IEICETrans_2007.pdf}\n}\n\n
\n
\n\n\n
\n This paper presents a novel feature extraction algorithm based on particle swarms for processing hyperspectral imagery data. Particle swarm optimization, originally developed for global optimization over continuous spaces, is extended to deal with the problem of feature extraction. A formulation utilizing two swarms of particles was developed to optimize simultaneously a desired performance criterion and the number of selected features. Candidate feature sets were evaluated on a regression problem. Artificial neural networks were trained to construct linear and nonlinear models of chemical concentration of glucose in soybean crops. Experimental results utilizing real-world hyperspectral datasets demonstrate the viability of the method. The particle swarms-based approach presented superior performance in comparison with conventional feature extraction methods, on both linear and nonlinear models.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Applying particle swarm intelligence for feature selection of spectral imagery.\n \n \n \n \n\n\n \n Monteiro, S. T.; and Kosugi, Y.\n\n\n \n\n\n\n In 7th International Conference on Intelligent Systems Design and Applications (ISDA), pages 933–938, Rio de Janeiro, Brazil, Oct. 2007. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"ApplyingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{monteiro2007applying,\n  Title                    = {Applying particle swarm intelligence for feature selection of spectral imagery},\n  Author                   = {Monteiro, Sildomar T. and Kosugi, Yukio},\n  Booktitle                = {7th International Conference on Intelligent Systems Design and Applications (ISDA)},\n  Year                     = {2007},\n\n  Address                  = {Rio de Janeiro, Brazil},\n  Month                    = {Oct.},\n  Organization             = {IEEE},\n  Pages                    = {933--938},\n\n  Abstract                 = {Feature selection is necessary to reduce the dimensionality of spectral image data. Particle swarm optimization was originally developed to search only continuous spaces and, although many applications on discrete spaces had been proposed, it could not tackle the problem of feature selection directly. We developed a formulation utilizing two particle swarms in order to optimize a desired performance criterion and the number of selected features, simultaneously. Candidate feature sets were evaluated on a regression problem modeled using neural networks, which were trained to construct models of chemical concentration of glucose in soybeans. We present experimental results utilizing real-world spectral image data to attest the viability of the method. The particle swarms approach presented superior performance for linear modeling of chemical contents when compared to a conventional feature extraction method.},\n  Doi                      = {10.1109/ISDA.2007.95},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_ISDA_2007.pdf}\n}\n\n
\n
\n\n\n
\n Feature selection is necessary to reduce the dimensionality of spectral image data. Particle swarm optimization was originally developed to search only continuous spaces and, although many applications on discrete spaces had been proposed, it could not tackle the problem of feature selection directly. We developed a formulation utilizing two particle swarms in order to optimize a desired performance criterion and the number of selected features, simultaneously. Candidate feature sets were evaluated on a regression problem modeled using neural networks, which were trained to construct models of chemical concentration of glucose in soybeans. We present experimental results utilizing real-world spectral image data to attest the viability of the method. The particle swarms approach presented superior performance for linear modeling of chemical contents when compared to a conventional feature extraction method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A particle swarm optimization-based approach for hyperspectral band selection.\n \n \n \n \n\n\n \n Monteiro, S. T.; and Kosugi, Y.\n\n\n \n\n\n\n In IEEE Congress on Evolutionary Computation (CEC), pages 3335–3340, Singapore, Sep. 2007. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{monteiro2007particle,\n  Title                    = {A particle swarm optimization-based approach for hyperspectral band selection},\n  Author                   = {Monteiro, Sildomar T. and Kosugi, Yukio},\n  Booktitle                = {IEEE Congress on Evolutionary Computation (CEC)},\n  Year                     = {2007},\n\n  Address                  = {Singapore},\n  Month                    = {Sep.},\n  Organization             = {IEEE},\n  Pages                    = {3335--3340},\n\n  Abstract                 = {In this paper, a feature selection algorithm based on particle swarm optimization for processing remotely acquired hyperspectral data is presented. Since particle swarm optimization was originally developed to search only continuous spaces, it could not deal with the problem of spectral band selection directly. We propose a method utilizing two swarms of particles in order to optimize simultaneously a desired performance criterion and the number of selected features. The candidate feature sets were evaluated on a regression problem using artificial neural networks to construct nonlinear models of chemical concentration of glucose in soybean crops. Experimental results attesting the viability of the method utilizing real- world hyperspectral data are presented. The particle swarm optimization-based approach presented superior performance in comparison with a conventional feature extraction method.},\n  Doi                      = {10.1109/CEC.2007.4424902},\n  Gsid                     = {qjMakFHDy7sC},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_CEC_2007.pdf}\n}\n\n
\n
\n\n\n
\n In this paper, a feature selection algorithm based on particle swarm optimization for processing remotely acquired hyperspectral data is presented. Since particle swarm optimization was originally developed to search only continuous spaces, it could not deal with the problem of spectral band selection directly. We propose a method utilizing two swarms of particles in order to optimize simultaneously a desired performance criterion and the number of selected features. The candidate feature sets were evaluated on a regression problem using artificial neural networks to construct nonlinear models of chemical concentration of glucose in soybean crops. Experimental results attesting the viability of the method utilizing real- world hyperspectral data are presented. The particle swarm optimization-based approach presented superior performance in comparison with a conventional feature extraction method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Prediction of sweetness and amino acid content in soybean crops from hyperspectral imagery.\n \n \n \n \n\n\n \n Monteiro, S. T.; Minekawa, Y.; Kosugi, Y.; Akazawa, T.; and Oda, K.\n\n\n \n\n\n\n ISPRS Journal of Photogrammetry and Remote Sensing, 62(1): 2–12. May 2007.\n \n\n\n\n
\n\n\n\n \n \n \"PredictionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@Article{monteiro2007prediction,\n  Title                    = {Prediction of sweetness and amino acid content in soybean crops from hyperspectral imagery},\n  Author                   = {Monteiro, Sildomar T. and Minekawa, Yohei and Kosugi, Yukio and Akazawa, Tsuneya and Oda, Kunio},\n  Journal                  = {ISPRS Journal of Photogrammetry and Remote Sensing},\n  Year                     = {2007},\n\n  Month                    = {May},\n  Number                   = {1},\n  Pages                    = {2--12},\n  Volume                   = {62},\n\n  Abstract                 = {Hyperspectral image data provides a powerful tool for non-destructive crop analysis. This paper investigates a hyperspectral image data-processing method to predict the sweetness and amino acid content of soybean crops. Regression models based on artificial neural networks were developed in order to calculate the level of sucrose, glucose, fructose, and nitrogen concentrations, which can be related to the sweetness and amino acid content of vegetables. A performance analysis was conducted comparing regression models obtained using different preprocessing methods, namely, raw reflectance, second derivative, and principal components analysis. This method is demonstrated using high-resolution hyperspectral data of wavelengths ranging from the visible to the near infrared acquired from an experimental field of green vegetable soybeans. The best predictions were achieved using a nonlinear regression model of the second derivative transformed dataset. Glucose could be predicted with greater accuracy, followed by sucrose, fructose and nitrogen. The proposed method provides the possibility to provide relatively accurate maps predicting the chemical content of soybean crop fields.},\n  Doi                      = {10.1016/j.isprsjprs.2006.12.002},\n  Gsid                     = {u5HHmVD_uO8C},\n  Keywords                 = {Agriculture, Hyperspectral image, Modeling, Neural networks, Spatial prediction},\n  Publisher                = {Elsevier},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_JISPRS_2007.pdf}\n}\n\n
\n
\n\n\n
\n Hyperspectral image data provides a powerful tool for non-destructive crop analysis. This paper investigates a hyperspectral image data-processing method to predict the sweetness and amino acid content of soybean crops. Regression models based on artificial neural networks were developed in order to calculate the level of sucrose, glucose, fructose, and nitrogen concentrations, which can be related to the sweetness and amino acid content of vegetables. A performance analysis was conducted comparing regression models obtained using different preprocessing methods, namely, raw reflectance, second derivative, and principal components analysis. This method is demonstrated using high-resolution hyperspectral data of wavelengths ranging from the visible to the near infrared acquired from an experimental field of green vegetable soybeans. The best predictions were achieved using a nonlinear regression model of the second-derivative-transformed dataset. Glucose was predicted with the greatest accuracy, followed by sucrose, fructose and nitrogen. The proposed method makes it possible to produce relatively accurate maps predicting the chemical content of soybean crop fields.\n
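As a rough illustration of the preprocessing comparison described above, the sketch below computes the second derivative of each synthetic spectrum with respect to wavelength and compares a simple ridge regressor on raw versus derivative features; the regressor stands in for the paper's neural-network models, and the data, wavelength range, and parameters are assumptions.

import numpy as np

rng = np.random.default_rng(1)
n_samples, n_bands = 120, 256
wavelengths = np.linspace(400.0, 1000.0, n_bands)               # nm, visible to NIR
reflectance = rng.random((n_samples, n_bands)).cumsum(axis=1)   # smooth-ish toy spectra
reflectance /= reflectance.max(axis=1, keepdims=True)
glucose = reflectance[:, 100] - reflectance[:, 180] + 0.05 * rng.normal(size=n_samples)

# Second derivative of each spectrum with respect to wavelength.
d2 = np.gradient(np.gradient(reflectance, wavelengths, axis=1), wavelengths, axis=1)

def ridge_fit_predict(X, y, alpha=1e-2):
    """Fit a ridge regressor with a bias term and return in-sample predictions."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w = np.linalg.solve(Xb.T @ Xb + alpha * np.eye(Xb.shape[1]), Xb.T @ y)
    return Xb @ w

for name, feats in [("raw reflectance", reflectance), ("second derivative", d2)]:
    pred = ridge_fit_predict(feats, glucose)
    rmse = np.sqrt(np.mean((pred - glucose) ** 2))
    print(f"{name}: in-sample RMSE = {rmse:.4f}")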
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2006\n \n \n (4)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Means and equipments for surgical viewing aid.\n \n \n \n\n\n \n Kosugi, Y.; Monteiro, S. T.; Uto, K.; and Watanabe, E.\n\n\n \n\n\n\n 2006.\n [Abstract]\n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@Patent{Patent2006,\n  Title                    = {Means and equipments for surgical viewing aid},\n  Nationality              = {Japan},\n  Number                   = {US patent application 60/604,743, Japan Patent 2006-085688},\n  Year                     = {2006},\n  Yearfiled                = {2005},\n  Author                   = {Kosugi, Y. and Monteiro, Sildomar T. and Uto, K. and Watanabe, E.},\n  Note                     = {<a href="http://www.directorypatent.com/JP/2006-085688.html" target="_blank">[Abstract]</a>},\n\n  Owner                    = {stmeee},\n  Timestamp                = {2014.06.10}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Prediction of sweetness and nitrogen content in soybean crops from high resolution hyperspectral imagery.\n \n \n \n \n\n\n \n Monteiro, S. T.; Minekawa, Y.; Kosugi, Y.; Akazawa, T.; and Oda, K.\n\n\n \n\n\n\n In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 2263–2266, Denver, CO, Aug. 2006. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"PredictionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{monteiro2006prediction,\n  Title                    = {Prediction of sweetness and nitrogen content in soybean crops from high resolution hyperspectral imagery},\n  Author                   = {Monteiro, Sildomar Takahashi and Minekawa, Yohei and Kosugi, Yukio and Akazawa, Tsuneya and Oda, Kunio},\n  Booktitle                = {IEEE International Geoscience and Remote Sensing Symposium (IGARSS)},\n  Year                     = {2006},\n\n  Address                  = {Denver, CO},\n  Month                    = {Aug.},\n  Organization             = {IEEE},\n  Pages                    = {2263--2266},\n\n  Abstract                 = {In this paper, we investigate a hyperspectral imagery data processing method to predict the sweetness and amino acid content of green vegetal soybean crops. Regression models based on neural networks were developed in order to calculate the level of sucrose, glucose, and nitrogen concentration, which can be related to sweetness and amino acid concentration of vegetables. We demonstrate the method using hyperspectral data of wavelengths ranging from the visible to the near infrared acquired from an experimental field of green vegetal soybeans. A performance analysis is reported comparing regression models built using datasets pre-processed using the first and second derivative analysis. The second derivative transformed dataset presented the best performance overall. Glucose could be predicted with greater accuracy.},\n  Doi                      = {10.1109/IGARSS.2006.585},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_IGARSS_2006.pdf}\n}\n\n
\n
\n\n\n
\n In this paper, we investigate a hyperspectral imagery data processing method to predict the sweetness and amino acid content of green vegetal soybean crops. Regression models based on neural networks were developed in order to calculate the level of sucrose, glucose, and nitrogen concentration, which can be related to sweetness and amino acid concentration of vegetables. We demonstrate the method using hyperspectral data of wavelengths ranging from the visible to the near infrared acquired from an experimental field of green vegetal soybeans. A performance analysis is reported comparing regression models built from datasets pre-processed with first- and second-derivative analysis. The second-derivative-transformed dataset presented the best performance overall. Glucose was predicted with the greatest accuracy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n High Resolution Hyperspectral Imagery for Estimating Sweetness Content in Soybean Crops.\n \n \n \n \n\n\n \n Monteiro, S. T.; Minekawa, Y.; Kosugi, Y.; Akazawa, T.; and Oda, K.\n\n\n \n\n\n\n In Institute of Electronics, Information and Communication Engineers General Conference (IEICE), volume BS-6-13, pages SE-24, 2006. \n \n\n\n\n
\n\n\n\n \n \n \"HighPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{sildomar2006high,\n  Title                    = {High Resolution Hyperspectral Imagery for Estimating Sweetness Content in Soybean Crops},\n  Author                   = {Monteiro, Sildomar T. and Minekawa, Yohei and Kosugi, Yukio and Akazawa, Tsuneya and Oda, Kunio},\n  Booktitle                = {Institute of Electronics, Information and Communication Engineers General Conference (IEICE)},\n  Year                     = {2006},\n  Pages                    = {SE-24},\n  Volume                   = {BS-6-13},\n\n  Journal                  = {Proceedings of the IEICE General Conference (Institute of Electronics, Information and Communication Engineers)},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_IEICE_2006.pdf}\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimization of Infrared Spectral Manipulation for Surgical Visual Aid.\n \n \n \n \n\n\n \n Monteiro, S. T.; Uto, K.; Kosugi, Y.; Kobayashi, N.; and Watanabe, E.\n\n\n \n\n\n\n Journal of Japan Society of Computer Aided Surgery, 8(1): 33–38. Jun. 2006.\n \n\n\n\n
\n\n\n\n \n \n \"OptimizationPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@Article{monteiro2006optimization,\n  Title                    = {Optimization of Infrared Spectral Manipulation for Surgical Visual Aid},\n  Author                   = {Monteiro, Sildomar T. and Uto, Kuniaki and Kosugi, Yukio and Kobayashi, Nobuyuki and Watanabe, Eiju},\n  Journal                  = {Journal of Japan Society of Computer Aided Surgery},\n  Year                     = {2006},\n\n  Month                    = {Jun.},\n  Number                   = {1},\n  Pages                    = {33--38},\n  Volume                   = {8},\n\n  Abstract                 = {During a surgical procedure, blood covering the surgical field hinders the surgeon's visual inspection. We propose a novel application of hyperspectral imagery in the biomedical field. We conceived a method to exploit the capabilities of hyperspectral imaging systems in order to provide clearer images of areas covered by blood to the surgeon. We developed a neural network approach to generate a nonlinear combination of spectral reflectance bands in the near infrared region revealing images that could not be seeing in unprocessed images. The experimental results are compared with conventional image processing techniques. We present in vitro experiments using human blood and in situ experiments using guinea pigs to attest the validity of the proposed method.},\n  Gsid                     = {Tyk-4Ss8FVUC},\n  Keywords                 = {Image-guided surgery, Infrared medical imaging, Hyperspectral imagery, Neural networks, Spectral manipulation},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_JJSCAS_2006.pdf}\n}\n\n
\n
\n\n\n
\n During a surgical procedure, blood covering the surgical field hinders the surgeon's visual inspection. We propose a novel application of hyperspectral imagery in the biomedical field. We conceived a method to exploit the capabilities of hyperspectral imaging systems in order to provide clearer images of areas covered by blood to the surgeon. We developed a neural network approach to generate a nonlinear combination of spectral reflectance bands in the near infrared region, revealing images that could not be seen in unprocessed images. The experimental results are compared with conventional image processing techniques. We present in vitro experiments using human blood and in situ experiments using guinea pigs to validate the proposed method.\n
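The sketch below only illustrates the general shape of such a per-pixel nonlinear band combination: a tiny multilayer perceptron maps each pixel's near-infrared bands to a single enhanced intensity. The network here is randomly initialised for demonstration (the paper trains the weights so the output reveals structure under the blood layer), and the cube dimensions and layer sizes are assumptions.

import numpy as np

rng = np.random.default_rng(2)
H, W, B = 64, 64, 16                      # image height, width, NIR band count
cube = rng.random((H, W, B)).astype(np.float32)

# One hidden layer with tanh units and a single linear output per pixel.
W1, b1 = rng.normal(scale=0.5, size=(B, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)

pixels = cube.reshape(-1, B)              # (H*W, B): treat each pixel as a sample
hidden = np.tanh(pixels @ W1 + b1)
enhanced = (hidden @ W2 + b2).reshape(H, W)

# Stretch to [0, 1] so the result can be displayed as a grayscale image.
enhanced = (enhanced - enhanced.min()) / (enhanced.max() - enhanced.min() + 1e-9)
print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))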
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2005\n \n \n (2)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Robust Mobile Robot Map Building Using Sonar and Vision: Evolving the navigation abilities of the ApriAlpha home robot.\n \n \n \n \n\n\n \n Monteiro, S. T.; Nakamoto, H.; Ogawa, H.; and Matsuhira, N.\n\n\n \n\n\n\n In JSME Conference on Robotics and Mechatronics (ROBOMEC), volume 2P1-N-052, pages 1–4, Kobe, Japan, 2005. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{monteiro2005robust,\n  Title                    = {Robust Mobile Robot Map Building Using Sonar and Vision: Evolving the navigation abilities of the ApriAlpha home robot},\n  Author                   = {Monteiro, Sildomar T. and Nakamoto, Hideichi and Ogawa, Hideki and Matsuhira, Nobuto},\n  Booktitle                = {JSME Conference on Robotics and Mechatronics (ROBOMEC)},\n  Year                     = {2005},\n\n  Address                  = {Kobe, Japan},\n  Pages                    = {1--4},\n  Volume                   = {2P1-N-052},\n\n  Abstract                 = {We describe the development of a robust map acquisition method for mobile robots in indoor environments. A grid-based representation of the environment is derived from sonar sensor data, and, concurrently, corners and edges are detected and matched with visual landmarks in order to correct the robot pose estimation. We present experimental results of maps acquired using the ApriAlpha home robot.},\n  Journal                  = {Nippon Kikai Gakkai Robotikusu, Mekatoronikusu Koenkai Koen Ronbunshu (CD-ROM)},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_ROBOMEC_2005.pdf}\n}\n\n
\n
\n\n\n
\n We describe the development of a robust map acquisition method for mobile robots in indoor environments. A grid-based representation of the environment is derived from sonar sensor data, and, concurrently, corners and edges are detected and matched with visual landmarks in order to correct the robot pose estimation. We present experimental results of maps acquired using the ApriAlpha home robot.\n
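For context, the sketch below shows a generic log-odds occupancy-grid update from a single sonar return, the kind of grid-based representation mentioned above; the landmark-based pose correction is not shown, and the cell size, sensor model, and robot pose are assumptions.

import math
import numpy as np

GRID, CELL = 100, 0.1                 # 100x100 cells, 10 cm per cell
logodds = np.zeros((GRID, GRID))      # 0 = unknown
L_FREE, L_OCC = -0.4, 0.85            # assumed sensor-model log-odds increments

def update_beam(x, y, heading, range_m):
    """Mark cells along the beam as free and the cell at the return as occupied."""
    steps = int(round(range_m / CELL))
    for k in range(steps + 1):
        cx = int(round((x + k * CELL * math.cos(heading)) / CELL))
        cy = int(round((y + k * CELL * math.sin(heading)) / CELL))
        if not (0 <= cx < GRID and 0 <= cy < GRID):
            return
        logodds[cy, cx] += L_OCC if k == steps else L_FREE

# Robot at (5 m, 5 m) facing +x, sonar return at 2.3 m.
update_beam(5.0, 5.0, 0.0, 2.3)
prob = 1.0 - 1.0 / (1.0 + np.exp(logodds))     # convert log-odds to P(occupied)
print("occupied cell probability:", prob[50, 50 + int(round(2.3 / CELL))])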
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Feature Extraction of Hyperspectral Data for under Spilled Blood Visualization Using Particle Swarm Optimization.\n \n \n \n \n\n\n \n Monteiro, S. T.; Uto, K.; Kosugi, Y.; Kobayashi, N.; Watanabe, E.; and Kameyama, K.\n\n\n \n\n\n\n International Journal of Bioelectromagnetism, 7(1): 232–235. 2005.\n \n\n\n\n
\n\n\n\n \n \n \"FeaturePaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@Article{monteiro2005feature,\n  Title                    = {Feature Extraction of Hyperspectral Data for under Spilled Blood Visualization Using Particle Swarm Optimization},\n  Author                   = {Monteiro, Sildomar T. and Uto, Kuniaki and Kosugi, Yukio and Kobayashi, Nobuyuki and Watanabe, Eiju and Kameyama, Keisuke},\n  Journal                  = {International Journal of Bioelectromagnetism},\n  Year                     = {2005},\n  Number                   = {1},\n  Pages                    = {232--235},\n  Volume                   = {7},\n\n  Abstract                 = {In this paper, an intraoperative application of a particle swarm optimization based feature extraction algorithm for hyperspectral imagery data visualization is proposed. The objective of the algorithm is to extract the features that generate the best visualization of an area covered by blood. The proposed method uses a binary version of a particle swarm optimizer to select a subset of band wavelengths in the near-infrared region, in which the optical absorption characteristics of blood allows some visual information to be extracted. A linear image transformation is subsequently applied to the selected features. Two transformation equations were tested, for the selection of three and four bands. The transformed image was then evaluated using four different fitness criteria: entropy, Euclidean distance, contrast and correlation. We present experimental results using human blood and an artificial background to validate the method and assess the fitness functions. The entropy better evaluated the amount of visual information under the layer of blood. The four-bands transformation produced higher fitness values and better visualization. The enhanced images of the extracted features revealed good visualizations under the layer of spilled blood.},\n  Gsid                     = {u-x6o8ySG0sC},\n  Keywords                 = {Biomedical imaging, feature extraction, hyperspectral imagery, particle swarm optimization},\n  Url                      = {http://www.ijbem.org/volume7/number1/pdf/061.pdf}\n}\n\n
\n
\n\n\n
\n In this paper, an intraoperative application of a particle swarm optimization-based feature extraction algorithm for hyperspectral imagery data visualization is proposed. The objective of the algorithm is to extract the features that generate the best visualization of an area covered by blood. The proposed method uses a binary version of a particle swarm optimizer to select a subset of band wavelengths in the near-infrared region, in which the optical absorption characteristics of blood allow some visual information to be extracted. A linear image transformation is subsequently applied to the selected features. Two transformation equations were tested, for the selection of three and four bands. The transformed image was then evaluated using four different fitness criteria: entropy, Euclidean distance, contrast and correlation. We present experimental results using human blood and an artificial background to validate the method and assess the fitness functions. Entropy best evaluated the amount of visual information under the layer of blood. The four-band transformation produced higher fitness values and better visualization. The enhanced images of the extracted features revealed good visualizations under the layer of spilled blood.\n
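A small sketch of the entropy fitness criterion described above, under assumed data: a candidate set of bands is combined by a linear transformation into one image, quantised into a 256-bin histogram, and scored by its Shannon entropy; the cube, band indices, and weights are illustrative only.

import numpy as np

rng = np.random.default_rng(3)
cube = rng.random((64, 64, 32))                 # toy hyperspectral cube, 32 NIR bands

def entropy_fitness(cube, bands, weights):
    """Higher entropy suggests more visual detail in the combined image."""
    img = np.tensordot(cube[:, :, bands], weights, axes=([2], [0]))
    img = (img - img.min()) / (img.max() - img.min() + 1e-9)
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

print(entropy_fitness(cube, bands=[5, 12, 27], weights=np.array([0.5, -1.0, 0.8])))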
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2004\n \n \n (3)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Towards applying hyperspectral imagery as an intraoperative visual aid tool.\n \n \n \n \n\n\n \n Monteiro, S. T.; Kosugi, Y.; Uto, K.; and Watanabe, E.\n\n\n \n\n\n\n In 4th IASTED International Conference on Visualization, Imaging and Image Processing (VIIP), pages 483-488, Marbella, Spain, Sep. 2004. \n \n\n\n\n
\n\n\n\n \n \n \"TowardsPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@InProceedings{Monteiro2004,\n  Title                    = {Towards applying hyperspectral imagery as an intraoperative visual aid tool},\n  Author                   = {Monteiro, Sildomar T. and Kosugi, Yukio and Uto, Kuniaki and Watanabe, Eiju},\n  Booktitle                = {4th IASTED International Conference on Visualization, Imaging and Image Processing (VIIP)},\n  Year                     = {2004},\n\n  Address                  = {Marbella, Spain},\n  Month                    = {Sep.},\n  Pages                    = {483-488},\n\n  Abstract                 = {During a surgery, the inevitable presence of blood covering the surgical field demands efforts to keep the area as clean as possible. A new hyperspectral data processing method is being developed to deliver clearer images to the surgeon. The analysis of optical absorption properties of the blood and water indicates that, between the visible and near infrared spectral regions, some valuable information under the blood layer may be obtained using a spectral imaging system. We propose a neural network approach to provide a nonlinear combination of spectral band reflectance in order to reveal images that could not be seeing with unprocessed images. This paper describes the implementation of single-layer and multi-layer perceptron architectures to perform the hyperspectral data processing. We present experimental results attesting the viability of the proposed method. We demonstrate that hyperspectral imagery can be exploited\nas visual aid for surgical guidance.},\n  Gsid                     = {Y0pCki6q_DkC},\n  Journal                  = {Proc. 4th International Conference on Visualization, Imaging and Image Processing, Marbella, Spain},\n  Keywords                 = {Hyperspectral imagery, neural networks, blood, surgical guidance, image manipulation, medical imaging},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_VIIP_2004.pdf}\n}\n\n
\n
\n\n\n
\n During surgery, the inevitable presence of blood covering the surgical field demands efforts to keep the area as clean as possible. A new hyperspectral data processing method is being developed to deliver clearer images to the surgeon. The analysis of optical absorption properties of blood and water indicates that, between the visible and near infrared spectral regions, some valuable information under the blood layer may be obtained using a spectral imaging system. We propose a neural network approach to provide a nonlinear combination of spectral band reflectance in order to reveal images that could not be seen in unprocessed images. This paper describes the implementation of single-layer and multi-layer perceptron architectures to perform the hyperspectral data processing. We present experimental results attesting the viability of the proposed method. We demonstrate that hyperspectral imagery can be exploited as a visual aid for surgical guidance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Desempenho de algoritmos de aprendizagem por reforço sob condições de ambiguidade sensorial em robótica móvel.\n \n \n \n \n\n\n \n Monteiro, S. T.; and Ribeiro, C. H.\n\n\n \n\n\n\n SBA: Controle & Automação, 15(3): 320–338. Jul. 2004.\n \n\n\n\n
\n\n\n\n \n \n \"DesempenhoPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@Article{monteiro2004desempenho,\n  Title                    = {Desempenho de algoritmos de aprendizagem por refor{\\c{c}}o sob condi{\\c{c}}{\\~o}es de ambiguidade sensorial em rob{\\'o}tica m{\\'o}vel},\n  Author                   = {Monteiro, Sildomar T. and Ribeiro, Carlos H.C.},\n  Journal                  = {{SBA}: Controle \\& Automa{\\c{c}}{\\~a}o},\n  Year                     = {2004},\n\n  Month                    = {Jul.},\n  Number                   = {3},\n  Pages                    = {320--338},\n  Volume                   = {15},\n\n  Abstract                 = {We analyzed the performance variation of reinforcement learning algorithms in ambiguous state situations commonly caused by the low sensing capability of mobile robots. This variation is caused by violation of the Markov condition, which is important to guarantee convergence of these algorithms. Practical consequences of this violation in real systems are not firmly established in the literature. The algorithms assessed in this study were Q-learning, Sarsa and Q(l), and the experiments were performed on a Magellan Pro robot. A method to build variable resolution cognitive maps of the environment was implemented in order to provide realistic data for the experiments. The implemented learning algorithms presented satisfactory performance on real systems, with a graceful degradation of efficiency due to state ambiguity. The Q-learning algorithm accomplished the best performance, followed by the Sarsa algorithm. The Q(l) algorithm had its performance restrained by experimental parameters. The cognitive map learning method revealed to be quite efficient, allowing adequate algorithms assessment.},\n  Doi                      = {10.1590/S0103-17592004000300008},\n  Gsid                     = {d1gkVwhDpl0C},\n  Keywords                 = {Autonomous mobile robots, reinforcement learning, map learning, Neural networks},\n  Publisher                = {Sociedade Brasileira de Autom{\\'a}tica},\n  Url                      = {http://www.scielo.br/pdf/ca/v15n3/a08v15n3.pdf}\n}\n\n
\n
\n\n\n
\n We analyzed the performance variation of reinforcement learning algorithms in ambiguous state situations commonly caused by the low sensing capability of mobile robots. This variation is caused by violation of the Markov condition, which is important to guarantee convergence of these algorithms. Practical consequences of this violation in real systems are not firmly established in the literature. The algorithms assessed in this study were Q-learning, Sarsa and Q(λ), and the experiments were performed on a Magellan Pro robot. A method to build variable resolution cognitive maps of the environment was implemented in order to provide realistic data for the experiments. The implemented learning algorithms presented satisfactory performance on real systems, with a graceful degradation of efficiency due to state ambiguity. The Q-learning algorithm achieved the best performance, followed by the Sarsa algorithm. The Q(λ) algorithm had its performance restrained by the experimental parameters. The cognitive map learning method proved to be quite efficient, allowing adequate assessment of the algorithms.\n
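As a reference point for the algorithms compared above, the sketch below runs tabular Q-learning and Sarsa on a toy corridor task; Q(λ), the robot experiments, and the state-ambiguity conditions are not reproduced, and the environment and hyperparameters are assumptions.

import numpy as np

rng = np.random.default_rng(4)
N_STATES = 6                      # corridor cells 0..5; reaching cell 5 ends the episode
ACTIONS = (-1, +1)                # action 0 = step left, action 1 = step right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(s, a):
    """Move in the corridor; reward 1 only when the goal cell is reached."""
    s2 = min(max(s + ACTIONS[a], 0), N_STATES - 1)
    done = s2 == N_STATES - 1
    return s2, (1.0 if done else 0.0), done

def eps_greedy(Q, s):
    if rng.random() < EPS:
        return int(rng.integers(2))
    best = np.flatnonzero(Q[s] == Q[s].max())   # random tie-breaking
    return int(rng.choice(best))

def q_learning(Q, s, a, r, s2, a2):
    # Off-policy: bootstrap from the greedy value of the next state.
    Q[s, a] += ALPHA * (r + GAMMA * Q[s2].max() - Q[s, a])

def sarsa(Q, s, a, r, s2, a2):
    # On-policy: bootstrap from the action actually selected next.
    Q[s, a] += ALPHA * (r + GAMMA * Q[s2, a2] - Q[s, a])

def run(update, episodes=300):
    Q = np.zeros((N_STATES, 2))
    for _ in range(episodes):
        s, a, done = 0, eps_greedy(Q, 0), False
        while not done:
            s2, r, done = step(s, a)
            a2 = eps_greedy(Q, s2)
            update(Q, s, a, r, s2, a2)
            s, a = s2, a2
    return Q

print("Q-learning values:\n", run(q_learning).round(2))
print("Sarsa values:\n", run(sarsa).round(2))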
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Towards a surgical tool using hyperspectral imagery as visual aid.\n \n \n \n \n\n\n \n Monteiro, S. T; Uto, K.; Kosugi, Y.; and Watanabe, E.\n\n\n \n\n\n\n MICCAI Workshop: Augmented Environments for Medical Imaging and Computer-Aided Surgery (AMI-ARCS), Sep. 2004.\n \n\n\n\n
\n\n\n\n \n \n \"TowardsPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@Misc{monteiro2004towards,\n  Title                    = {Towards a surgical tool using hyperspectral imagery as visual aid},\n\n  Author                   = {Monteiro, Sildomar T and Uto, Kuniaki and Kosugi, Yukio and Watanabe, Eiju},\n  HowPublished             = {MICCAI Workshop: Augmented Environments for Medical Imaging and Computer-Aided Surgery (AMI-ARCS)},\n  Month                    = {Sep.},\n  Year                     = {2004},\n\n  Address                  = {Rennes, France},\n  Pages                    = {97--103},\n  Url                      = {http://ami2004.loria.fr/PAPERS/25ilhimonrta.pdf}\n}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2003\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Aprendizagem da navegação em robôs móveis a partir de mapas obtidos autonomamente.\n \n \n \n \n\n\n \n Monteiro, S. T.; and Ribeiro, C. H.\n\n\n \n\n\n\n In IV Encontro Nacional de Inteligência Artificial (ENIA), volume 1, pages 152–152, Campinas, Brazil, 2003. \n \n\n\n\n
\n\n\n\n \n \n \"AprendizagemPaper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@InProceedings{monteiro2003aprendizagem,\n  Title                    = {Aprendizagem da navega{\\c{c}}{\\~a}o em rob\\^{o}s m{\\'o}veis a partir de mapas obtidos autonomamente},\n  Author                   = {Monteiro, Sildomar Takahashi and Ribeiro, Carlos HC},\n  Booktitle                = {IV Encontro Nacional de Intelig{\\^e}ncia Artificial (ENIA)},\n  Year                     = {2003},\n\n  Address                  = {Campinas, Brazil},\n  Pages                    = {152--152},\n  Volume                   = {1},\n\n  Abstract                 = {We analyzed the performance of reinforcement learning algorithms in a navigation problem where maps were generated autonomously by a mobile Magellan Pro robot. The algorithms assessed in this study were Q-learning, Sarsa and Q(l), and a method to build variable resolution cognitive maps of the environment was implemented in order to create a performance verifier of the learning algorithms. The learning algorithms presented satisfactory performance, although with a graceful degradation of efficiency due to state ambiguity. The Q-learning algorithm accomplished the best performance over the experiments, followed by the Sarsa algorithm. The Q(l) algorithm had its performance restrained by experiments parameters. The cognitive map learning method revealed to be efficient and allowed\nadequate algorithms assessment.},\n  Journal                  = {Encontro Nacional de Inteligencia Artificial-XXIII Congresso da Sociedade Brasileira de Computa{\\c{c}}{\\~a}o},\n  Url                      = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_ENIA_2003.pdf}\n}\n\n
\n
\n\n\n
\n We analyzed the performance of reinforcement learning algorithms in a navigation problem where maps were generated autonomously by a mobile Magellan Pro robot. The algorithms assessed in this study were Q-learning, Sarsa and Q(λ), and a method to build variable resolution cognitive maps of the environment was implemented in order to create a performance verifier for the learning algorithms. The learning algorithms presented satisfactory performance, although with a graceful degradation of efficiency due to state ambiguity. The Q-learning algorithm achieved the best performance over the experiments, followed by the Sarsa algorithm. The Q(λ) algorithm had its performance restrained by the experimental parameters. The cognitive map learning method proved to be efficient and allowed adequate assessment of the algorithms.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2002\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Obtenção de mapas cognitivos para o robô móvel Magellan Pro.\n \n \n \n \n\n\n \n Monteiro, S. T.; and Ribeiro, C. H.\n\n\n \n\n\n\n In XIV Congresso Brasileiro de Automática (CBA), pages 1543-1548, Natal, Brazil, 2002. \n \n\n\n\n
\n\n\n\n \n \n \"ObtençãoPaper\n  \n \n \n \"Obtenção citedby\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@InProceedings{monteiro2002obtenccao,\n  author      = {Monteiro, Sildomar T. and Ribeiro, Carlos H.C.},\n  title       = {Obten{\\c{c}}{\\~a}o de mapas cognitivos para o rob\\^{o} m{\\'o}vel Magellan Pro},\n  booktitle   = {XIV Congresso Brasileiro de Autom{\\'a}tica (CBA)},\n  year        = {2002},\n  pages       = {1543-1548},\n  url         = {https://github.com/sildomar/sildomar.github.io/raw/master/files/Monteiro_CBA_2002.pdf},\n  abstract    = {This paper presents the implementation of a method for the efficient learning of cognitive maps of indoor environments on a Magellan Pro mobile robot. The obtained world model is constituted by partitions of variable size representing perceptually homogeneous regions of the environment. The method used is a combination of metric and topological paradigms and produces a map of rectangular partitions of low complexity, based on the assumption that obstacles are parallel or perpendicular to each other. A neural network is used to interpret sonar readings. Experimental results show that the method allows learning of a reliable map of the environment for the Magellan Pro robot, using solely sonar readings.},\n  address     = {Natal, Brazil},\n  journal     = {Anais do XII CBA},\n  keywords    = {Autonomous mobile robots, map learning, neural networks, occupancy grid, robotics, topological graph},\n  url_citedby = {https://scholar.google.com/scholar?cites=9767360953387033821},\n}\n\n
\n
\n\n\n
\n This paper presents the implementation of a method for the efficient learning of cognitive maps of indoor environments on a Magellan Pro mobile robot. The resulting world model consists of variable-size partitions representing perceptually homogeneous regions of the environment. The method combines metric and topological paradigms and produces a map of rectangular partitions of low complexity, based on the assumption that obstacles are parallel or perpendicular to each other. A neural network is used to interpret the sonar readings. Experimental results show that the method learns a reliable map of the environment for the Magellan Pro robot using sonar readings alone.\n
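The sketch below illustrates variable-resolution partitioning in the abstract sense only: an occupancy grid is recursively split into rectangles until each is homogeneous, quadtree style. It is not the paper's metric-topological method and omits the neural sonar interpretation; the grid contents are an assumption.

import numpy as np

grid = np.zeros((32, 32), dtype=int)
grid[10:22, 14:18] = 1                      # a wall of occupied cells

def split(r0, r1, c0, c1, out):
    """Append (row0, row1, col0, col1, label) rectangles covering the region."""
    block = grid[r0:r1, c0:c1]
    occ = block.mean()
    if occ in (0.0, 1.0) or (r1 - r0) <= 1 or (c1 - c0) <= 1:
        out.append((r0, r1, c0, c1, int(round(occ))))
        return
    rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
    for rr0, rr1 in ((r0, rm), (rm, r1)):
        for cc0, cc1 in ((c0, cm), (cm, c1)):
            split(rr0, rr1, cc0, cc1, out)

rects = []
split(0, 32, 0, 32, rects)
print(len(rects), "rectangles; e.g.", rects[:3])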
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n\n\n\n\n
\n\n\n \n\n \n \n \n \n\n
\n"}; document.write(bibbase_data.data);