Publications by Paul Honeine, generated by bibbase.org (source: http://honeine.fr/paul/biblio_ph.bib).
2024 (3)
Rakotonirina, H.; Honeine, P.; Atteia, O.; and Van Exem, A. Spatial Interpolation and Conditional Map Generation using Deep Image Prior for Environmental Applications. Mathematical Geosciences, January 2024.
@article{24.deepprior,
  title = {Spatial Interpolation and Conditional Map Generation using Deep Image Prior for Environmental Applications},
  journal = {Mathematical Geosciences},
  year = {2024},
  author = {Herbert Rakotonirina and Paul Honeine and Olivier Atteia and Antonin Van Exem},
  month = jan,
  doi = {10.1007/s11004-023-10125-2},
  keywords = {Geostatistics, Deep learning, Environmental data, Kriging, Soil pollution, Geostatistical conditional simulation},
  abstract = {Kriging is the most widely used spatial interpolation method in geostatistics. For many environmental applications, kriging may have to satisfy the stationarity and isotropy hypotheses, and new techniques using machine learning suffer from a lack of labeled data. In this paper, we propose to use Deep Image Prior, a U-net-like deep neural network designed for image reconstruction, to perform spatial interpolation and conditional map generation without any prior learning. This approach allows us to overcome the assumptions required for kriging, as well as the lack of labeled data, while providing uncertainty estimates and the probability of exceeding a given threshold. The proposed method is based on a convolutional neural network that generates a map from random values by minimizing the difference between the output map and the observed values. From this new method of spatial interpolation, we generate n maps to obtain a map of uncertainty and a map of the probability of exceeding the threshold. The conducted experiments demonstrate the relevance of the proposed methods for spatial interpolation, on both the well-known digital elevation model data and the more challenging case of pollution mapping. The obtained results with the three datasets demonstrate competitive performance compared with state-of-the-art methods.},
}
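The core idea of this entry, a generator fitted by minimizing the mismatch with observations only at measured locations, can be sketched in a few lines. The following toy is a sketch only: it substitutes a low-rank factorization for the paper's U-Net generator, and all data, sizes, and learning-rate values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: a smooth rank-1 field on a 16x16 grid.
x = np.linspace(0.0, 1.0, 16)
field = np.sin(2 * np.pi * x)[:, None] * np.cos(2 * np.pi * x)[None, :]

# Sparse observations: only ~25% of the cells are "measured".
mask = rng.random(field.shape) < 0.25
obs = np.where(mask, field, 0.0)

# Stand-in "generator": a low-rank factorization U @ V fitted by gradient
# descent on a loss evaluated at observed cells only -- the same masked-loss
# principle the paper applies with a U-Net fed with random inputs.
r = 4
U = 0.3 * rng.standard_normal((16, r))
V = 0.3 * rng.standard_normal((r, 16))
lr = 0.05
initial_mse = ((U @ V - obs)[mask] ** 2).mean()
for _ in range(3000):
    out = U @ V
    grad = mask * (out - obs)          # gradient is zero at unobserved cells
    U, V = U - lr * grad @ V.T, V - lr * U.T @ grad

interpolated = U @ V                    # full map, unobserved cells included
final_mse = ((interpolated - obs)[mask] ** 2).mean()
```

Repeating such a fit from different random initializations yields the ensemble of n maps from which the paper derives uncertainty and threshold-exceedance probabilities.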
Rakotonirina, H.; Honeine, P.; Atteia, O.; and Van Exem, A. Estimating Contaminated Soil Volumes Using a Generative Neural Network: A Hydrocarbon Case in France. In Proc. 15th International Conference on Geostatistics for Environmental Applications (geoENV), Chania, Crete, Greece, 19 June 2024.
@INPROCEEDINGS{24.geoenv,
  author = {Herbert Rakotonirina and Paul Honeine and Olivier Atteia and Antonin Van Exem},
  title = {Estimating Contaminated Soil Volumes Using a Generative Neural Network: A Hydrocarbon Case in France},
  booktitle = {Proc. 15th International Conference on Geostatistics for Environmental Applications (geoENV)},
  address = {Chania, Crete, Greece},
  year = {2024},
  month = "19~" # jun,
  acronym = {geoENV},
  abstract = {The estimation of the volumes of contaminated soil to be treated is a crucial step in soil remediation. Numerous techniques exist for estimating the distribution of pollutants in soils, such as inverse distance weighting, kriging, Gaussian sequential simulation, and sequential indicator simulation. Unfortunately, these methods require significant computational resources to achieve precise estimations. Moreover, both kriging and Gaussian simulation require the transformation of non-normal distributions, often seen in hydrocarbon contamination, to produce accurate results. In response, we propose a generative neural network to generate 3D maps of contaminant distributions without prior training. This differentiates it from other Deep Learning approaches that necessitate training data. The approach relies on a convolutional neural network for image reconstruction. Rather than solely depending on the concentration of chemicals determined in the laboratory, we utilize hyperspectral imaging data from soil cores to achieve a more precise depiction of soil contaminants. We have assessed this approach using a real case of hydrocarbon pollution on a polluted site in France. The method has shown competitive performance with controlled computation times thanks to the utilization of a GPU accelerator. Our study offers a new, practical way to improve soil pollution management using fast, data-based techniques.},
}
Dhaini, M.; Berar, M.; Honeine, P.; and Van Exem, A. Contrastive learning for regression on hyperspectral data. In Proc. 49th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Korea, 14–19 April 2024.
@INPROCEEDINGS{24.icassp,
  author = {Mohamad Dhaini and Maxime Berar and Paul Honeine and Antonin Van Exem},
  title = {Contrastive learning for regression on hyperspectral data},
  booktitle = {Proc. 49th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  address = {Seoul, Korea},
  year = {2024},
  month = "14 - 19~" # apr,
  acronym = {ICASSP},
  url_paper = {https://normandie-univ.hal.science/hal-04360616},
  abstract = {Contrastive learning has demonstrated great effectiveness in representation learning, especially for image classification tasks. However, there is still a shortage of studies targeting regression tasks, and more specifically applications on hyperspectral data. In this paper, we propose a contrastive learning framework for regression tasks on hyperspectral data. To this end, we provide a collection of transformations relevant for augmenting hyperspectral data, and investigate contrastive learning for regression. Experiments on synthetic and real hyperspectral datasets show that the proposed framework and transformations significantly improve the performance of regression models, achieving better scores than other state-of-the-art transformations.},
}
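The contrastive setup this entry describes, augmented views of each spectrum pulled together in embedding space, can be illustrated with a standard NT-Xent (SimCLR-style) loss. The two augmentations below (band-wise noise and random band dropout) are illustrative choices, not necessarily the transformations proposed in the paper, and the data is synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def augment(spectra, rng):
    # Two simple hyperspectral augmentations (illustrative assumptions):
    # additive band-wise Gaussian noise, then random dropout of ~10% of bands.
    noisy = spectra + 0.01 * rng.standard_normal(spectra.shape)
    keep = rng.random(spectra.shape) > 0.1
    return noisy * keep

def nt_xent(z1, z2, tau=0.5):
    # NT-Xent loss over a batch of paired embeddings: each view's positive
    # is its counterpart; all other embeddings in the batch are negatives.
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / tau
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)          # exclude self-similarity
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logp = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logp[np.arange(2 * n), targets].mean()

batch = rng.standard_normal((8, 100))       # 8 synthetic spectra, 100 bands
z1, z2 = augment(batch, rng), augment(batch, rng)
loss = nt_xent(z1, z2)
```

In a full pipeline the loss would be computed on encoder outputs rather than raw spectra; matched views yield a lower loss than unrelated ones, which is the training signal.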
2023 (11)
El Jurdi, R.; Sekkat, A. R.; Dupuis, Y.; Vasseur, P.; and Honeine, P. Fully Residual Unet-based Semantic Segmentation of Automotive Fisheye Images: a Comparison of Rectangular and Deformable Convolutions. Multimedia Tools and Applications, October 2023.
@article{23.fisheye,
  title = {Fully Residual Unet-based Semantic Segmentation of Automotive Fisheye Images: a Comparison of Rectangular and Deformable Convolutions},
  journal = {Multimedia Tools and Applications},
  year = {2023},
  author = {Rosana {El Jurdi} and Ahmed Rida Sekkat and Yohan Dupuis and Pascal Vasseur and Paul Honeine},
  month = oct,
  doi = {10.1007/s11042-023-16627-9},
  url_link = {https://normandie-univ.hal.science/hal-04231805},
  url_paper = {https://normandie-univ.hal.science/hal-04231805/file/Final_version_fisheye.pdf},
  keywords = {Fisheye image segmentation, Multi-view data augmentation, Deformable convolutions, Deep convolutional neural networks},
  abstract = {Semantic image segmentation is an essential task for autonomous vehicles and self-driving cars, where a complete and real-time perception of the surroundings is mandatory. Convolutional Neural Network approaches for semantic segmentation stand out over other state-of-the-art solutions due to their powerful generalization ability over unknown data and end-to-end training. Fisheye images are important due to their large field of view and ability to reveal information from broader surroundings. Nevertheless, they pose unique challenges for CNNs, due to object distortion resulting from the Fisheye lens and object position. In addition, the large annotated Fisheye datasets required for CNN training are rather limited. In this paper, we investigate the use of Deformable convolutions to accommodate distortions within Fisheye image segmentation for the fully residual U-net, by learning unknown geometric transformations via variable shaped and sized filters. The proposed models and integration strategies are exploited within two main paradigms: single (front)-view and multi-view Fisheye image segmentation. The validation of the proposed methods is conducted on synthetic and real Fisheye images from the WoodScape and SynWoodScape datasets. The results validate the significance of the Deformable fully residual U-Net structure in learning unknown geometric distortions in both paradigms, demonstrate the possibility of learning view-agnostic distortion properties when trained on the multi-view data, and shed light on the role of surround-view images in increasing segmentation performance relative to the single view. Finally, our experiments suggest that Deformable convolutions are a powerful tool that can increase the efficiency of fully residual U-Nets for semantic segmentation of automotive fisheye images.},
}
Daou, A.; Pothin, J.; Honeine, P.; and Bensrhair, A. Indoor Scene Recognition Mechanism Based on Direction-Driven Convolutional Neural Networks. Sensors, 23(12): 5672. 2023.
@article{23.loc_indoor,
  author = {Andrea Daou and Jean-Baptiste Pothin and Paul Honeine and Abdelaziz Bensrhair},
  title = {Indoor Scene Recognition Mechanism Based on Direction-Driven Convolutional Neural Networks},
  journal = {Sensors},
  year = {2023},
  volume = {23},
  number = {12},
  article-number = {5672},
  issn = {1424-8220},
  url_link = {https://www.mdpi.com/1424-8220/23/12/5672},
  url_paper = {https://www.mdpi.com/1424-8220/23/12/5672/pdf},
  doi = {10.3390/s23125672},
  abstract = {Indoor location-based services constitute an important part of our daily lives, providing position and direction information about people or objects in indoor spaces. These systems can be useful in security and monitoring applications that target specific areas such as rooms. Vision-based scene recognition is the task of accurately identifying a room category from a given image. Despite years of research in this field, scene recognition remains an open problem due to the different and complex places in the real world. Indoor environments are relatively complicated because of layout variability, object and decoration complexity, and multiscale and viewpoint changes. In this paper, we propose a room-level indoor localization system based on deep learning and built-in smartphone sensors, combining visual information with the smartphone's magnetic heading. The user can be room-level localized while simply capturing an image with a smartphone. The presented indoor scene recognition system is based on direction-driven convolutional neural networks (CNNs) and therefore contains multiple CNNs, each tailored for a particular range of indoor orientations. We present particular weighted fusion strategies that improve system performance by properly combining the outputs from different CNN models. To meet users' needs and overcome smartphone limitations, we propose a hybrid computing strategy based on mobile computation offloading compatible with the proposed system architecture. The implementation of the scene recognition system is split between the user's smartphone and a server, which aids in meeting the computational requirements of CNNs. Several experimental analyses were conducted, including assessments of performance and stability. The results obtained on a real dataset show the relevance of the proposed approach for localization, as well as the interest of model partitioning in hybrid mobile computation offloading. Our extensive evaluation demonstrates an increase in accuracy compared to traditional CNN scene recognition, indicating the effectiveness and robustness of our approach.},
}
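The weighted fusion of direction-specific CNN outputs that this entry mentions can be sketched as follows. The heading-proximity weighting used here is an illustrative assumption (the paper evaluates several fusion strategies), and the model orientations and class probabilities are made up.

```python
import numpy as np

def fuse(probs_per_model, model_headings, query_heading):
    # Weight each direction-specific model by the angular proximity of its
    # orientation to the smartphone's magnetic heading, then average the
    # class-probability vectors with those weights (illustrative scheme).
    diffs = np.array([abs((h - query_heading + 180) % 360 - 180)
                      for h in model_headings], dtype=float)
    w = np.exp(-diffs / 45.0)               # closer heading -> larger weight
    w /= w.sum()
    fused = sum(wi * p for wi, p in zip(w, probs_per_model))
    return fused / fused.sum()

# Three hypothetical models centred at 0, 120 and 240 degrees; 4 room classes.
p0 = np.array([0.7, 0.1, 0.1, 0.1])
p1 = np.array([0.1, 0.7, 0.1, 0.1])
p2 = np.array([0.1, 0.1, 0.7, 0.1])
fused = fuse([p0, p1, p2], [0, 120, 240], query_heading=10)
```

With a heading of 10 degrees, the model centred at 0 degrees dominates the fused prediction, which is the intended behaviour of direction-driven fusion.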
Dhaini, M.; Berar, M.; Honeine, P.; and Van Exem, A. Unsupervised Domain Adaptation for Regression Using Dictionary Learning. Knowledge-Based Systems, 267: 110439. 2023.
@article{23.DomainAdapt,
  author = {Mohamad Dhaini and Maxime Berar and Paul Honeine and Antonin Van Exem},
  title = {Unsupervised Domain Adaptation for Regression Using Dictionary Learning},
  journal = {Knowledge-Based Systems},
  year = {2023},
  volume = {267},
  pages = {110439},
  issn = {0950-7051},
  url_link = {https://www.sciencedirect.com/science/article/pii/S0950705123001892},
  doi = {10.1016/j.knosys.2023.110439},
  keywords = {Domain Adaptation, Transfer Learning, Regression, Deep Learning, Dictionary Learning, Sparse Coding},
  abstract = {Unsupervised domain adaptation aims to generalize the knowledge learned on a labeled source domain across an unlabeled target domain. Most existing unsupervised approaches are feature-based methods that seek to find domain-invariant features. Despite their wide applications, these approaches have proved to have some limitations, especially in regression tasks. In this paper, we study the problem of unsupervised domain adaptation for regression tasks. We highlight the obstacles faced in regression compared to classification in terms of sensitivity to the scattering of data in feature space. We address this issue and propose a new unsupervised domain adaptation approach based on dictionary learning. We seek to learn a dictionary on source data and follow an optimal direction trajectory to minimize the residue of the reconstruction of the target data with the same dictionary. For stable training of a neural network, we provide a robust implementation of a projected gradient descent dictionary learning framework, which allows for a backpropagation-friendly, end-to-end method. Experimental results show that the proposed method significantly outperforms most state-of-the-art methods on several well-known benchmark datasets, especially when transferring knowledge from synthetic to real domains.},
}
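The reconstruction-residue signal at the heart of this entry can be sketched with synthetic data. This is only a sketch: the paper learns the dictionary by projected gradient descent on neural-network features, whereas here a truncated SVD of synthetic source features plays the dictionary's role.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical source/target features: the source lies near a rank-3
# subspace of a 16-dimensional feature space; the target is a mild shift.
source = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 16))
target = source + 0.05 * rng.standard_normal(source.shape)

# Stand-in dictionary: the top principal directions of the source features
# (a surrogate for the learned dictionary, keeping the same role).
_, _, vt = np.linalg.svd(source, full_matrices=False)
D = vt[:3]                                  # 3 atoms x 16 dims, orthonormal rows

def residue(x, D):
    # Mean reconstruction residue of rows of x on the span of D; since D has
    # orthonormal rows, the least-squares codes are simply x @ D.T.
    codes = x @ D.T
    return np.linalg.norm(x - codes @ D, axis=1).mean()

# Target data close to the source subspace reconstructs well; data from an
# unrelated domain does not. Minimizing this residue is the adaptation signal.
r_target = residue(target, D)
r_random = residue(rng.standard_normal(target.shape), D)
```

In the paper this residue is driven down during training so that target features become well represented by the source-learned dictionary.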
Jia, L.; Ning, X.; Gaüzère, B.; Honeine, P.; and Riesen, K. Bridging Distinct Spaces in Graph-based Machine Learning. In Blumenstein, M.; Lu, H.; Yang, W.; and Cho, S., editors, Proceedings of the 7th Asian Conference on Pattern Recognition (ACPR), Kitakyushu, Japan, 5–8 November 2023.
@INPROCEEDINGS{23.acpr,
  author = {Linlin Jia and Xiao Ning and Benoît Gaüzère and Paul Honeine and Kaspar Riesen},
  title = {Bridging Distinct Spaces in Graph-based Machine Learning},
  booktitle = {Proceedings of the 7th Asian Conference on Pattern Recognition (ACPR)},
  editor = {Michael Blumenstein and Huimin Lu and Wankou Yang and Sung-Bae Cho},
  address = {Kitakyushu, Japan},
  year = {2023},
  month = "5 - 8~" # nov,
  acronym = {ACPR},
}
Feray, C.; Jacquemoud, S.; Honeine, P.; and Van Exem, A. Hyperspectral characterization of soil matrix effects by coupling physical models and machine learning methods. Poster at the 13th IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Athens, Greece, 31 October–2 November 2023.
@misc{23.whispers,
  title = {Hyperspectral characterization of soil matrix effects by coupling physical models and machine learning methods},
  author = {Corentin Feray and St\'ephane Jacquemoud and Paul Honeine and Antonin Van Exem},
  howpublished = {Poster at the 13th IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Athens, Greece},
  year = {2023},
  month = "31~" # oct # "--" # "2~" # nov,
}
Glédel, C.; Gaüzère, B.; and Honeine, P. Graph Normalizing Flows to Pre-image Free Machine Learning for Regression. In 13th IAPR-TC15 International Workshop on Graph-Based Representations in Pattern Recognition (GbR), volume 14121, Vietri sul Mare, Salerno, Italy, 6–8 September 2023.
@INPROCEEDINGS{23.GbR,
  author = {Clément Glédel and Benoît Gaüzère and Paul Honeine},
  title = {Graph Normalizing Flows to Pre-image Free Machine Learning for Regression},
  booktitle = {13th IAPR-TC15 International Workshop on Graph-Based Representations in Pattern Recognition},
  address = {Vietri sul Mare, Salerno, Italy},
  year = {2023},
  volume = {14121},
  month = "6 - 8~" # sep,
  keywords = {Graph Normalizing Flow, Pre-image problem, Regression, Interpretability, Nonlinear embedding},
  acronym = {GbR},
  url_link = {https://link.springer.com/chapter/10.1007/978-3-031-42795-4_9},
  url_paper = {https://hal.science/hal-04189301},
  abstract = {In Machine Learning, data embedding is a fundamental aspect of creating nonlinear models. However, they often lack interpretability due to the limited access to the embedding space, also called latent space. As a result, it is highly desirable to represent, in the input space, elements from the embedding space. Nevertheless, obtaining the inverse embedding is a challenging task, and it involves solving the hard pre-image problem. This task becomes even more challenging when dealing with structured data like graphs, which are complex and discrete by nature. This article presents a novel approach for graph regression using Normalizing Flows (NFs), in order to avoid the pre-image problem. By creating a latent representation space using a NF, the method overcomes the difficulty of finding an inverse transformation. The approach aims at supervising the space generation process in order to create a space suitable for the specific regression task. Furthermore, any result obtained in the generated space can be translated into the input space through the application of the inverse transformation learned by the model. The effectiveness of our approach is demonstrated by using a NF model on different regression problems. We validate the ability of the method to efficiently handle both the pre-image generation and the regression task.},
}
\n
\n\n\n
\n In Machine Learning, data embedding is a fundamental aspect of creating nonlinear models. However, they often lack interpretability due to the limited access to the embedding space, also called latent space. As a result, it is highly desirable to represent, in the input space, elements from the embedding space. Nevertheless, obtaining the inverse embedding is a challenging task, and it involves solving the hard pre-image problem. This task becomes even more challenging when dealing with structured data like graphs, which are complex and discrete by nature. This article presents a novel approach for graph regression using Normalizing Flows (NFs), in order to avoid the pre-image problem. By creating a latent representation space using a NF, the method overcomes the difficulty of finding an inverse transformation. The approach aims at supervising the space generation process in order to create a space suitable for the specific regression task. Furthermore, any result obtained in the generated space can be translated into the input space through the application of the inverse transformation learned by the model. The effectiveness of our approach is demonstrated by using a NF model on different regression problems. We validate the ability of the method to efficiently handle both the pre-image generation and the regression task.\n
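As a rough illustration of the idea in this abstract, that an invertible embedding makes the pre-image available in closed form, here is a toy affine coupling step in pure Python. All names and constants are illustrative stand-ins, not the paper's model:

```python
# A normalizing flow composes invertible steps; inverting the flow recovers
# the input-space pre-image exactly, with no optimization problem to solve.
def forward(x1, x2, scale=2.0, shift=0.5):
    # x1 passes through unchanged; x2 is transformed conditioned on x1
    return x1, x2 * scale + shift + x1

def inverse(z1, z2, scale=2.0, shift=0.5):
    # exact closed-form inverse of the coupling step above
    return z1, (z2 - shift - z1) / scale

x = (0.25, -1.5)
z = forward(*x)
print(inverse(*z) == x)  # True: the pre-image is recovered exactly
```

Any result computed in the latent space (here, `z`) can thus be mapped back to the observation space by construction, which is the property the paper exploits for graph regression.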
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Apprentissage contrastif pour l'adaptation de domaine en régression.\n \n \n \n \n\n\n \n Dhaini, M.; Berar, M.; Honeine, P.; and Exem, A. V.\n\n\n \n\n\n\n In Actes du 29-ème Colloque GRETSI sur le Traitement du Signal et des Images, Grenoble, France, 28 August–1 September 2023. \n \n\n\n\n
\n\n\n\n \n \n \"Apprentissage link\n  \n \n \n \"Apprentissage paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@INPROCEEDINGS{23.gretsi.adaptation,\n   author =  "Mohamad Dhaini and Maxime Berar and Paul Honeine and Antonin Van Exem",\n   title =  "Apprentissage contrastif pour l'adaptation de domaine en régression",\n   booktitle =  "Actes du 29-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Grenoble, France",\n   year  =  "2023",\n   month =  "28~" # aug # "--" # "1~" # sep,\n   acronym =  "GRETSI'23",\n   url_link = "https://normandie-univ.hal.science/hal-04194190",\n   url_paper  =  "https://normandie-univ.hal.science/hal-04194190/file/23.gretsi.adaptation.pdf",\n   abstract = "L'adaptation de domaine non supervisée relève le défi d'utiliser des modèles d'apprentissage statistique sur des données de distribution différente de celle des données d'entraînement. Cela impose d'apprendre des représentations efficaces qui peuvent être généralisées à travers les domaines. Dans cet article, nous étudions l'utilisation de l'apprentissage contrastif pour améliorer les approches d'adaptation de domaine. À cette fin, l'apprentissage contrastif est appliqué à l'espace latent d'un réseau de neurones, où l'objectif est d'apprendre une représentation qui maximise la similitude entre des exemples similaires et minimise la similitude entre des exemples dissemblables. En outre, pour minimiser l'écart entre les domaines source et cible, le procédé utilise l'apprentissage par dictionnaire, où les dictionnaires sont extraits à la fois des données source et cible et la trajectoire entre les deux dictionnaires est minimisée. La méthode proposée est évaluée sur le jeu de données dSprites, montrant de meilleures performances que les méthodes de l'état de l'art.",\n}\n\n
\n
\n\n\n
\n Unsupervised domain adaptation addresses the challenge of applying statistical learning models to data whose distribution differs from that of the training data. This requires learning effective representations that generalize across domains. In this paper, we study the use of contrastive learning to improve domain adaptation approaches. To this end, contrastive learning is applied to the latent space of a neural network, where the objective is to learn a representation that maximizes the similarity between similar examples and minimizes the similarity between dissimilar ones. Furthermore, to reduce the gap between the source and target domains, the method uses dictionary learning, where dictionaries are extracted from both the source and target data and the trajectory between the two dictionaries is minimized. The proposed method is evaluated on the dSprites dataset, showing better performance than state-of-the-art methods.\n
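The pairwise contrastive objective this abstract describes, pulling similar pairs together in the latent space and pushing dissimilar pairs apart, can be sketched with a classic margin-based loss. This particular form is chosen for illustration only and is not claimed to be the paper's exact loss:

```python
# Margin-based contrastive loss for a single pair of latent representations.
def contrastive_loss(distance, similar, margin=1.0):
    """Loss for one pair, given its distance in the latent space."""
    if similar:
        return distance ** 2                    # attract similar examples
    return max(0.0, margin - distance) ** 2     # repel dissimilar ones

# A close dissimilar pair is penalized; a sufficiently distant one is not.
print(contrastive_loss(0.5, False))  # 0.25
print(contrastive_loss(1.5, False))  # 0.0
```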
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interpolation spatiale avec un réseau de neurones génératif comme alternative au krigeage.\n \n \n \n \n\n\n \n Rakotonirina, H.; Honeine, P.; Atteia, O.; and Exem, A. V.\n\n\n \n\n\n\n In Actes du 29-ème Colloque GRETSI sur le Traitement du Signal et des Images, Grenoble, France, 28 August–1 September 2023. \n \n\n\n\n
\n\n\n\n \n \n \"Interpolation link\n  \n \n \n \"Interpolation paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@INPROCEEDINGS{23.gretsi.krigeage,\n   author =  "Herbert Rakotonirina and Paul Honeine and Olivier Atteia and Antonin Van Exem",\n   title =  "Interpolation spatiale avec un réseau de neurones génératif comme alternative au krigeage",\n   booktitle =  "Actes du 29-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Grenoble, France",\n   year  =  "2023",\n   month =  "28~" # aug # "--" # "1~" # sep,\n   acronym =  "GRETSI'23",\n   url_link  =  "https://normandie-univ.hal.science/hal-04194187",\n   url_paper  =  "https://normandie-univ.hal.science/hal-04194187/file/23.gretsi.krigeage.pdf",\n   abstract = "En géosciences, les méthodes d’interpolation spatiale peuvent être divisées en géostatistiques, non-géostatistiques ou hybrides. Le krigeage est une méthode couramment utilisée en géostatistique, sous l’hypothèse d’une distribution normale des données. De plus, il peut être très gourmand en ressources lorsqu’il est utilisé pour réaliser une interpolation avec un volume de données conséquent. Les méthodes non-géostatistiques ont bénéficié des avancées récentes des Réseaux Antagonistes Génératifs (GAN), mais elles exigent une quantité importante de données étiquetées pour produire des résultats performants. Les approches hybrides sont limitées de part leurs dépendances aux contraintes associées aux approches géostatistiques. Dans cet article, nous proposons une nouvelle méthode d’interpolation spatiale non-géostatistique par apprentissage profond, en se basant sur une technique de reconstruction d’image sans entraînement au préalable, permettant ainsi de surmonter les limites des GAN. Notre méthode utilise des connexions résiduelles et un sur-échantillonnage bi-cubique dans le but d’adapter la technique de reconstruction d’image à notre application. 
Elle s’appuie sur un réseau de neurones convolutifs pour produire une carte à partir d’une carte de valeurs aléatoires, en réduisant la différence entre la carte générée et les valeurs observées. L’approche proposée est évaluée sur un jeu de données de modèle numérique de terrain selon deux méthodes d’échantillonnage différentes : régulière et aléatoire. Les résultats montrent des performances supérieures par rapport à l’état de l’art des méthodes l’interpolation.",\n}\n\n
\n
\n\n\n
\n In geosciences, spatial interpolation methods can be divided into geostatistical, non-geostatistical, and hybrid approaches. Kriging is a commonly used geostatistical method, under the assumption of normally distributed data. Moreover, it can be very resource-intensive when used to interpolate a large volume of data. Non-geostatistical methods have benefited from recent advances in Generative Adversarial Networks (GANs), but they require a large amount of labeled data to produce good results. Hybrid approaches are limited by their dependence on the constraints associated with geostatistical approaches. In this paper, we propose a new non-geostatistical spatial interpolation method based on deep learning, building on an image reconstruction technique that requires no prior training, thereby overcoming the limitations of GANs. Our method uses residual connections and bicubic upsampling in order to adapt the image reconstruction technique to our application. It relies on a convolutional neural network to produce a map from a map of random values, by reducing the difference between the generated map and the observed values. The proposed approach is evaluated on a digital elevation model dataset with two different sampling schemes, regular and random. The results show superior performance compared to state-of-the-art interpolation methods.\n
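The core of the objective described in this abstract is that the generator produces a full map from random inputs, while only the pixels with field observations contribute to the loss. A minimal masked-MSE sketch in pure Python (names and values are illustrative, and the CNN itself is omitted):

```python
# Mean squared error restricted to observed cells: the loss the abstract
# describes compares the generated map with the observations only where
# samples exist, leaving the network free to interpolate elsewhere.
def masked_mse(generated, observed, mask):
    """MSE over cells where mask is truthy (i.e. sampled locations)."""
    pairs = [(g, o) for g, o, m in zip(generated, observed, mask) if m]
    return sum((g - o) ** 2 for g, o in pairs) / len(pairs)

gen  = [1.0, 2.0, 3.0, 4.0]    # flattened generated map
obs  = [1.5, 0.0, 2.0, 0.0]    # observations (zeros where unsampled)
mask = [1, 0, 1, 0]            # 1 = sampled location
print(masked_mse(gen, obs, mask))  # 0.625
```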
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Normalizing Flows pour éviter le problème de pré-image.\n \n \n \n \n\n\n \n Glédel, C.; Gaüzère, B.; and Honeine, P.\n\n\n \n\n\n\n In Actes du 29-ème Colloque GRETSI sur le Traitement du Signal et des Images, Grenoble, France, 28 August–1 September 2023. \n \n\n\n\n
\n\n\n\n \n \n \"Normalizing link\n  \n \n \n \"Normalizing paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@INPROCEEDINGS{23.gretsi_nf,\n   author =  "Clément Glédel and Benoît Gaüzère and Paul Honeine",\n   title =  "Normalizing Flows pour éviter le problème de pré-image",\n   booktitle =  "Actes du 29-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Grenoble, France",\n   year  =  "2023",\n   month =  "28~" # aug # "--" # "1~" # sep,\n   acronym =  "GRETSI'23",\n    url_link = "https://hal.science/hal-04189355",\n    url_paper  = "https://hal.science/hal-04189355/file/GRETSI.pdf",\n   abstract = "Les méthodes d’apprentissage statistique génèrent souvent un espace de représentation dans lequel la tâche de classification ou de régression peut être effectuée efficacement. Alors qu’une telle transformation des données est en général non-inversible, il peut être intéressant de faire le chemin inverse afin d’interpréter les résultats dans l’espace des observations. Ce problème, dit de pré-image, est un problème difficile, mal posé, notamment quand les transformations sont implicites (méthodes à noyaux) ou non linéaires multiples (réseaux de neurones profonds). Dans cet article, nous proposons une méthode de classification ne souffrant pas de ce problème permettant ainsi une meilleure interprétabilité des représentations. La méthode proposée repose sur une nouvelle famille de méthodes génératives que sont les Normalizing Flows. Les expérimentations montrent de bons résultats de classification et d’interprétabilité.",\n}\n\n\n\n
\n
\n\n\n
\n Statistical learning methods often generate a representation space in which the classification or regression task can be performed effectively. While such a transformation of the data is generally non-invertible, it can be valuable to go the other way in order to interpret the results in the observation space. This so-called pre-image problem is a difficult, ill-posed problem, especially when the transformations are implicit (kernel methods) or composed of multiple nonlinearities (deep neural networks). In this paper, we propose a classification method that does not suffer from this problem, thus allowing better interpretability of the representations. The proposed method relies on a recent family of generative methods, namely Normalizing Flows. Experiments show good classification and interpretability results.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n High-resolution characterization of total hydrocarbons by infrared hyperspectral imaging in an alluvial soil.\n \n \n \n\n\n \n Exem, A. V.; Kassem, P.; Honeine, P.; and Mignot, M.\n\n\n \n\n\n\n In NICOLE Fall Workshop 2023 (Innovative solutions for sustainable redevelopment and land stewardship of contaminated sites and sediments), Malmö, Sweden, 24 - 25 October 2023. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@INPROCEEDINGS{23.nicole,\n  title={High-resolution characterization of total hydrocarbons by infrared hyperspectral imaging in an alluvial soil},\n  author="Antonin Van Exem and Philippe Kassem and Paul Honeine and Mélanie Mignot",\n  booktitle={NICOLE Fall Workshop 2023 (Innovative solutions for sustainable redevelopment and land stewardship of contaminated sites and sediments)},\n   address =  "Malmö, Sweden",\n   year =  "2023",\n   month =  "24 - 25~" # oct,\n   abstract = "The NICOLE Fall Workshop 2023, co-organized with  Nätverket Renare Mark in Malmö, Sweden, brings together key players involved in innovative projects for the remediation and sustainable management of polluted sites and soils, in various sectors including real estate, transport infrastructure and mining.\nOur work presented in this workshop describes the case of an R\\&D project carried out in partnership between Tellux and TotalEnergies.\nTotalEnergies opted for a full-scale on-site test of the HyperScan by Tellux to characterize the physical and chemical parameters of the soil. The Tellux teams mobilized their expertise, deploying their tool to scan several dozen linear meters of cores under the hyperspectral camera. The results presented by Tellux opened up new insights for understanding the characterized site. The TotalEnergies and Tellux teams maintain a dynamic collaboration to create a tailor-made tool, perfectly aligned with the needs and expectations of industry and their environmental partners.",\n}\n\n\n
\n
\n\n\n
\n The NICOLE Fall Workshop 2023, co-organized with Nätverket Renare Mark in Malmö, Sweden, brings together key players involved in innovative projects for the remediation and sustainable management of polluted sites and soils, in various sectors including real estate, transport infrastructure and mining. Our work presented in this workshop describes the case of an R&D project carried out in partnership between Tellux and TotalEnergies. TotalEnergies opted for a full-scale on-site test of the HyperScan by Tellux to characterize the physical and chemical parameters of the soil. The Tellux teams mobilized their expertise, deploying their tool to scan several dozen linear meters of cores under the hyperspectral camera. The results presented by Tellux opened up new insights for understanding the characterized site. The TotalEnergies and Tellux teams maintain a dynamic collaboration to create a tailor-made tool, perfectly aligned with the needs and expectations of industry and their environmental partners.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Caractérisation hyperspectrale des effets de matrice de sol par couplage de modèles physiques et de méthodes d'apprentissage automatique.\n \n \n \n\n\n \n Feray, C.; Jacquemoud, S.; Honeine, P.; and Exem, A. V.\n\n\n \n\n\n\n In 8ème colloque scientifique du Groupe Hyperspectral de la Société Française de Photogrammétrie et de Télédétection, Paris, France, 5 - 6 July 2023. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@INPROCEEDINGS{23.sfpt,\n  title={Caractérisation hyperspectrale des effets de matrice de sol par couplage de modèles physiques et de méthodes d'apprentissage automatique},\n  author="Corentin Feray and St\\'ephane Jacquemoud and Paul Honeine and Antonin Van Exem",\n  booktitle={8ème colloque scientifique du Groupe Hyperspectral de la Société Française de Photogrammétrie et de Télédétection},\n   address =  "Paris, France",\n   year =  "2023",\n   month =  "5 - 6~" # jul,\n   abstract = "Ce travail vise à améliorer les méthodes de traitement d’images hyperspectrales pour la détection de polluants dans un sol. Ces méthodes permettent d'interpréter le signal radiométrique mesuré sur chaque pixel de l’image en termes de propriétés physico- chimiques du sol observé. Du fait de la résolution spatiale limitée des caméras hyperspectrales, le spectre de réflectance correspondant à un pixel est un mélange de spectres supposés purs de différents matériaux appelés endmembers. En pratique, ces spectres ne le sont pas en raison de la variabilité spectrale intrinsèque des matériaux, des effets de matrice tels que la teneur en eau ou la granulométrie, et des paramètres environnementaux de la scène observée.\nPour traiter les images hyperspectrales, on distingue deux grandes familles de modèles : ceux basés sur la physique et ceux basés sur les données (dits data-driven). Les modèles physiques permettent de comprendre la variabilité spectrale des données mais nécessitent de connaître les propriétés optiques intrinsèques des constituants des sols qui sont généralement inconnues. 
Les modèles basés sur les données cherchent des corrélations entre les spectres de réflectance et les caractéristiques physico-chimiques des sols mais, leur côté “boîte noire” les rend souvent difficilement interprétables et peu généralisables.\nEn combinant les avantages des modèles physiques et des modèles basés sur les données, cette thèse a pour objectif d’améliorer l’interprétabilité, la généralisation et la robustesse des méthodes de traitement d’image hyperspectrales aux différentes sources de variabilité spectrale. L’accent sera mis sur les effets de matrices liés à la teneur en eau et à la granulométrie des sols, qui sont parmi les sources de variabilité les plus importantes et les plus contraignantes dans l’analyse de la pollution des sols.",\n}\n\n\n
\n
\n\n\n
\n This work aims to improve hyperspectral image processing methods for detecting pollutants in soil. These methods interpret the radiometric signal measured at each pixel of the image in terms of the physico-chemical properties of the observed soil. Owing to the limited spatial resolution of hyperspectral cameras, the reflectance spectrum corresponding to a pixel is a mixture of supposedly pure spectra of different materials called endmembers. In practice, these spectra are not pure, because of the intrinsic spectral variability of the materials, matrix effects such as water content or grain size, and the environmental parameters of the observed scene. Two main families of models are used to process hyperspectral images: physics-based models and data-driven models. Physical models make it possible to understand the spectral variability of the data but require knowledge of the intrinsic optical properties of the soil constituents, which are generally unknown. Data-driven models look for correlations between reflectance spectra and the physico-chemical characteristics of soils, but their "black box" nature often makes them hard to interpret and poorly generalizable. By combining the advantages of physical models and data-driven models, this thesis aims to improve the interpretability, generalization, and robustness of hyperspectral image processing methods with respect to the various sources of spectral variability. The focus will be on matrix effects related to soil water content and grain size, which are among the most important and most constraining sources of variability in soil pollution analysis.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2022\n \n \n (11)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n A Study On The Stability of Graph Edit Distance Heuristics.\n \n \n \n \n\n\n \n Jia, L.; Tognetti, V.; Joubert, L.; Gaüzère, B.; and Honeine, P.\n\n\n \n\n\n\n Electronics, 11(20). 2022.\n \n\n\n\n
\n\n\n\n \n \n \"A link\n  \n \n \n \"A paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{22.ged,\n   author =  " Linlin Jia and Vincent Tognetti and Laurent Joubert and Benoît Gaüzère and Paul Honeine",\n   title =  {A Study On The Stability of Graph Edit Distance Heuristics},\n   journal = "Electronics",\n   year  =  "2022",\n   volume = "11",\n   number = {20},\n   article-number = {3312},\n   issn = {2079-9292},\n   url_link = "https://www.mdpi.com/2079-9292/11/20/3312",\n   url_paper   =  "https://www.mdpi.com/2079-9292/11/20/3312/pdf?version=1665746650",\n   doi = {10.3390/electronics11203312},\n   keywords  =  "Graph edit distance, stability analysis, heuristic methods, edit cost learning",\n   abstract = "Graph edit distance (GED) is a powerful tool to model the dissimilarity between graphs. However, evaluating the exact GED is NP-hard. To tackle this problem, estimation methods of GED were introduced, e.g., bipartite and IPFP, during which heuristics were employed. The stochastic nature of these methods induces the stability issue. In this paper, we propose the first formal study of stability of GED heuristics, starting with defining a measure of these (in)stabilities, namely the relative error. Then, the effects of two critical factors on stability are examined, namely, the number of solutions and the ratio between edit costs. The ratios are computed on five datasets of various properties. General suggestions are provided to properly choose these factors, which can reduce the relative error by more than an order of magnitude. Finally, we verify the relevance of stability to predict performance of GED heuristics, by taking advantage of an edit cost learning algorithm to optimize the performance and the k-nearest neighbor regression for prediction. Experiments show that the optimized costs correspond to much higher ratios and an order of magnitude lower relative errors than the expert cost."\n}\n\n\n
\n
\n\n\n
\n Graph edit distance (GED) is a powerful tool to model the dissimilarity between graphs. However, evaluating the exact GED is NP-hard. To tackle this problem, estimation methods of GED were introduced, e.g., bipartite and IPFP, during which heuristics were employed. The stochastic nature of these methods induces the stability issue. In this paper, we propose the first formal study of stability of GED heuristics, starting with defining a measure of these (in)stabilities, namely the relative error. Then, the effects of two critical factors on stability are examined, namely, the number of solutions and the ratio between edit costs. The ratios are computed on five datasets of various properties. General suggestions are provided to properly choose these factors, which can reduce the relative error by more than an order of magnitude. Finally, we verify the relevance of stability to predict performance of GED heuristics, by taking advantage of an edit cost learning algorithm to optimize the performance and the k-nearest neighbor regression for prediction. Experiments show that the optimized costs correspond to much higher ratios and an order of magnitude lower relative errors than the expert cost.\n
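The stability measure discussed in this abstract can be sketched as follows: a stochastic GED heuristic returns a different upper bound on each run, and a natural relative-error measure compares each estimate with the best (lowest) one found. The values and function name below are illustrative, not taken from the paper:

```python
# Relative error of repeated heuristic GED estimates w.r.t. the best one.
def relative_errors(estimates):
    """Relative error of each estimate against the minimum found."""
    best = min(estimates)  # heuristics over-estimate the exact GED
    return [(e - best) / best for e in estimates]

runs = [12.0, 12.0, 13.5, 15.0]   # hypothetical bounds from repeated runs
print(max(relative_errors(runs)))  # 0.25 : worst-case instability
```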
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n End-to-end Convolutional Autoencoder for Nonlinear Hyperspectral Unmixing.\n \n \n \n \n\n\n \n Dhaini, M.; Berar, M.; Honeine, P.; and Exem, A. V.\n\n\n \n\n\n\n Remote Sensing, 14(14). July 2022.\n \n\n\n\n
\n\n\n\n \n \n \"End-to-end link\n  \n \n \n \"End-to-end paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{22.hyperspectralAE,\n   author =  " Mohamad Dhaini and Maxime Berar and Paul Honeine and Antonin Van Exem",\n   title =  "End-to-end Convolutional Autoencoder for Nonlinear Hyperspectral \nUnmixing",\n   journal =  "Remote Sensing",\n   year  =  "2022",\n   month =  jul,\n   doi = {10.3390/rs14143341},\n   issn = {2072-4292},\n   volume = {14},\n   number = {14},\n   article-number = {3341},\n   url_link = {https://www.mdpi.com/2072-4292/14/14/3341/htm},\n   url_paper  =  "https://www.mdpi.com/2072-4292/14/14/3341/pdf",\n   keywords  =  "Convolutional neural network, autoencoder, hyperspectral imaging, nonlinear spectral unmixing",\n   abstract = "Hyperspectral Unmixing is the process of decomposing a mixed pixel into its pure materials (endmembers) and estimating their corresponding proportions (abundances). Although linear unmixing models are more common due to their simplicity and flexibility, they suffer from many limitations in real world scenes where interactions between pure materials exist which paved the way for nonlinear methods to emerge. However, existing methods for nonlinear unmixing require prior knowledge or an assumption about the type of nonlinearity which can affect the results. This paper introduces a nonlinear method with a novel deep convolutional autoencoder for blind unmixing. The proposed framework consists of a deep encoder of successive small size convolutional filters along with max pooling layers, and a decoder composed of successive 2-D and 1-D convolutional filters. The output of the decoder is formed of a linear part and an additive non linear one. The network is trained using the mean squared error loss function. Several experiments were conducted to evaluate the performance of the proposed method using synthetic and real airborne data. Results show a better performance in terms of abundance and endmembers estimation compared to several existing methods.",\n}\n\n\n
\n
\n\n\n
\n Hyperspectral Unmixing is the process of decomposing a mixed pixel into its pure materials (endmembers) and estimating their corresponding proportions (abundances). Although linear unmixing models are more common due to their simplicity and flexibility, they suffer from many limitations in real world scenes where interactions between pure materials exist which paved the way for nonlinear methods to emerge. However, existing methods for nonlinear unmixing require prior knowledge or an assumption about the type of nonlinearity which can affect the results. This paper introduces a nonlinear method with a novel deep convolutional autoencoder for blind unmixing. The proposed framework consists of a deep encoder of successive small size convolutional filters along with max pooling layers, and a decoder composed of successive 2-D and 1-D convolutional filters. The output of the decoder is formed of a linear part and an additive non linear one. The network is trained using the mean squared error loss function. Several experiments were conducted to evaluate the performance of the proposed method using synthetic and real airborne data. Results show a better performance in terms of abundance and endmembers estimation compared to several existing methods.\n
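The "linear part plus additive nonlinear part" decomposition mentioned in this abstract can be illustrated with a toy mixing model. The bilinear interaction term below is one common choice of nonlinearity in the unmixing literature, used here purely as an assumption; it is not necessarily what the network learns, and all endmember values are made up:

```python
# Observed spectrum = linear mixture of endmembers weighted by abundances,
# plus an additive nonlinear (here bilinear) interaction term.
def linear_mix(endmembers, abundances):
    # endmembers: one list of band values per pure material
    return [sum(a * e[b] for a, e in zip(abundances, endmembers))
            for b in range(len(endmembers[0]))]

def bilinear_term(endmembers, abundances, gamma=0.1):
    # pairwise products of endmember spectra model material interactions
    n_bands = len(endmembers[0])
    out = [0.0] * n_bands
    n = len(endmembers)
    for i in range(n):
        for j in range(i + 1, n):
            for b in range(n_bands):
                out[b] += (gamma * abundances[i] * abundances[j]
                           * endmembers[i][b] * endmembers[j][b])
    return out

E = [[0.2, 0.4, 0.6], [0.8, 0.5, 0.1]]   # two endmembers, three bands
a = [0.3, 0.7]                            # abundances sum to one
y = [l + nl for l, nl in zip(linear_mix(E, a), bilinear_term(E, a))]
print(y)  # ≈ [0.623, 0.474, 0.251]
```

A blind-unmixing autoencoder of the kind described would take `y` as input and recover estimates of `E` and `a` from its decoder weights and bottleneck activations.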
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving.\n \n \n \n \n\n\n \n Sekkat, A. R.; Dupuis, Y.; Kumar, V. R.; Rashed, H.; Yogamani, S.; Vasseur, P.; and Honeine, P.\n\n\n \n\n\n\n IEEE Robotics and Automation Letters, 7(3): 8502-8509. July 2022.\n \n\n\n\n
\n\n\n\n \n \n \"SynWoodScape: link\n  \n \n \n \"SynWoodScape: paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{22.SynWoodScape,\n   author =  "Ahmed Rida Sekkat and Yohan Dupuis and Varun Ravi Kumar and Hazem Rashed and Senthil Yogamani and Pascal Vasseur and Paul Honeine",\n   title =  "{SynWoodScape}: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving",\n   journal =  "IEEE Robotics and Automation Letters",\n   year  =  "2022",\n   month = jul,\n   volume = {7},\n   number = {3},\n   pages = "8502-8509",\n   doi = {10.1109/LRA.2022.3188106},\n   issn = {2377-3766},\n   url_link = {https://arxiv.org/abs/2203.05056},\n   url_paper = {https://arxiv.org/pdf/2203.05056.pdf},   \n   keywords  =  "Fisheye Cameras, Omni-directional Vision, Automated Driving, Synthetic Datasets",\n   abstract = "Surround-view cameras are a primary sensor for automated driving, used for near field perception. It is one of the most commonly used sensors in commercial vehicles. Four fisheye cameras with a 190° field of view cover the 360° around the vehicle. Due to its high radial distortion, the standard algorithms do not extend easily. Previously, we released the first public fisheye surround-view dataset named WoodScape. In this work, we release a synthetic version of the surround-view dataset, covering many of its weaknesses and extending it. Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth. Secondly, WoodScape did not have all four cameras simultaneously in order to sample diverse frames. However, this means that multi-camera algorithms cannot be designed, which is enabled in the new dataset. We implemented surround-view fisheye geometric projections in CARLA Simulator matching WoodScape's configuration and created SynWoodScape. We release 80k images from the synthetic dataset with annotations for 10+ tasks. We also release the baseline code and supporting scripts."\n}\n%   url_link = {...},\n%   url_code  =  "https://codeocean.com/capsule/9810141/tree/v1",\n%   pages = "",\n%   volume = "189",\n\n\n\n\n
\n
\n\n\n
\n Surround-view cameras are a primary sensor for automated driving, used for near field perception. It is one of the most commonly used sensors in commercial vehicles. Four fisheye cameras with a 190° field of view cover the 360° around the vehicle. Due to its high radial distortion, the standard algorithms do not extend easily. Previously, we released the first public fisheye surround-view dataset named WoodScape. In this work, we release a synthetic version of the surround-view dataset, covering many of its weaknesses and extending it. Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth. Secondly, WoodScape did not have all four cameras simultaneously in order to sample diverse frames. However, this means that multi-camera algorithms cannot be designed, which is enabled in the new dataset. We implemented surround-view fisheye geometric projections in CARLA Simulator matching WoodScape's configuration and created SynWoodScape. We release 80k images from the synthetic dataset with annotations for 10+ tasks. We also release the baseline code and supporting scripts.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Omnidirectional images and Semantic Segmentation : A comparative study from a motorcycle perspective.\n \n \n \n \n\n\n \n Sekkat, A. R.; Dupuis, Y.; Honeine, P.; and Vasseur, P.\n\n\n \n\n\n\n Scientific Reports, 12: 4968. March 2022.\n \n\n\n\n
\n\n\n\n \n \n \"Omnidirectional paper\n  \n \n \n \"Omnidirectional link\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{22.OmnidirectionalComparative,\n   author =  "Ahmed Rida Sekkat and Yohan Dupuis and Paul Honeine and Pascal Vasseur",\n   title =  "Omnidirectional images and Semantic Segmentation : A comparative study from a motorcycle perspective",\n   journal =  "Scientific Reports",\n   year  =  "2022",\n   month = mar,\n   issn = {2045-2322},\n   pages = "4968",\n   volume = "12",\n   issue = "1",   \n   doi = {10.1038/s41598-022-08466-9},\n   url_paper = {https://www.nature.com/articles/s41598-022-08466-9.pdf},\n   url_link = {https://www.nature.com/articles/s41598-022-08466-9},\n   keywords  =  "Fisheye Cameras, Omni-directional Vision, Automated Driving, Synthetic Datasets",\n}\n%   url_code  =  "https://codeocean.com/capsule/9810141/tree/v1",\n\n
\n
\n\n\n
\n Surround-view cameras are a primary sensor for automated driving, used for near field perception. It is one of the most commonly used sensors in commercial vehicles. Four fisheye cameras with a 190° field of view cover the 360° around the vehicle. Due to its high radial distortion, the standard algorithms do not extend easily. Previously, we released the first public fisheye surround-view dataset named WoodScape. In this work, we release a synthetic version of the surround-view dataset, covering many of its weaknesses and extending it. Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth. Secondly, WoodScape did not have all four cameras simultaneously in order to sample diverse frames. However, this means that multi-camera algorithms cannot be designed, which is enabled in the new dataset. We implemented surround-view fisheye geometric projections in CARLA Simulator matching WoodScape's configuration and created SynWoodScape. We release 80k images from the synthetic dataset with annotations for 10+ tasks. We also release the baseline code and supporting scripts.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Graph kernels based on linear patterns: theoretical and experimental comparisons.\n \n \n \n \n\n\n \n Jia, L.; Gaüzère, B.; and Honeine, P.\n\n\n \n\n\n\n Expert Systems With Applications, 189: 116095. March 2022.\n \n\n\n\n
\n\n\n\n \n \n \"Graph paper\n  \n \n \n \"Graph link\n  \n \n \n \"Graph code\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{22.graphkernels,\n   author =  "Linlin Jia and Benoît Gaüzère and Paul Honeine",\n   title =  {Graph kernels based on linear patterns: theoretical and experimental comparisons},\n   journal = "Expert Systems With Applications",\n   pages = "116095",\n   year  =  "2022",\n   issn = {0957-4174},\n   month = mar,\n   volume = "189",\n   doi = {10.1016/j.eswa.2021.116095},\n   url_paper   =  "https://normandie-univ.hal.science/hal-03410508",\n   url_link = {https://www.sciencedirect.com/science/article/pii/S0957417421014299},\n   url_code  =  "https://codeocean.com/capsule/9810141/tree/v1",\n   keywords  =  "Machine learning, Graph Kernels, Walks, Paths, Kernel methods, Graph representation, Linear Patterns, Python Implementation",\n   abstract = "Graph kernels are powerful tools to bridge the gap between machine learning and data encoded as graphs. Most graph kernels are based on the decomposition of graphs into a set of patterns. The similarity between two graphs is then deduced to the similarity between corresponding patterns. Kernels based on linear patterns constitute a good trade-off between accuracy and computational complexity. In this work, we propose a thorough investigation and comparison of graph kernels based on different linear patterns, namely walks and paths. First, all these kernels are explored in detail, including their mathematical foundations, structures of patterns and computational complexity. After that, experiments are performed on various benchmark datasets exhibiting different types of graphs, including labeled and unlabeled graphs, graphs with different numbers of vertices, graphs with different average vertex degrees, linear and non-linear graphs. Finally, for regression and classification tasks, accuracy and computational complexity of these kernels are compared and analyzed, in the light of baseline kernels based on non-linear patterns. Suggestions are proposed to choose kernels according to the types of graph datasets. 
This work leads to a clear comparison of strengths and weaknesses of these kernels. An open-source Python library containing an implementation of all discussed kernels is publicly available on GitHub to the community, thus allowing to promote and facilitate the use of graph kernels in machine learning problems."\n}\n\n
\n
\n\n\n
\n Graph kernels are powerful tools to bridge the gap between machine learning and data encoded as graphs. Most graph kernels are based on the decomposition of graphs into a set of patterns. The similarity between two graphs is then deduced to the similarity between corresponding patterns. Kernels based on linear patterns constitute a good trade-off between accuracy and computational complexity. In this work, we propose a thorough investigation and comparison of graph kernels based on different linear patterns, namely walks and paths. First, all these kernels are explored in detail, including their mathematical foundations, structures of patterns and computational complexity. After that, experiments are performed on various benchmark datasets exhibiting different types of graphs, including labeled and unlabeled graphs, graphs with different numbers of vertices, graphs with different average vertex degrees, linear and non-linear graphs. Finally, for regression and classification tasks, accuracy and computational complexity of these kernels are compared and analyzed, in the light of baseline kernels based on non-linear patterns. Suggestions are proposed to choose kernels according to the types of graph datasets. This work leads to a clear comparison of strengths and weaknesses of these kernels. An open-source Python library containing an implementation of all discussed kernels is publicly available on GitHub to the community, thus allowing to promote and facilitate the use of graph kernels in machine learning problems.\n
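The kernels compared in the paper are implemented in the authors' open-source Python library; purely as a toy illustration of the general principle (decompose graphs into linear patterns, then count matching patterns), here is a minimal shortest-path delta kernel in plain Python. This is not the paper's implementation, and the example graphs are hypothetical.

```python
from itertools import combinations

def floyd_warshall(adj):
    """All-pairs shortest-path lengths for an unweighted, undirected graph
    given as an adjacency structure (dict: node -> set of neighbours)."""
    nodes = list(adj)
    INF = float("inf")
    d = {u: {v: (0 if u == v else (1 if v in adj[u] else INF)) for v in nodes}
         for u in nodes}
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def shortest_path_kernel(adj1, adj2):
    """Delta kernel on shortest-path lengths: count pairs of vertex pairs,
    one from each graph, whose shortest-path lengths coincide."""
    def lengths(adj):
        d = floyd_warshall(adj)
        return [d[u][v] for u, v in combinations(list(adj), 2)
                if d[u][v] != float("inf")]
    l1, l2 = lengths(adj1), lengths(adj2)
    return sum(1 for a in l1 for b in l2 if a == b)

# Two toy graphs: a triangle and a path on three vertices.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
k = shortest_path_kernel(triangle, path3)   # → 6
```

The triangle yields path lengths [1, 1, 1] and the path [1, 2, 1], so six pairs of lengths match.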
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n SynWoodScape: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving.\n \n \n \n \n\n\n \n Sekkat, A. R.; Dupuis, Y.; Kumar, V. R.; Rashed, H.; Yogamani, S.; Vasseur, P.; and Honeine, P.\n\n\n \n\n\n\n In Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022), 20 October 2022. \n \n\n\n\n
\n\n\n\n \n \n \"SynWoodScape: link\n  \n \n \n \"SynWoodScape: paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@inproceedings{22.iros.SynWoodScape,\n   author =  "Ahmed Rida Sekkat and Yohan Dupuis and Varun Ravi Kumar and Hazem Rashed and Senthil Yogamani and Pascal Vasseur and Paul Honeine",\n   title =  "{SynWoodScape}: Synthetic Surround-view Fisheye Camera Dataset for Autonomous Driving",\n   booktitle =  "Proceedings of the 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022)",\n   year  =  "2022",\n   month = "20~" # oct,\n   url_link = {https://arxiv.org/abs/2203.05056},\n   url_paper = {https://arxiv.org/pdf/2203.05056.pdf},\n   keywords  =  "Fisheye Cameras, Omni-directional Vision, Automated Driving, Synthetic Datasets",\n   abstract = "Surround-view cameras are a primary sensor for automated driving, used for near field perception. It is one of the most commonly used sensors in commercial vehicles. Four fisheye cameras with a 190° field of view cover the 360° around the vehicle. Due to its high radial distortion, the standard algorithms do not extend easily. Previously, we released the first public fisheye surround-view dataset named WoodScape. In this work, we release a synthetic version of the surround-view dataset, covering many of its weaknesses and extending it. Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth. Secondly, WoodScape did not have all four cameras simultaneously in order to sample diverse frames. However, this means that multi-camera algorithms cannot be designed, which is enabled in the new dataset. We implemented surround-view fisheye geometric projections in CARLA Simulator matching WoodScape's configuration and created SynWoodScape. We release 80k images from the synthetic dataset with annotations for 10+ tasks. We also release the baseline code and supporting scripts."\n}\n\n
\n
\n\n\n
\n Surround-view cameras are a primary sensor for automated driving, used for near field perception. It is one of the most commonly used sensors in commercial vehicles. Four fisheye cameras with a 190° field of view cover the 360° around the vehicle. Due to its high radial distortion, the standard algorithms do not extend easily. Previously, we released the first public fisheye surround-view dataset named WoodScape. In this work, we release a synthetic version of the surround-view dataset, covering many of its weaknesses and extending it. Firstly, it is not possible to obtain ground truth for pixel-wise optical flow and depth. Secondly, WoodScape did not have all four cameras simultaneously in order to sample diverse frames. However, this means that multi-camera algorithms cannot be designed, which is enabled in the new dataset. We implemented surround-view fisheye geometric projections in CARLA Simulator matching WoodScape's configuration and created SynWoodScape. We release 80k images from the synthetic dataset with annotations for 10+ tasks. We also release the baseline code and supporting scripts.\n
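The paper implements surround-view fisheye geometric projections in the CARLA Simulator. As a hedged illustration only, the classic equidistant fisheye model (image radius proportional to the incidence angle, r = f·θ) sketches the idea; the actual WoodScape/SynWoodScape cameras use a more elaborate polynomial distortion model, and the focal length and principal point below are made up.

```python
import math

def equidistant_fisheye_project(X, Y, Z, f, cx, cy):
    """Project a 3D point (camera frame, Z along the optical axis) with the
    equidistant fisheye model: the image radius grows linearly with the
    angle theta between the ray and the optical axis (r = f * theta)."""
    theta = math.atan2(math.hypot(X, Y), Z)   # angle off the optical axis
    r = f * theta                             # equidistant mapping
    phi = math.atan2(Y, X)                    # azimuth in the image plane
    return cx + r * math.cos(phi), cy + r * math.sin(phi)

# A point on the optical axis lands exactly at the principal point.
u, v = equidistant_fisheye_project(0.0, 0.0, 5.0, f=300.0, cx=640.0, cy=480.0)
```

Unlike the pinhole model, θ can exceed 90°, which is how a single fisheye covers a 190° field of view.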
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Contrôle d’un système multi-CNN via le cap magnétique du smartphone pour la reconnaissance de scènes indoor.\n \n \n \n \n\n\n \n Daou, A.; Pothin, J.; Honeine, P.; and Bensrhair, A.\n\n\n \n\n\n\n In Actes du 28-ème Colloque GRETSI sur le Traitement du Signal et des Images, Nancy, France, 6 - 9 September 2022. \n \n\n\n\n
\n\n\n\n \n \n \"Contrôle paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@INPROCEEDINGS{22.gretsi.indoor,\n   author =  "Andrea Daou and Jean-Baptiste Pothin and Paul Honeine and Abdelaziz Bensrhair",\n   title =  "Contrôle d’un système multi-CNN via le cap magnétique du smartphone pour la reconnaissance de scènes indoor",\n   booktitle =  "Actes du 28-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Nancy, France",\n   year  =  "2022",\n   month =  "6 - 9~" # sep,\n   acronym =  "GRETSI'22",\n   url_paper  =  "http://honeine.fr/paul/publi/22.gretsi.indoor.pdf",\n   abstract = "En vision par ordinateur, la reconnaissance de scènes indoor consiste à identifier une pièce à partir d’une image. La difficulté majeure réside dans la complexité élevée des environnements intérieurs par rapport à ceux extérieurs. Le présent article propose un système de classification de scènes indoor basé sur les capteurs intégrés dans les smartphones. La méthode proposée repose sur la combinaison des informations visuelles et du cap magnétique du smartphone. Le système comporte plusieurs CNN directionnels, chacun spécifiques pour une gamme définie d’orientations et guidés par le cap magnétique de la caméra du smartphone. L’utilisateur est localisé niveau-pièce en capturant simplement une image avec un smartphone. Les performances du système sont validées par des expérimentations sur un jeu de données réelles.",\n}\n\n
\n
\n\n\n
\n In computer vision, indoor scene recognition consists in identifying a room from an image. The main difficulty lies in the high complexity of indoor environments compared to outdoor ones. This paper proposes an indoor scene classification system based on the sensors embedded in smartphones. The proposed method combines visual information with the smartphone's magnetic heading. The system comprises several directional CNNs, each dedicated to a specific range of orientations and driven by the magnetic heading of the smartphone camera. The user is localized at room level by simply capturing an image with a smartphone. The performance of the system is validated through experiments on a real dataset.\n
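The abstract describes gating several directional CNNs by the smartphone's magnetic heading. A minimal sketch of that gating step, with hypothetical sector boundaries (the paper's actual orientation ranges and models are not reproduced here):

```python
def select_model(heading_deg, n_sectors=4):
    """Map a magnetic heading in degrees to the index of the directional
    model responsible for that range of orientations.
    With n_sectors=4, sector 0 is centred on North ([-45, 45) degrees),
    sector 1 on East, and so on clockwise."""
    width = 360.0 / n_sectors
    # Shift by half a sector so each sector is centred on its cardinal axis.
    return int(((heading_deg + width / 2) % 360.0) // width)

idx = select_model(10.0)   # → 0 (near North)
```

The selected index would then pick which directional CNN classifies the captured image.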
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptation de domaine en régression par alignement de décompositions non-négatives.\n \n \n \n \n\n\n \n Dhaini, M.; Berar, M.; Honeine, P.; and Exem, A. V.\n\n\n \n\n\n\n In Actes du 28-ème Colloque GRETSI sur le Traitement du Signal et des Images, Nancy, France, 6 - 9 September 2022. \n \n\n\n\n
\n\n\n\n \n \n \"Adaptation paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@INPROCEEDINGS{22.gretsi.adaptation,\n   author =  " Mohamad Dhaini and Maxime Berar and Paul Honeine and Antonin Van Exem",\n   title =  "Adaptation de domaine en régression par alignement de décompositions non-négatives",\n   booktitle =  "Actes du 28-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Nancy, France",\n   year  =  "2022",\n   month =  "6 - 9~" # sep,\n   acronym =  "GRETSI'22",\n   url_paper  =  "http://honeine.fr/paul/publi/22.gretsi.adaptation.pdf",\n   abstract = "Domain Adaptation methods seek to generalize the knowledge learned on a labeled source domain across another unlabeled target domain. Most of the deep learning methods for domain adaptation address the classification task, while regression models are still one step behind with some positive results in a shallow framework. Existing deep models for regression adaptation tasks rely on aligning the eigenvectors of both source and target data. This process, although providing satisfying results, is however unstable and not a backpropagation-friendly process. In this paper, we present a novel deep adaptation model based on aligning the non-negative sub-spaces derived from source and target domains. Removing the orthogonality constraints makes the model more stable for training. The proposed method is evaluated on a domain adaptation regression benchmark. Results show competitive performance compared to state-of-the-art models.",\n}\n\n
\n
\n\n\n
\n Domain Adaptation methods seek to generalize the knowledge learned on a labeled source domain across another unlabeled target domain. Most of the deep learning methods for domain adaptation address the classification task, while regression models are still one step behind with some positive results in a shallow framework. Existing deep models for regression adaptation tasks rely on aligning the eigenvectors of both source and target data. This process, although providing satisfying results, is however unstable and not a backpropagation-friendly process. In this paper, we present a novel deep adaptation model based on aligning the non-negative sub-spaces derived from source and target domains. Removing the orthogonality constraints makes the model more stable for training. The proposed method is evaluated on a domain adaptation regression benchmark. Results show competitive performance compared to state-of-the-art models.\n
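The method aligns non-negative decompositions of the source and target domains. As a hedged sketch of the building block only (not the paper's alignment model), the classic Lee-Seung multiplicative updates factor a non-negative matrix X ≈ WH while keeping both factors non-negative:

```python
import numpy as np

def nmf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for X ~= W @ H with W, H >= 0.
    The element-wise update ratios keep the factors non-negative and
    decrease the Frobenius reconstruction error."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# An exactly rank-2 non-negative matrix: the factorization is near-exact.
X = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 5.0]])
W, H = nmf(X, rank=2)
```

Dropping the orthogonality constraint of eigenvector-based subspaces is precisely what the abstract argues makes training more stable.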
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Normalizing Flow appliqué aux problèmes de pré-image de noyau.\n \n \n \n \n\n\n \n Glédel, C.; Gaüzère, B.; and Honeine, P.\n\n\n \n\n\n\n In 24-ème Conférence d'Apprentissage automatique (CAp) - 24th annual meeting of the francophone Machine Learning community, Vannes, France, 5 - 8 July 2022. \n \n\n\n\n
\n\n\n\n \n \n \"Normalizing paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{22.cap,\n   author =  "Clément Glédel and Benoît Gaüzère and Paul Honeine",\n   title =  "Normalizing Flow appliqué aux problèmes de pré-image de noyau",\n   booktitle =  "24-ème Conférence d'Apprentissage automatique (CAp) - 24th annual meeting of the francophone Machine Learning community",\n   address =  "Vannes, France",\n   year =  "2022",\n   month =  "5 - 8~" # jul,\n   keywords =  "Normalizing Flow, Méthodes à noyaux, Pré-image, Reconnaissance des formes",\n   acronym =  "CAp",\n   url_paper  =  "http://honeine.fr/paul/publi/22.cap.pdf",\n   abstract = "Dans cet article, nous proposons une approche permettant la résolution du problème de pré-image rencontré en reconnaissance de formes et apprentissage statistique notamment les méthodes à noyaux. Pour cela, nous proposons d’utiliser les récents modèles génératifs appelés Normalizing Flows (NF) qui permettent de construire une distribution simple à partir d’une distribution complexe par une série de fonctions bijectives, bénéficiant ainsi d’une efficace génération de données grâce à son inversibilité. Nous proposons d’aligner l’espace généré par le NF sur l’espace de noyau, ce qui permet de résoudre le problème de pré-image grâce à la nature inversible du NF. Les performances de la méthode proposée sont validées sur le jeu de données MNIST, démontrant la capacité des NF à résoudre efficacement le problème de pré-image.",\n}\n\n\n\n
\n
\n\n\n
\n Dans cet article, nous proposons une approche permettant la résolution du problème de pré-image rencontré en reconnaissance de formes et apprentissage statistique notamment les méthodes à noyaux. Pour cela, nous proposons d’utiliser les récents modèles génératifs appelés Normalizing Flows (NF) qui permettent de construire une distribution simple à partir d’une distribution complexe par une série de fonctions bijectives, bénéficiant ainsi d’une efficace génération de données grâce à son inversibilité. Nous proposons d’aligner l’espace généré par le NF sur l’espace de noyau, ce qui permet de résoudre le problème de pré-image grâce à la nature inversible du NF. Les performances de la méthode proposée sont validées sur le jeu de données MNIST, démontrant la capacité des NF à résoudre efficacement le problème de pré-image.\n
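The approach rests on the exact invertibility of normalizing flows: mapping back from the aligned space is analytic, which is what makes the pre-image tractable. A toy one-dimensional affine bijection (the simplest flow layer, not the paper's model) illustrates the property:

```python
import math

class AffineFlow:
    """One-dimensional affine bijection z = s*x + t, the simplest
    normalizing-flow layer. Its inverse is exact, which is what makes
    flows attractive for pre-image problems."""
    def __init__(self, s, t):
        assert s != 0.0
        self.s, self.t = s, t

    def forward(self, x):
        return self.s * x + self.t

    def inverse(self, z):
        return (z - self.t) / self.s

    def log_abs_det_jacobian(self):
        # Change-of-variables term used when evaluating densities.
        return math.log(abs(self.s))

flow = AffineFlow(s=2.0, t=1.0)
z = flow.forward(3.5)        # → 8.0
x_back = flow.inverse(z)     # exactly recovers 3.5
```

Real flows stack many such bijections (e.g. coupling layers); invertibility of the composition follows from invertibility of each layer.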
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Method for analyzing soil pollution.\n \n \n \n \n\n\n \n Exem, A. V.; Honeine, P.; and Mignot, M.\n\n\n \n\n\n\n 2022.\n \n\n\n\n
\n\n\n\n \n \n \"Method link\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@patent{22.patent,\n  author  = {Antonin Van Exem and Paul Honeine and Mélanie Mignot},\n  title  = {Method for analyzing soil pollution},\n  year   = {2022},\n  howpublished  = {WO/2022/069827A1, FR3114653A1, FR3114653},\n  url_link  = "https://patentscope.wipo.int/search/en/detail.jsf?docId=FR356980836",\n  acronym =  "patent",\n  keywords  =  "pollution",\n  abstract = "L’invention présente un procédé d’analyse de la contamination de sols par des polluants, notamment organiques, par analyse hyperspectrale de la réflexion et/ou de la photoluminescence caractérisé en ce que ladite analyse est réalisée avec un premier équipement par l’éclairage d’un échantillon par une source lumineuse et par au moins un capteur spectral sensible sur un spectre allant de l’infrarouge thermique à l’ultraviolet.",\n}\n\n
\n
\n\n\n
\n The invention presents a method for analyzing soil contamination by pollutants, notably organic ones, through hyperspectral analysis of reflection and/or photoluminescence, characterized in that said analysis is carried out with a first piece of equipment by illuminating a sample with a light source and by at least one spectral sensor sensitive over a spectrum ranging from thermal infrared to ultraviolet.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effect of Prior-based Losses on Segmentation Performance: A Benchmark.\n \n \n \n \n\n\n \n El Jurdi, R.; Petitjean, C.; Cheplygina, V.; Honeine, P.; and Abdallah, F.\n\n\n \n\n\n\n Technical Report ArXiv, January 2022.\n \n\n\n\n
\n\n\n\n \n \n \"Effect link\n  \n \n \n \"Effect paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@TECHREPORT{22.effect,\n  title={Effect of Prior-based Losses on Segmentation Performance: A Benchmark},\n  author={Rosana {El Jurdi} and Caroline Petitjean and Veronika Cheplygina and Paul Honeine and Fahed Abdallah},\n  institution =  "ArXiv",\n  journal={arXiv preprint arXiv:2201.02428},\n  year={2022},\n  month = Jan,\n  url_link = {https://arxiv.org/abs/2201.02428},\n  url_paper = {https://arxiv.org/pdf/2201.02428.pdf},\n  keywords  =  "Prior-based loss functions, prior-constrained CNNs, medical image segmentation, deep learning",\n  abstract = "Today, deep convolutional neural networks (CNNs) have demonstrated state-of-the-art performance for medical image segmentation, on various imaging modalities and tasks. Despite early success, segmentation networks may still generate anatomically aberrant segmentations, with holes or inaccuracies near the object boundaries. To enforce anatomical plausibility, recent research studies have focused on incorporating expert knowledge also known as prior knowledge, such as object shapes or boundary, as constraints in the loss function. Prior integrated could be low-level referring to reformulated representations extracted from the ground-truth segmentations, or high-level representing external medical information such as the organ’s shape or size. Over the past few years, prior-based losses exhibited a rising interest in the research field since they allow integration of expert knowledge while still being architecture-agnostic. However, given the diversity of prior-based losses on different medical imaging challenges and tasks, it has become hard to identify what loss works best for which dataset. In this paper, we establish a benchmark of recent prior-based losses for medical image segmentation. The main objective is to provide intuition onto which losses to choose given a particular task or dataset, based on dataset characteristics and properties. To this end, four low-level and high-level prior-based losses are selected. 
The considered losses are validated on 8 different datasets from a variety of medical image segmentation challenges including the Decathlon, the ISLES and the WMH challenge. The proposed benchmark is conducted via a unified segmentation network and learning environment. The considered prior-based losses are varied in conjunction with the Dice loss across the different datasets. Results show that whereas low level prior-based losses can guarantee an increase in performance over the Dice loss baseline regardless of the dataset characteristics, high-level prior-based losses can increase anatomical plausibility as per data characteristic.",\n}\n\n
\n
\n\n\n
\n Today, deep convolutional neural networks (CNNs) have demonstrated state-of-the-art performance for medical image segmentation, on various imaging modalities and tasks. Despite early success, segmentation networks may still generate anatomically aberrant segmentations, with holes or inaccuracies near the object boundaries. To enforce anatomical plausibility, recent research studies have focused on incorporating expert knowledge also known as prior knowledge, such as object shapes or boundary, as constraints in the loss function. Prior integrated could be low-level referring to reformulated representations extracted from the ground-truth segmentations, or high-level representing external medical information such as the organ’s shape or size. Over the past few years, prior-based losses exhibited a rising interest in the research field since they allow integration of expert knowledge while still being architecture-agnostic. However, given the diversity of prior-based losses on different medical imaging challenges and tasks, it has become hard to identify what loss works best for which dataset. In this paper, we establish a benchmark of recent prior-based losses for medical image segmentation. The main objective is to provide intuition onto which losses to choose given a particular task or dataset, based on dataset characteristics and properties. To this end, four low-level and high-level prior-based losses are selected. The considered losses are validated on 8 different datasets from a variety of medical image segmentation challenges including the Decathlon, the ISLES and the WMH challenge. The proposed benchmark is conducted via a unified segmentation network and learning environment. The considered prior-based losses are varied in conjunction with the Dice loss across the different datasets. 
Results show that whereas low level prior-based losses can guarantee an increase in performance over the Dice loss baseline regardless of the dataset characteristics, high-level prior-based losses can increase anatomical plausibility as per data characteristic.\n
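The benchmark varies each prior-based loss in conjunction with the Dice loss baseline. A minimal soft-Dice sketch for the binary case, on flattened pixel values (not the benchmark's exact implementation; the smoothing constant is a common convention, not taken from the paper):

```python
def soft_dice_loss(pred, target, smooth=1.0):
    """Soft Dice loss for binary segmentation.
    `pred` holds probabilities in [0, 1] and `target` holds 0/1 labels,
    both as flat lists of pixel values. Returns 1 - Dice coefficient,
    so a perfect prediction gives a loss of 0."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    dice = (2.0 * inter + smooth) / (total + smooth)
    return 1.0 - dice

loss = soft_dice_loss([1.0, 0.0, 1.0], [1, 0, 1])   # → 0.0
```

Prior-based losses are typically added to this baseline as weighted penalty terms, which is the combination the benchmark evaluates.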
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2021\n \n \n (12)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Symbols Detection and Classification using Graph Neural Networks.\n \n \n \n \n\n\n \n Renton, G.; Balcilar, M.; Héroux, P.; Gaüzère, B.; Honeine, P.; and Adam, S.\n\n\n \n\n\n\n Pattern Recognition Letters, 152: 391-397. December 2021.\n \n\n\n\n
\n\n\n\n \n \n \"Symbols paper\n  \n \n \n \"Symbols link\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{21.gnn,\n   author =  "Guillaume Renton and Muhammet Balcilar and Pierre Héroux and Benoît Gaüzère and Paul Honeine and Sébastien Adam",\n   title =  {Symbols Detection and Classification using Graph Neural Networks},\n   journal = "Pattern Recognition Letters",\n   year  =  "2021",\n   month = dec,\n   volume = "152",\n   issn = "0167-8655",\n   pages = "391-397",\n   doi = {10.1016/j.patrec.2021.09.020},\n   url_paper   =  "http://honeine.fr/paul/publi/21.gnn.pdf",\n   url_link = {https://www.sciencedirect.com/science/article/pii/S0167865521003469},\n   keywords = {Graph Neural Network, Graphs, Floorplans},\n   abstract = {In this paper, we propose a method to both extract and classify symbols in floorplan images. This method relies on the very recent developments of Graph Neural Networks (GNN). In the proposed approach, floorplan images are first converted into Region Adjacency Graphs (RAGs). In order to achieve both classification and extraction, two different GNNs are used. The first one aims at classifying each node of the graph while the second targets the extraction of clusters corresponding to symbols. In both cases, the model is able to take into account edge features. Each model is firstly evaluated independently before combining both tasks simultaneously, increasing the quickness of the results.},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a method to both extract and classify symbols in floorplan images. This method relies on the very recent developments of Graph Neural Networks (GNN). In the proposed approach, floorplan images are first converted into Region Adjacency Graphs (RAGs). In order to achieve both classification and extraction, two different GNNs are used. The first one aims at classifying each node of the graph while the second targets the extraction of clusters corresponding to symbols. In both cases, the model is able to take into account edge features. Each model is firstly evaluated independently before combining both tasks simultaneously, increasing the quickness of the results.\n
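The GNNs operate on Region Adjacency Graphs by propagating information along edges. A minimal mean-aggregation message-passing step in plain Python (a toy illustration of the mechanism, not the paper's architecture, which also uses learned weights and edge features):

```python
def message_passing_step(adj, features):
    """One GNN aggregation step: each node's new feature vector is the
    mean of its own features and those of its neighbours.
    `adj` maps each node to a list of neighbours;
    `features` maps each node to a feature vector (list of floats)."""
    new_features = {}
    for u, fu in features.items():
        neigh = [features[v] for v in adj[u]] + [fu]   # include self
        dim = len(fu)
        new_features[u] = [sum(f[i] for f in neigh) / len(neigh)
                           for i in range(dim)]
    return new_features

# Tiny RAG-like graph: region 0 is adjacent to regions 1 and 2.
adj = {0: [1, 2], 1: [0], 2: [0]}
feats = {0: [0.0], 1: [3.0], 2: [6.0]}
out = message_passing_step(adj, feats)   # node 0 → mean(0, 3, 6) = 3.0
```

Stacking such steps lets each region's representation absorb context from neighbouring regions before node classification or clustering.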
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n CoordConv-Unet: Investigating CoordConv for Organ Segmentation.\n \n \n \n \n\n\n \n El Jurdi, R.; Petitjean, C.; Honeine, P.; and Abdallah, F.\n\n\n \n\n\n\n Innovation and Research in BioMedical engineering (IRBM), 42(6): 415-423. December 2021.\n \n\n\n\n
\n\n\n\n \n \n \"CoordConv-Unet: paper\n  \n \n \n \"CoordConv-Unet: link\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{21.coordconv-unet,\n   title = {{CoordConv-Unet: Investigating CoordConv for Organ Segmentation}},\n   journal = {Innovation and Research in BioMedical engineering (IRBM)},\n   year = {2021},\n   volume = {42},\n   number = {6},\n   month = dec,\n   issn = {1959-0318},\n   pages = "415-423",\n   doi = {10.1016/j.irbm.2021.03.002},\n   url_paper   =  "https://normandie-univ.hal.science/hal-03410507",\n   url_link = {https://www.sciencedirect.com/science/article/pii/S1959031821000324},\n   author =  "Rosana {El Jurdi} and Caroline Petitjean and Paul Honeine and Fahed Abdallah",\n   keywords = {Medical image segmentation, Fully convolutional networks, Prior-based losses, CoordConv, MRI, CT},\n   abstract = {Objectives: \nConvolutional neural networks (CNNs) have established state-of-the-art performance in computer vision tasks such as object detection and segmentation. One of the major remaining challenges concerns their ability to capture consistent spatial and anatomically plausible attributes in medical image segmentation. To address this issue, many works advocate to integrate prior information at the level of the loss function. However, prior-based losses often suffer from local solutions and training instability. The CoordConv layers are extensions of convolutional neural network wherein convolution is conditioned on spatial coordinates. The objective of this paper is to investigate CoordConv as a proficient substitute to convolutional layers for medical image segmentation tasks when trained under prior-based losses.\nMethods: \nThis work introduces CoordConv-Unet which is a novel structure that can be used to accommodate training under anatomical prior losses. 
The proposed architecture demonstrates a dual role relative to prior constrained CNN learning: it either demonstrates a regularizing role that stabilizes learning while maintaining system performance, or improves system performance by allowing the learning to be more stable and to evade local minima.\nResults: \nTo validate the performance of the proposed model, experiments are conducted on two well-known public datasets from the Decathlon challenge: a mono-modal MRI dataset dedicated to segmentation of the left atrium, and a CT image dataset whose objective is to segment the spleen, an organ characterized with varying size and mild convexity issues.\nConclusion: \nResults show that, despite the inadequacy of CoordConv when trained with the regular dice baseline loss, the proposed CoordConv-Unet structure can improve significantly model performance when trained under anatomically constrained prior losses.},\n}\n\n\n\n
\n
\n\n\n
\n Objectives: Convolutional neural networks (CNNs) have established state-of-the-art performance in computer vision tasks such as object detection and segmentation. One of the major remaining challenges concerns their ability to capture consistent spatial and anatomically plausible attributes in medical image segmentation. To address this issue, many works advocate to integrate prior information at the level of the loss function. However, prior-based losses often suffer from local solutions and training instability. The CoordConv layers are extensions of convolutional neural network wherein convolution is conditioned on spatial coordinates. The objective of this paper is to investigate CoordConv as a proficient substitute to convolutional layers for medical image segmentation tasks when trained under prior-based losses. Methods: This work introduces CoordConv-Unet which is a novel structure that can be used to accommodate training under anatomical prior losses. The proposed architecture demonstrates a dual role relative to prior constrained CNN learning: it either demonstrates a regularizing role that stabilizes learning while maintaining system performance, or improves system performance by allowing the learning to be more stable and to evade local minima. Results: To validate the performance of the proposed model, experiments are conducted on two well-known public datasets from the Decathlon challenge: a mono-modal MRI dataset dedicated to segmentation of the left atrium, and a CT image dataset whose objective is to segment the spleen, an organ characterized with varying size and mild convexity issues. Conclusion: Results show that, despite the inadequacy of CoordConv when trained with the regular dice baseline loss, the proposed CoordConv-Unet structure can improve significantly model performance when trained under anatomically constrained prior losses.\n
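CoordConv conditions convolution on spatial position by appending normalized coordinate channels to the input. A minimal sketch of that channel construction in plain Python (the paper's networks are full CNNs; only the coordinate-channel idea is shown here):

```python
def add_coord_channels(image):
    """Append CoordConv-style coordinate channels to an image.
    `image` is a list of channels, each an H x W grid (list of lists).
    Two channels are added, holding row and column indices normalized
    to [-1, 1], so subsequent convolutions can see absolute position."""
    h = len(image[0])
    w = len(image[0][0])
    norm = lambda i, n: 2.0 * i / (n - 1) - 1.0 if n > 1 else 0.0
    row_ch = [[norm(i, h) for _ in range(w)] for i in range(h)]
    col_ch = [[norm(j, w) for j in range(w)] for i in range(h)]
    return image + [row_ch, col_ch]

img = [[[0.0] * 4 for _ in range(3)]]   # one 3x4 channel
out = add_coord_channels(img)           # now three channels
```

Because the coordinate channels carry absolute position, the network can learn spatially consistent segmentations, which is the property the paper exploits under anatomical prior losses.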
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n High-level prior-based loss functions for medical image segmentation: A survey.\n \n \n \n \n\n\n \n El Jurdi, R.; Petitjean, C.; Honeine, P.; Cheplygina, V.; and Abdallah, F.\n\n\n \n\n\n\n Computer Vision and Image Understanding, 210: 103248. September 2021.\n \n\n\n\n
\n\n\n\n \n \n \"High-level link\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{21.prior_survey,\n\tauthor = {Rosana {El Jurdi} and Caroline Petitjean and Paul Honeine and Veronika Cheplygina and Fahed Abdallah},\n\ttitle = {High-level prior-based loss functions for medical image segmentation: A survey},\n\tjournal = {Computer Vision and Image Understanding},\n\tissn = {1077-3142},\n\tpages = {103248},\n\tyear = {2021},\n\tmonth = sep,\n\tvolume = {210},\n\tdoi = {10.1016/j.cviu.2021.103248},\n\turl_link = {https://www.sciencedirect.com/science/article/pii/S1077314221000928},\n\tkeywords = {Prior-based loss functions, Anatomical constraint losses, Convolutional neural networks, Medical image segmentation, Deep learning},\n\tabstract = {Today, deep convolutional neural networks (CNNs) have demonstrated state of the art performance for supervised medical image segmentation, across various imaging modalities and tasks. Despite early success, segmentation networks may still generate anatomically aberrant segmentations, with holes or inaccuracies near the object boundaries. To mitigate this effect, recent research works have focused on incorporating spatial information or prior knowledge to enforce anatomically plausible segmentation. If the integration of prior knowledge in image segmentation is not a new topic in classical optimization approaches, it is today an increasing trend in CNN based image segmentation, as shown by the growing literature on the topic. In this survey, we focus on high level prior, embedded at the loss function level. We categorize the articles according to the nature of the prior: the object shape, size, topology, and the inter-regions constraints. We highlight strengths and limitations of current approaches, discuss the challenge related to the design and the integration of prior-based losses, and the optimization strategies, and draw future research directions.},\n}\n\n\n\n\n\n
\n
\n\n\n
\n Today, deep convolutional neural networks (CNNs) have demonstrated state-of-the-art performance for supervised medical image segmentation across various imaging modalities and tasks. Despite early success, segmentation networks may still generate anatomically aberrant segmentations, with holes or inaccuracies near the object boundaries. To mitigate this effect, recent research works have focused on incorporating spatial information or prior knowledge to enforce anatomically plausible segmentation. While the integration of prior knowledge in image segmentation is not a new topic in classical optimization approaches, it is today an increasing trend in CNN-based image segmentation, as shown by the growing literature on the topic. In this survey, we focus on high-level priors embedded at the loss-function level. We categorize the articles according to the nature of the prior: the object shape, size, topology, and the inter-region constraints. We highlight the strengths and limitations of current approaches, discuss the challenges related to the design and integration of prior-based losses and the optimization strategies, and draw future research directions.\n
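As a minimal sketch of what "a high-level prior embedded at the loss-function level" can look like (not taken from the survey; the size bounds are invented for illustration), a size prior penalizes predicted object sizes that fall outside known anatomical bounds:

```python
import numpy as np

def size_prior_loss(probs, size_min, size_max):
    """Penalize the soft size (sum of foreground probabilities) of a
    predicted segmentation when it leaves the interval [size_min, size_max].
    Inside the interval the penalty is zero; outside it grows quadratically."""
    soft_size = probs.sum()
    under = max(0.0, size_min - soft_size)
    over = max(0.0, soft_size - size_max)
    return under ** 2 + over ** 2

# a toy 8x8 probability map with soft size 32
pred = np.full((8, 8), 0.5)
```

In practice such a term is added to a base loss (e.g. Dice or cross-entropy) with a weighting factor, which is exactly the design and integration challenge the survey discusses.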
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n graphkit-learn: A Python Library for Graph Kernels Based on Linear Patterns.\n \n \n \n \n\n\n \n Jia, L.; Gaüzère, B.; and Honeine, P.\n\n\n \n\n\n\n Pattern Recognition Letters, 143: 113-121. March 2021.\n \n\n\n\n
\n\n\n\n \n \n \"graphkit-learn: code\n  \n \n \n \"graphkit-learn: link\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@article{21.graphkit-learn,\n   author =  "Linlin Jia and Benoît Gaüzère and Paul Honeine",\n   title =  {{graphkit-learn: A Python Library for Graph Kernels Based on Linear Patterns}},\n   journal = "Pattern Recognition Letters",\n   year  =  "2021",\n   volume = {143},\n   pages = {113-121},\n   month = mar,\n   issn = "0167-8655",\n   doi = "10.1016/j.patrec.2021.01.003",\n   url_code = "https://graphkit-learn.readthedocs.io/en/master/",\n   url_link = "http://www.sciencedirect.com/science/article/pii/S0167865521000131",\n   keywords  =  "Machine learning, Graph Kernels, Linear Patterns, Python Implementation",\n   abstract = "This paper presents graphkit-learn, the first Python library for efficient computation of graph kernels based on linear patterns, able to address various types of graphs. Graph kernels based on linear patterns are thoroughly implemented, each with specific computing methods, as well as two well-known graph kernels based on non-linear patterns for comparative analysis. Since computational complexity is an Achilles’ heel of graph kernels, we provide several strategies to address this critical issue, including parallelization, the trie data structure, and the FCSP method that we extend to other kernels and edge comparison. All proposed strategies save orders of magnitudes of computing time and memory usage. Moreover, all the graph kernels can be simply computed with a single Python statement, thus are appealing to researchers and practitioners. For the convenience of use, an advanced model selection procedure is provided for both regression and classification problems. Experiments on synthesized datasets and 11 real-world benchmark datasets show the relevance of the proposed library."\n}\n\n
\n
\n\n\n
\n This paper presents graphkit-learn, the first Python library for the efficient computation of graph kernels based on linear patterns, able to address various types of graphs. Graph kernels based on linear patterns are thoroughly implemented, each with specific computing methods, as well as two well-known graph kernels based on non-linear patterns for comparative analysis. Since computational complexity is an Achilles' heel of graph kernels, we provide several strategies to address this critical issue, including parallelization, the trie data structure, and the FCSP method, which we extend to other kernels and to edge comparison. All proposed strategies save orders of magnitude of computing time and memory usage. Moreover, all the graph kernels can be computed with a single Python statement, making them appealing to researchers and practitioners. For convenience of use, an advanced model selection procedure is provided for both regression and classification problems. Experiments on synthesized datasets and 11 real-world benchmark datasets show the relevance of the proposed library.\n
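To make the notion of a linear-pattern graph kernel concrete, here is a toy shortest-path-style kernel written from scratch; it only mimics the family of kernels the library implements and is not graphkit-learn's API:

```python
import numpy as np

def shortest_paths(adj):
    """All-pairs shortest-path lengths (Floyd-Warshall) on an
    unweighted adjacency matrix."""
    n = len(adj)
    d = np.where(adj > 0, 1.0, np.inf)
    np.fill_diagonal(d, 0.0)
    for k in range(n):
        d = np.minimum(d, d[:, [k]] + d[[k], :])
    return d

def sp_kernel(adj1, adj2):
    """Toy shortest-path kernel: count pairs of node pairs whose
    finite path lengths match across the two graphs."""
    d1, d2 = shortest_paths(adj1), shortest_paths(adj2)
    n1, n2 = len(d1), len(d2)
    p1 = [d1[i, j] for i in range(n1) for j in range(i + 1, n1) if np.isfinite(d1[i, j])]
    p2 = [d2[i, j] for i in range(n2) for j in range(i + 1, n2) if np.isfinite(d2[i, j])]
    return float(sum(1 for a in p1 for b in p2 if a == b))

# a 3-node path graph and a triangle
path3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
```

The O(n^3) distance computation per graph hints at why the complexity-reduction strategies in the paper (parallelization, tries, FCSP) matter in practice.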
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Amélioration des performances des réseaux de neurones convolutifs en localisation indoor par augmentation des données.\n \n \n \n \n\n\n \n Daou, A.; Pothin, J.; Honeine, P.; and Bensrhair, A.\n\n\n \n\n\n\n In Actes de la 18-ème édition d'ORASIS (journées francophones des jeunes chercheurs en vision par ordinateur), Lac de Saint-Ferréol, France, 13 - 17 September 2021. \n \n\n\n\n
\n\n\n\n \n \n \"Amélioration paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{21.orasis,\n   author =  "Andrea Daou and Jean-Baptiste Pothin and Paul Honeine and Abdelaziz Bensrhair",\n   title =  "Amélioration des performances des réseaux de neurones convolutifs en localisation indoor par augmentation des données",\n   booktitle =  "Actes de la 18-ème édition d'ORASIS (journées francophones des jeunes chercheurs en vision par ordinateur)",\n   address =  "Lac de Saint-Ferréol, France",\n   year  =  "2021",\n   month =  "13 - 17~" # sep,\n   keywords  =  "machine learning, computer vision, deep learning",\n   acronym =  "ORASIS'21",\n   url_paper  =  "http://honeine.fr/paul/publi/21.orasis.pdf",\n   abstract = "Les réseaux de neurones convolutifs (CNN) offrent des performances remarquables dans la détection et la reconnaissance d'objets, en grande partie grâce aux jeux de données volumineux et de qualité existants. Ces performances se détériorent en localisation indoor à cause de la faible quantité de données disponibles. Une autre limite des CNN est leur robustesse réduite aux transformations géométriques comme la rotation et le changement d'échelle. Pour pallier ces défauts, nous analysons l'effet de l'augmentation des données sur les performances du système de classification, en ajoutant des images modifiées pour tenir compte des changements de point de vue représentatifs. Pour compenser les temps de calcul plus longs, nous utilisons un modèle CNN pré-entraîné et appliquons un apprentissage par transfert. Les résultats obtenus sur les jeux de données d'images de scènes MIT Indoor67 et Scene15 montrent l'intérêt de l'approche proposée. %\n--- %\nConvolutional Neural Networks (CNNs) offer remarkable performance in object detection and recognition tasks, mainly thanks to large-scale high-quality datasets. This performance deteriorates in indoor localization because of the small amount of data available. Another limitation of CNNs is their reduced robustness to geometric transformations, such as rotation and scaling. 
To overcome these shortcomings, we analyze the effect of data augmentation on the performance of the classification system by adding modified images to account for representative perspective changes. To compensate for the long computing delays, we use a pre-trained CNN model and apply transfer learning. The results obtained on the MIT Indoor67 and Scene15 datasets demonstrate the relevance of the proposed method.",\n}\n\n\n
\n
\n\n\n
\n Convolutional Neural Networks (CNNs) offer remarkable performance in object detection and recognition tasks, mainly thanks to existing large-scale, high-quality datasets. This performance deteriorates in indoor localization because of the small amount of data available. Another limitation of CNNs is their reduced robustness to geometric transformations, such as rotation and scaling. To overcome these shortcomings, we analyze the effect of data augmentation on the performance of the classification system, by adding modified images that account for representative viewpoint changes. To compensate for the longer computing times, we use a pre-trained CNN model and apply transfer learning. The results obtained on the MIT Indoor67 and Scene15 scene-image datasets demonstrate the relevance of the proposed method.\n
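A minimal sketch of the kind of geometric augmentation discussed (flips and rotations that mimic viewpoint changes); the exact transformations used in the paper are not reproduced here:

```python
import numpy as np

def augment(image):
    """Return simple geometric variants of an image array:
    a horizontal flip plus the three non-trivial 90-degree rotations."""
    variants = [np.fliplr(image)]
    for k in range(1, 4):
        variants.append(np.rot90(image, k))
    return variants

img = np.arange(12).reshape(3, 4)
aug = augment(img)
```

Each training image then contributes several samples, which is the mechanism by which augmentation compensates for the scarcity of indoor-localization data.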
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Breaking the Limits of Message Passing Graph Neural Networks.\n \n \n \n \n\n\n \n Balcilar, M.; Héroux, P.; Gaüzère, B.; Vasseur, P.; Adam, S.; and Honeine, P.\n\n\n \n\n\n\n In Meila, M.; and Zhang, T., editor(s), Proceedings of the 38th International Conference on Machine Learning (ICML), volume 139, of Proceedings of Machine Learning Research, pages 599–608, Vienna, Austria, 18 - 24 July 2021. PMLR\n \n\n\n\n
\n\n\n\n \n \n \"Breaking paper\n  \n \n \n \"Breaking link\n  \n \n \n \"Breaking slides\n  \n \n \n \"Breaking poster\n  \n \n \n \"Breaking video\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@INPROCEEDINGS{21.icml.gnn,\n   title={Breaking the Limits of Message Passing Graph Neural Networks},\n   author={Muhammet Balcilar and Pierre H{\\'e}roux and Benoît Ga{\\"u}z{\\`e}re and Pascal Vasseur and S{\\'e}bastien Adam and Paul Honeine},\n   booktitle={Proceedings of the 38th International Conference on Machine Learning (ICML)},\n   pages = "599--608",\n   editor = \t"Meila, Marina and Zhang, Tong",\n   address =  "Vienna, Austria",\n   series = "Proceedings of Machine Learning Research",\n   publisher =    {PMLR},\n   volume = "139",\n   year  =  "2021",\n   month =  "18 - 24~" # jul,\n   acronym =  "ICML",\n   url_paper  =  "http://honeine.fr/paul/publi/21.icml.gnn.pdf",\n   url_link = "http://proceedings.mlr.press/v139/balcilar21a.html",\n   url_slides = "http://honeine.fr/paul/publi/21.icml.gnn_slides.pdf",\n   url_poster = "http://honeine.fr/paul/publi/21.icml.gnn_poster.pdf",\n   url_video =  "https://icml.cc/virtual/2021/spotlight/8578",\n   abstract = "Since the Message Passing (Graph) Neural Networks (MPNNs) have a linear complexity with respect to the number of nodes when applied to sparse graphs, they have been widely implemented and still raise a lot of interest even though their theoretical expressive power is limited to the first order Weisfeiler-Lehman test (1-WL). In this paper, we show that if the graph convolution supports are designed in spectral-domain by a non-linear custom function of eigenvalues and masked with an arbitrary large receptive field, the MPNN is theoretically more powerful than the 1-WL test, experimentally as powerful as a 3-WL existing model and is spatially localized. 
Moreover, by designing custom filter functions, outputs can have various frequency components that allow the convolution process to learn different relationships between given input graph signals and their associated properties.\nSo far, the best 3-WL equivalent graph neural networks have a computational complexity in $\\mathcal{O}(n^3)$ with memory usage in $\\mathcal{O}(n^2)$, consider non-local update mechanism and do not provide the spectral richness of output profile. The proposed method overcomes all these aforementioned problems and reaches state-of-the-art results in many downstream tasks.",\n}\n\n\n\n
\n
\n\n\n
\n Since Message Passing (Graph) Neural Networks (MPNNs) have linear complexity with respect to the number of nodes when applied to sparse graphs, they have been widely implemented and still attract considerable interest, even though their theoretical expressive power is limited to the first-order Weisfeiler-Lehman test (1-WL). In this paper, we show that if the graph convolution supports are designed in the spectral domain by a non-linear custom function of eigenvalues and masked with an arbitrarily large receptive field, the MPNN is theoretically more powerful than the 1-WL test, experimentally as powerful as an existing 3-WL model, and is spatially localized. Moreover, by designing custom filter functions, outputs can have various frequency components that allow the convolution process to learn different relationships between given input graph signals and their associated properties. So far, the best 3-WL-equivalent graph neural networks have a computational complexity in $\\mathcal{O}(n^3)$ with memory usage in $\\mathcal{O}(n^2)$, rely on a non-local update mechanism, and do not provide the spectral richness of the output profile. The proposed method overcomes all these aforementioned problems and reaches state-of-the-art results in many downstream tasks.\n
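A sketch of the general recipe described above (a convolution support designed in the spectral domain from a custom non-linear function of Laplacian eigenvalues, then masked to a local receptive field); the graph and the band-pass-like filter function are arbitrary illustrative choices, not the paper's:

```python
import numpy as np

# a small 4-node path graph
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)

deg = np.diag(adj.sum(axis=1))
lap = deg - adj                          # combinatorial graph Laplacian
evals, evecs = np.linalg.eigh(lap)

# custom non-linear function of the eigenvalues (band-pass-like bump)
fresp = np.exp(-(evals - 1.0) ** 2)

# dense spectral support, then masked to a 1-hop receptive field
support = evecs @ np.diag(fresp) @ evecs.T
mask = (adj + np.eye(4)) > 0
local_support = np.where(mask, support, 0.0)
```

Masking keeps the operator spatially localized (hence usable inside an MPNN) while the eigenvalue function shapes its frequency response.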
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Surprisingly Effective Perimeter-based Loss for Medical Image Segmentation.\n \n \n \n \n\n\n \n El Jurdi, R.; Petitjean, C.; Honeine, P.; Cheplygina, V.; and Abdallah, F.\n\n\n \n\n\n\n In Heinrich, M.; Dou, Q.; de Bruijne, M.; Lellmann, J.; Schläfer, A.; and Ernst, F., editor(s), Proceedings of the fourth conference on Medical Imaging with Deep Learning (MIDL), volume 143, of Proceedings of Machine Learning Research, pages 158–167, Lübeck, Germany, 7 - 9 July 2021. PMLR\n \n\n\n\n
\n\n\n\n \n \n \"A paper\n  \n \n \n \"A slides\n  \n \n \n \"A link\n  \n \n \n \"A poster\n  \n \n \n \"A video\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{21.midl,\n   author =  "Rosana {El Jurdi} and Caroline Petitjean and Paul Honeine and  Veronika Cheplygina and Fahed Abdallah",\n   title =  "A Surprisingly Effective Perimeter-based Loss for Medical Image Segmentation",\n   booktitle =  "Proceedings of the fourth conference on Medical Imaging with Deep Learning (MIDL)",\n   address =  "Lübeck, Germany",\n   pages = "158--167",\n   editor = \t"Heinrich, Mattias and Dou, Qi and de Bruijne, Marleen and Lellmann, Jan and Schläfer, Alexander and Ernst, Floris",\n   series = "Proceedings of Machine Learning Research",\n   publisher =    {PMLR},\n   volume = "143",\n   year  =  "2021",\n   month =  "7 - 9~" # jul,\n   acronym =  "MIDL",\n   url_paper  =  "http://honeine.fr/paul/publi/21.midl.paper.pdf",\n   url_slides = "http://honeine.fr/paul/publi/21.midl.poster.pdf",\n   url_link = "https://proceedings.mlr.press/v143/el-jurdi21a.html",\n   url_poster = "http://honeine.fr/paul/publi/21.midl.poster.pdf",\n   url_video = "https://www.youtube.com/watch?v=QYimu8cNjKs",\n   keywords = "Medical Image Segmentation, Convolutional Neural Networks, Prior-based Losses, Perimeter Constraint",\n   abstract = "Deep convolutional networks recently made many breakthroughs in medical image segmentation. Still, some anatomical artefacts may be observed in the segmentation results, with holes or inaccuracies near the object boundaries. To address these issues, loss functions that incorporate constraints, such as spatial information or prior knowledge, have been introduced. An example of such prior losses are the contour-based losses, which exploit distance maps to conduct point-by-point optimization between ground-truth and predicted contours. However, such losses may be computationally expensive or susceptible to local solutions and vanishing gradient problems. The problem becomes more challenging for organs with non-convex shapes or border irregularities. 
We propose a novel loss constraint that optimizes the perimeter of the segmented object relative to the ground-truth segmentation. The novelty lies in computing the perimeter with a soft approximation of the contour of the probability map as the difference between dilation and erosion of predicted segmentations carried out via specialized layers in the network. Moreover, instead of point-by-point minimization, we optimize the mean squared error between the sums of the overall ground-truth vs. predicted contours. This soft optimization of contour boundaries allows the network to take into consideration border irregularities within organs while still being efficient. Our experiments on three public datasets (spleen, hippocampus and cardiac structures) show that the proposed method outperforms state-of-the-art boundary losses for both single and multi-organ segmentation.",\n}\n\n\n\n
\n
\n\n\n
\n Deep convolutional networks have recently made many breakthroughs in medical image segmentation. Still, anatomical artefacts may be observed in the segmentation results, with holes or inaccuracies near the object boundaries. To address these issues, loss functions that incorporate constraints, such as spatial information or prior knowledge, have been introduced. Examples of such prior losses are the contour-based losses, which exploit distance maps to conduct point-by-point optimization between ground-truth and predicted contours. However, such losses may be computationally expensive or susceptible to local solutions and vanishing gradient problems. The problem becomes more challenging for organs with non-convex shapes or border irregularities. We propose a novel loss constraint that optimizes the perimeter of the segmented object relative to the ground-truth segmentation. The novelty lies in computing the perimeter with a soft approximation of the contour of the probability map, as the difference between dilation and erosion of predicted segmentations carried out via specialized layers in the network. Moreover, instead of point-by-point minimization, we optimize the mean squared error between the summed ground-truth and predicted contours. This soft optimization of contour boundaries allows the network to take border irregularities within organs into consideration while remaining efficient. Our experiments on three public datasets (spleen, hippocampus and cardiac structures) show that the proposed method outperforms state-of-the-art boundary losses for both single- and multi-organ segmentation.\n
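The perimeter computation described above (contour as dilation minus erosion of the soft prediction, then a squared error between summed perimeters) can be sketched with plain array operations; this is a simplified re-implementation with a fixed 3x3 structuring element, not the authors' code:

```python
import numpy as np

def soft_contour(probs):
    """Soft contour of a 2D probability map: 3x3 dilation minus 3x3
    erosion, computed as the max minus the min over each neighborhood."""
    h, w = probs.shape
    p = np.pad(probs, 1, mode='edge')
    shifts = [p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
              for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    stack = np.stack(shifts)
    return stack.max(axis=0) - stack.min(axis=0)

def perimeter_loss(pred, target):
    """Squared error between the summed soft perimeters of the
    predicted and ground-truth maps."""
    return (soft_contour(pred).sum() - soft_contour(target).sum()) ** 2

# a hard 4x4 square inside an 8x8 map
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0
```

On soft probability maps the same expressions are differentiable almost everywhere, which is what lets the constraint be trained end to end.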
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Analyzing the Expressive Power of Graph Neural Networks in a Spectral Perspective.\n \n \n \n \n\n\n \n Balcilar, M.; Renton, G.; Héroux, P.; Gaüzère, B.; Adam, S.; and Honeine, P.\n\n\n \n\n\n\n In International Conference on Learning Representations (ICLR), Vienna, Austria, 4 May 2021. \n \n\n\n\n
\n\n\n\n \n \n \"Analyzing code\n  \n \n \n \"Analyzing paper\n  \n \n \n \"Analyzing link\n  \n \n \n \"Analyzing poster\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@INPROCEEDINGS{21.iclr.gnn,\n   title={Analyzing the Expressive Power of Graph Neural Networks in a Spectral Perspective},\n   author={Muhammet Balcilar and Guillaume Renton and Pierre H{\\'e}roux and Benoît Ga{\\"u}z{\\`e}re and S{\\'e}bastien Adam and Paul Honeine},\n   booktitle={International Conference on Learning Representations (ICLR)},\n  address =  "Vienna, Austria",\n   year  =  "2021",\n   month =  "4~" # may,\n   acronym =  "ICLR",\n   url_code  =  "https://github.com/balcilar/gnn-spectral-expressive-power",\n   url_paper  =  "http://honeine.fr/paul/publi/21.iclr.gnn.pdf",\n   url_link = "https://openreview.net/forum?id=-qh0M9XWxnv",\n   url_poster  =  "http://honeine.fr/paul/publi/21.iclr.gnn-poster.pdf",\n   abstract = "In the recent literature of Graph Neural Networks (GNN), the expressive power of models has been studied through their capability to distinguish if two given graphs are isomorphic or not. Since the graph isomorphism problem is NP-intermediate, and Weisfeiler-Lehman (WL) test  can give sufficient but not enough evidence in polynomial time, the theoretical power of GNNs is usually evaluated by the equivalence of WL-test order, followed by an empirical analysis of the models on some reference inductive and transductive datasets. However, such analysis does not account the signal processing pipeline, whose capability is generally evaluated in the spectral domain. In this paper, we argue that a spectral analysis of GNNs behavior can provide a complementary point of view to go one step further in the understanding of GNNs. By bridging the gap between the spectral and spatial design of graph convolutions, we theoretically demonstrate some equivalence of the graph convolution process regardless it is designed in the spatial or the spectral domain. Using this connection, we managed to re-formulate most of the state-of-the-art graph neural networks into one common framework. 
This general framework allows to lead a spectral analysis of the most popular GNNs, explaining their performance and showing their limits according to spectral point of view. Our theoretical spectral analysis is confirmed by experiments on various graph databases. Furthermore, we demonstrate the necessity of high and/or band-pass filters on a graph dataset, while the majority of GNN is limited to only low-pass and inevitably it fails.",\n}\n\n\n
\n
\n\n\n
\n In the recent literature on Graph Neural Networks (GNNs), the expressive power of models has been studied through their capability to distinguish whether two given graphs are isomorphic or not. Since the graph isomorphism problem is NP-intermediate, and the Weisfeiler-Lehman (WL) test can give sufficient but not conclusive evidence in polynomial time, the theoretical power of GNNs is usually evaluated by the equivalence of WL-test order, followed by an empirical analysis of the models on some reference inductive and transductive datasets. However, such analysis does not account for the signal-processing pipeline, whose capability is generally evaluated in the spectral domain. In this paper, we argue that a spectral analysis of GNN behavior can provide a complementary point of view to go one step further in the understanding of GNNs. By bridging the gap between the spectral and spatial design of graph convolutions, we theoretically demonstrate some equivalence of the graph convolution process regardless of whether it is designed in the spatial or the spectral domain. Using this connection, we reformulate most of the state-of-the-art graph neural networks into one common framework. This general framework allows us to conduct a spectral analysis of the most popular GNNs, explaining their performance and showing their limits from a spectral point of view. Our theoretical spectral analysis is confirmed by experiments on various graph databases. Furthermore, we demonstrate the necessity of high-pass and/or band-pass filters on a graph dataset, while the majority of GNNs are limited to low-pass filtering and inevitably fail.\n
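The low-pass limitation can be checked numerically on a toy graph: a GCN-style support $D^{-1/2} A D^{-1/2}$ has frequency response $1 - \lambda$ over the normalized-Laplacian eigenvalues $\lambda \in [0, 2]$, so it attenuates high frequencies. A small sketch (the graph is an arbitrary choice, and this is not the paper's code):

```python
import numpy as np

# a small connected, non-bipartite graph (triangle plus a pendant node)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)

d_inv_sqrt = np.diag(1.0 / np.sqrt(adj.sum(axis=1)))
sym_adj = d_inv_sqrt @ adj @ d_inv_sqrt         # GCN-style support (no self-loops)
lam = np.linalg.eigvalsh(np.eye(4) - sym_adj)   # normalized Laplacian spectrum
response = 1.0 - lam                            # frequency profile of the support
```

The response is largest at the lowest graph frequency and decays monotonically, i.e. the support acts as a low-pass filter, whereas distinguishing certain graph signals requires band-pass or high-pass behavior.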
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Graph Pre-image Method Based on Graph Edit Distances.\n \n \n \n \n\n\n \n Jia, L.; Gaüzère, B.; and Honeine, P.\n\n\n \n\n\n\n In Torsello, A.; Rossi, L.; Pelillo, M.; Biggio, B.; and Robles-Kelly, A., editor(s), Proceedings of the IAPR Joint International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (S+SSPR), pages 216–226, Venice, Italy, 21 - 22 January 2021. Springer International Publishing\n \n\n\n\n
\n\n\n\n \n \n \"A paper\n  \n \n \n \"A slides\n  \n \n \n \"A video\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{21.s+sspr.graphpreimage,\n   author =  "Linlin Jia and Benoît Gaüzère and Paul Honeine",\n   title =  "A Graph Pre-image Method Based on Graph Edit Distances",\n   booktitle =  "Proceedings of the IAPR Joint International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (S+SSPR)",\n   editor="Torsello, Andrea and Rossi, Luca and Pelillo, Marcello and Biggio, Battista and Robles-Kelly, Antonio",   \n   address =  "Venice, Italy",\n   year  =  "2021",\n   month =  "21 - 22~" # jan,\n   keywords  =  "machine learning, deep learning",\n   acronym =  "S+SSPR",\n   publisher="Springer International Publishing",\n   pages="216--226",\n   doi = "10.1007/978-3-030-73973-7_21",\n   url_paper  =  "http://honeine.fr/paul/publi/21.s+sspr.graphpreimage.pdf",\n   url_slides= "http://honeine.fr/paul/publi/21.s+sspr.graphpreimage.slides.pdf",\n   url_video="https://youtu.be/GnzBXM3L3pQ",\n   abstract = "The pre-image problem for graphs is increasingly attracting attention owing to many promising applications. However, it is a challenging problem due to the complexity of graph structure. In this paper, we propose a novel method to construct graph pre-images as median graphs, by aligning graph edit distances (GEDs) in the graph space with distances in the graph kernel space. The two metrics are aligned by optimizing the edit costs of GEDs according to the distances between the graphs within the space associated with a particular graph kernel. Then, the graph pre-image can be estimated using a median graph method founded on the GED. In particular, a recently introduced method to compute generalized median graphs with iterative alternate minimizations is revisited for this purpose. Conducted experiments show very promising results while opening the computation of graph pre-image to any graph kernel and to graphs with non-symbolic attributes.",\n   isbn="978-3-030-73973-7",\n}\n\n
\n
\n\n\n
\n The pre-image problem for graphs is increasingly attracting attention owing to many promising applications. However, it is a challenging problem due to the complexity of graph structures. In this paper, we propose a novel method to construct graph pre-images as median graphs, by aligning graph edit distances (GEDs) in the graph space with distances in the graph kernel space. The two metrics are aligned by optimizing the edit costs of the GED according to the distances between the graphs within the space associated with a particular graph kernel. Then, the graph pre-image can be estimated using a median graph method founded on the GED. In particular, a recently introduced method to compute generalized median graphs with iterative alternate minimizations is revisited for this purpose. The conducted experiments show very promising results, while opening the computation of graph pre-images to any graph kernel and to graphs with non-symbolic attributes.\n
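For intuition on the GED that underpins this method, a brute-force edit distance between tiny same-size unlabeled graphs (uniform unit costs, edge operations only) can be written directly; real GED solvers are far more sophisticated and also handle node insertions, deletions and attributes:

```python
import itertools
import numpy as np

def toy_ged(a, b):
    """Brute-force edit distance between two same-size unlabeled graphs:
    the minimum number of edge insertions/deletions over all node
    alignments (permutations of the second graph's nodes)."""
    n = len(a)
    best = float('inf')
    for perm in itertools.permutations(range(n)):
        pb = b[np.ix_(perm, perm)]
        cost = np.abs(a - pb).sum() / 2  # each undirected edge counted twice
        best = min(best, cost)
    return best

# a 3-node path graph and a triangle: one edge apart
path3 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
```

The factorial search over alignments is exactly why practical GED computation relies on approximations, and why edit costs (optimized in this paper's method) have such a direct effect on the resulting distances.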
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Metric Learning Approach to Graph Edit Costs for Regression.\n \n \n \n \n\n\n \n Jia, L.; Gaüzère, B.; Yger, F.; and Honeine, P.\n\n\n \n\n\n\n In Torsello, A.; Rossi, L.; Pelillo, M.; Biggio, B.; and Robles-Kelly, A., editor(s), Proceedings of the IAPR Joint International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (S+SSPR), pages 238–247, Venice, Italy, 21 - 22 January 2021. Springer International Publishing\n \n\n\n\n
\n\n\n\n \n \n \"A paper\n  \n \n \n \"A slides\n  \n \n \n \"A video\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{21.s+sspr.graphregression,\n   author =  "Linlin Jia and Benoît Gaüzère and Florian Yger and Paul Honeine",\n   title =  "A Metric Learning Approach to Graph Edit Costs for Regression",\n   booktitle =  "Proceedings of the IAPR Joint International Workshops on Statistical Techniques in Pattern Recognition (SPR) and Structural and Syntactic Pattern Recognition (S+SSPR)",\n   editor="Torsello, Andrea and Rossi, Luca and Pelillo, Marcello and Biggio, Battista and Robles-Kelly, Antonio",   \n   address =  "Venice, Italy",\n   year  =  "2021",\n   month =  "21 - 22~" # jan,\n   keywords  =  "machine learning, deep learning",\n   acronym =  "S+SSPR",\n   publisher="Springer International Publishing",\n   pages="238--247",\n   doi = "10.1007/978-3-030-73973-7_21",\n   url_paper  =  "http://honeine.fr/paul/publi/21.s+sspr.graphregression.pdf",\n   url_slides="http://honeine.fr/paul/publi/21.s+sspr.graphregression.slides.pdf",\n   url_video="https://youtu.be/-NGWQYLFa6k",\n   abstract = "Graph edit distance (GED) is a widely used dissimilarity measure between graphs. It is a natural metric for comparing graphs and respects the nature of the underlying space, and provides interpretability for operations on graphs. As a key ingredient of the GED, the choice of edit cost functions has a dramatic effect on the GED and therefore the classification or regression performances. In this paper, in the spirit of metric learning, we propose a strategy to optimize edit costs according to a particular prediction task, which avoids the use of predefined costs. An alternate iterative procedure is proposed to preserve the distances in both the underlying spaces, where the update on edit costs obtained by solving a constrained linear problem and a re-computation of the optimal edit paths according to the newly computed costs are performed alternately. 
Experiments show that regression using the optimized costs yields better performances compared to random or expert costs.",\n   isbn="978-3-030-73973-7",\n}\n\n\n\n
\n
\n\n\n
\n Graph edit distance (GED) is a widely used dissimilarity measure between graphs. It is a natural metric for comparing graphs and respects the nature of the underlying space, and provides interpretability for operations on graphs. As a key ingredient of the GED, the choice of edit cost functions has a dramatic effect on the GED and therefore the classification or regression performances. In this paper, in the spirit of metric learning, we propose a strategy to optimize edit costs according to a particular prediction task, which avoids the use of predefined costs. An alternate iterative procedure is proposed to preserve the distances in both the underlying spaces, where the update on edit costs obtained by solving a constrained linear problem and a re-computation of the optimal edit paths according to the newly computed costs are performed alternately. Experiments show that regression using the optimized costs yields better performances compared to random or expert costs.\n
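The alternating scheme described in this abstract can be illustrated with a minimal toy sketch (hypothetical numbers and operation types, not the authors' implementation): once the edit paths are fixed, each pairwise GED is linear in the vector of edit costs, so the cost-update step reduces to a constrained (non-negative) least-squares problem.

```python
import numpy as np
from scipy.optimize import nnls

# Toy setup (hypothetical): 4 graph pairs, 3 edit-operation types.
# C[i, k] = number of times operation k appears in the optimal edit
# path of pair i; with fixed paths, GED(i) = C[i] @ w is linear in w.
C = np.array([[2., 1., 0.],
              [1., 0., 3.],
              [0., 2., 2.],
              [3., 1., 1.]])

# Target distances derived from the prediction task (toy values).
d_target = np.array([4., 7., 6., 8.])

# Cost-update step: non-negative least squares keeps edit costs >= 0,
# mimicking the constrained linear problem of the alternating scheme.
w, residual = nnls(C, d_target)

# With the new costs w, the optimal edit paths would be re-computed
# and the two steps alternated until convergence (not shown here).
ged = C @ w
```

In the actual method the matrix of operation counts changes at every iteration because the optimal edit paths depend on the current costs; this sketch shows only one cost-update step.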
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hyperspectral imaging for the evaluation of lithology and the monitoring of hydrocarbons in environmental samples.\n \n \n \n \n\n\n \n Dhaini, M.; Roudaut, F.; Garret, A.; Arzur, R.; Chereau, A.; Varenne, F.; Honeine, P.; Mignot, M.; and Exem, A. V.\n\n\n \n\n\n\n In RemTech (International event on Remediation, Coasts, Floods, Climate, Seismic, Regeneration Industry), Ferrara, Italy, 20 - 24 September 2021. \n \n\n\n\n
\n\n\n\n \n \n \"Hyperspectral video\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@INPROCEEDINGS{21.remtech,\n  title={Hyperspectral imaging for the evaluation of lithology and the monitoring of hydrocarbons in environmental samples},\n  author="Mohamad Dhaini and François-Joseph Roudaut and Antonin Garret and Ronan Arzur and Audrey Chereau and Fanny Varenne and Paul Honeine and Mélanie Mignot and Antonin Van Exem",\n  booktitle={RemTech (International event on Remediation, Coasts, Floods, Climate, Seismic, Regeneration Industry)},\n   address =  "Ferrara, Italy",\n   year =  "2021",\n   month =  "20 - 24~" # sep,\n      url_video =  "https://www.linkedin.com/posts/marcofalconi_approach-characterization-hydrocarbons-activity-6864322911335981056-uBmv",\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n The Omniscape Dataset.\n \n \n \n\n\n \n Sekkat, A. R.; Dupuis, Y.; Honeine, P.; and Vasseur, P.\n\n\n \n\n\n\n Protected Database (Agence pour la Protection des Programmes), IDDN.FR.001.410001.000.S.P.2021.000.10300, October 2021.\n (Right Holder: Université de Rouen Normandie)\n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@misc{21.Omniscape,\n  author       = "Ahmed Rida Sekkat and Yohan Dupuis and Paul Honeine and Pascal Vasseur",\n  title        = "The Omniscape Dataset",\n  howpublished = "Protected Database (Agence pour la Protection des Programmes), IDDN.FR.001.410001.000.S.P.2021.000.10300",\n  month        = oct,\n  year         = "2021",\n  note         = "(Right Holder: Université de Rouen Normandie)",\n  annote       = ""\n}\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2020\n \n \n (13)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n BB-UNet: U-Net with Bounding Box Prior.\n \n \n \n \n\n\n \n El Jurdi, R.; Petitjean, C.; Honeine, P.; and Abdallah, F.\n\n\n \n\n\n\n IEEE Journal of Selected Topics in Signal Processing, 14(6): 1189-1198. October 2020.\n \n\n\n\n
\n\n\n\n \n \n \"BB-UNet: paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{20.bb_unet,\n   author =  "Rosana {El Jurdi} and Caroline Petitjean and Paul Honeine and Fahed Abdallah",\n   title =  {{BB-UNet: U-Net with Bounding Box Prior}},\n   journal =  "IEEE Journal of Selected Topics in Signal Processing",\n   year  =  "2020",\n   volume={14},\n   number={6},\n   pages={1189-1198},\n   ISSN={1941-0484}, \n   month=oct,\n   doi = "10.1109/JSTSP.2020.3001502",\n   url_paper   =  "http://honeine.fr/paul/publi/20.BB_Unet.pdf",\n    keywords={U-Net; shape prior; location prior; attention maps; weakly supervised segmentation; deep learning},\n   abstract={Medical image segmentation is the process of anatomically isolating organs for analysis and treatment. Leading works within this domain emerged with the well-known U-Net. Despite its success, recent works have shown the limitations of U-Net to conduct segmentation given image particularities such as noise, corruption or lack of contrast. Prior knowledge integration makes it possible to overcome segmentation ambiguities. This paper introduces BB-UNet (Bounding Box U-Net), a deep learning model that integrates location as well as shape priors into model training. The proposed model is inspired by U-Net and incorporates priors through a novel convolutional layer introduced at the level of skip connections. The proposed architecture helps in presenting attention kernels onto the neural training in order to guide the model on where to look for the organs. Moreover, it fine-tunes the encoder layers based on positional constraints. The proposed model is exploited within two main paradigms: as a solo model given a fully supervised framework and as an ancillary model, in a weakly supervised setting. In the current experiments, manual bounding boxes are fed at inference and as such BB-UNet is exploited in a semi-automatic setting; however, BB-UNet has the potential of being part of a fully automated process, if it relies on a preliminary step of object detection. 
\nTo validate the performance of the proposed model, experiments are conducted on two public datasets: the SegTHOR dataset which focuses on the segmentation of thoracic organs at risk in computed tomography (CT) images, and the Cardiac dataset which is a mono-modal MRI dataset released as part of the Decathlon challenge and dedicated to segmentation of the left atrium. Results show that the proposed method outperforms state-of-the-art methods in fully supervised learning frameworks and registers relevant results given the weakly supervised domain.}, \n}%Date of Publication: 10 June 2020\n\n\n
\n
\n\n\n
\n Medical image segmentation is the process of anatomically isolating organs for analysis and treatment. Leading works within this domain emerged with the well-known U-Net. Despite its success, recent works have shown the limitations of U-Net to conduct segmentation given image particularities such as noise, corruption or lack of contrast. Prior knowledge integration makes it possible to overcome segmentation ambiguities. This paper introduces BB-UNet (Bounding Box U-Net), a deep learning model that integrates location as well as shape priors into model training. The proposed model is inspired by U-Net and incorporates priors through a novel convolutional layer introduced at the level of skip connections. The proposed architecture helps in presenting attention kernels onto the neural training in order to guide the model on where to look for the organs. Moreover, it fine-tunes the encoder layers based on positional constraints. The proposed model is exploited within two main paradigms: as a solo model given a fully supervised framework and as an ancillary model, in a weakly supervised setting. In the current experiments, manual bounding boxes are fed at inference and as such BB-UNet is exploited in a semi-automatic setting; however, BB-UNet has the potential of being part of a fully automated process, if it relies on a preliminary step of object detection. To validate the performance of the proposed model, experiments are conducted on two public datasets: the SegTHOR dataset which focuses on the segmentation of thoracic organs at risk in computed tomography (CT) images, and the Cardiac dataset which is a mono-modal MRI dataset released as part of the Decathlon challenge and dedicated to segmentation of the left atrium. Results show that the proposed method outperforms state-of-the-art methods in fully supervised learning frameworks and registers relevant results given the weakly supervised domain.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interpretable time series kernel analytics by pre-image estimation.\n \n \n \n \n\n\n \n Tran Thi Phuong, T.; Douzal, A.; Yazdi, S. V.; Honeine, P.; and Gallinari, P.\n\n\n \n\n\n\n Artificial Intelligence, 286: 103342. September 2020.\n \n\n\n\n
\n\n\n\n \n \n \"Interpretable link\n  \n \n \n \"Interpretable paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{20.ts_preimage,\n   author =  "Thao {Tran Thi Phuong} and Ahlame Douzal and Saeed Varasteh Yazdi and Paul Honeine and Patrick Gallinari",\n   title =  {Interpretable time series kernel analytics by pre-image estimation},\n   journal =  "Artificial Intelligence",\n   pages = "103342",\n   year = "2020",\n   issn = "0004-3702",\n   volume={286},\n   month=sep,\n   doi = "10.1016/j.artint.2020.103342",\n   url_link = "http://www.sciencedirect.com/science/article/pii/S0004370220300989",\n   url_paper   =  "http://honeine.fr/paul/publi/20.ts_preimage.pdf",\n    keywords={Pre-image problem; Time series; kernel machinery; Time series averaging; kernel PCA; Dictionary Learning; Representation learning},\n   abstract={Kernel methods are known to be effective to analyse complex objects by implicitly embedding them into some feature space. To interpret and analyse the obtained results, it is often required to restore in the input space the results obtained in the feature space, by using pre-image estimation methods. This work proposes a new closed-form pre-image estimation method for time series kernel analytics that consists of two steps. In the first step, a time warp function, driven by distance constraints in the feature space, is defined to embed time series in a metric space where analytics can be performed conveniently. In the second step, the time series pre-image estimation is cast as learning a linear (or a nonlinear) transformation that ensures a local isometry between the time series embedding space and the feature space. The proposed method is compared to the state of the art through three major tasks that require pre-image estimation: 1) time series averaging, 2) time series reconstruction and denoising and 3) time series representation learning. The extensive experiments conducted on 33 publicly-available datasets show the benefits of the pre-image estimation for time series kernel analytics.},\n }   \n\n
\n
\n\n\n
\n Kernel methods are known to be effective to analyse complex objects by implicitly embedding them into some feature space. To interpret and analyse the obtained results, it is often required to restore in the input space the results obtained in the feature space, by using pre-image estimation methods. This work proposes a new closed-form pre-image estimation method for time series kernel analytics that consists of two steps. In the first step, a time warp function, driven by distance constraints in the feature space, is defined to embed time series in a metric space where analytics can be performed conveniently. In the second step, the time series pre-image estimation is cast as learning a linear (or a nonlinear) transformation that ensures a local isometry between the time series embedding space and the feature space. The proposed method is compared to the state of the art through three major tasks that require pre-image estimation: 1) time series averaging, 2) time series reconstruction and denoising and 3) time series representation learning. The extensive experiments conducted on 33 publicly-available datasets show the benefits of the pre-image estimation for time series kernel analytics.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n SimilCatch: Enhanced Social Spammers Detection on Twitter using Markov Random Fields.\n \n \n \n \n\n\n \n El-Mawass, N.; Honeine, P.; and Vercouter, L.\n\n\n \n\n\n\n Information Processing and Management, 57(6): 102317. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"SimilCatch: link\n  \n \n \n \"SimilCatch: paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{20.mrf,\n   author =  "Nour El-Mawass and Paul Honeine and Laurent Vercouter",\n   title =  {{SimilCatch}: Enhanced Social Spammers Detection on {T}witter using {M}arkov Random Fields},\n   journal =  "Information Processing and Management",\n   volume = "57",\n   number = "6",\n   pages = "102317",\n   year = "2020",\n   issn = "0306-4573",\n   doi = "10.1016/j.ipm.2020.102317",\n   url_link = "http://www.sciencedirect.com/science/article/pii/S0306457320308128",\n   url_paper   =  "http://honeine.fr/paul/publi/20.mrf.pdf",\n   keywords={Social Spam detection; Online Social Networks; Twitter; Supervised Learning; Markov Random Field; Cybersecurity},\n   abstract={The problem of social spam detection has traditionally been modeled as a supervised classification problem. Despite the initial success of this detection approach, later analysis of proposed systems and detection features has shown that, like email spam, the dynamic and adversarial nature of social spam makes the performance achieved by supervised systems hard to maintain. In this paper, we investigate the possibility of using the output of previously proposed supervised classification systems as a tool for spammer discovery. The hypothesis is that these systems are still highly capable of detecting spammers reliably even when their recall is far from perfect. We then propose to use the output of these classifiers as prior beliefs in a probabilistic graphical model framework. This framework allows beliefs to be propagated to similar social accounts. Basing similarity on a who-connects-to-whom network has been empirically critiqued in recent literature and we propose here an alternative definition based on a bipartite users-content interaction graph. For evaluation, we build a Markov Random Field on a graph of similar users and compute prior beliefs using a selection of state-of-the-art classifiers. We apply Loopy Belief Propagation to obtain posterior predictions on users. 
The proposed system is evaluated on a recent Twitter dataset that we collected and manually labeled. Classification results show a significant increase in recall and a maintained precision. This validates that formulating the detection problem with an undirected graphical model framework permits restoring the deteriorated performance of previously proposed statistical classifiers and effectively mitigating the effect of spam evolution.},\n } % Available online 29 June 2020.\n\n\n\n\n
\n
\n\n\n
\n The problem of social spam detection has traditionally been modeled as a supervised classification problem. Despite the initial success of this detection approach, later analysis of proposed systems and detection features has shown that, like email spam, the dynamic and adversarial nature of social spam makes the performance achieved by supervised systems hard to maintain. In this paper, we investigate the possibility of using the output of previously proposed supervised classification systems as a tool for spammer discovery. The hypothesis is that these systems are still highly capable of detecting spammers reliably even when their recall is far from perfect. We then propose to use the output of these classifiers as prior beliefs in a probabilistic graphical model framework. This framework allows beliefs to be propagated to similar social accounts. Basing similarity on a who-connects-to-whom network has been empirically critiqued in recent literature and we propose here an alternative definition based on a bipartite users-content interaction graph. For evaluation, we build a Markov Random Field on a graph of similar users and compute prior beliefs using a selection of state-of-the-art classifiers. We apply Loopy Belief Propagation to obtain posterior predictions on users. The proposed system is evaluated on a recent Twitter dataset that we collected and manually labeled. Classification results show a significant increase in recall and a maintained precision. This validates that formulating the detection problem with an undirected graphical model framework permits restoring the deteriorated performance of previously proposed statistical classifiers and effectively mitigating the effect of spam evolution.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fusion of Multiple Mobility and Observation Models for Indoor Zoning-based Sensor Tracking.\n \n \n \n \n\n\n \n AlShamaa, D.; Chehade, F.; Honeine, P.; and Chkeir, A.\n\n\n \n\n\n\n IEEE Transactions on Aerospace and Electronic Systems, 56(6): 4315-4326. December 2020.\n \n\n\n\n
\n\n\n\n \n \n \"Fusion paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{20.taes_fusion,\n   author =  "Daniel AlShamaa and Farah Chehade and Paul Honeine and Aly Chkeir",\n   title =  {Fusion of Multiple Mobility and Observation Models for Indoor Zoning-based Sensor Tracking},\n   journal =  "IEEE Transactions on Aerospace and Electronic Systems",\n   year  =  "2020",\n   volume={56},\n   number={6},\n   pages={4315-4326},\n   ISSN={0018-9251}, \n   month=dec,\n   doi = "10.1109/TAES.2020.2988837",\n   url_paper   =  "http://honeine.fr/paul/publi/20.taes_fusion.pdf",\n    keywords={Hidden Markov models; Target tracking; Mathematical model; Estimation; Acceleration; Adaptation models; Belief functions; Fusion of evidence; Hidden Markov models; Mobility; Tracking},\n   abstract={In this paper, we propose a novel zoning-based tracking technique that combines the sensors' mobility with a WiFi-based observation model in the belief functions framework to track the sensors in real time. The next possible destinations of the sensors are predicted, leading to a mobility model. The belief functions framework is used to propagate the previous step evidence till the current one. The mobility of the sensors, along with information from the network, are used to obtain an accurate estimation of their position. The contributions of this paper are two-fold. Firstly, it proposes new mobility models based on the transition between zones and hidden Markov models, to generate evidence on the zones of the sensors without the use of inertial measurement units. Secondly, it explores the fusion of evidence generated by the mobility models on one hand, and the observation model on the other hand. The efficiency of the proposed method is demonstrated through experiments conducted on real data in two experimental scenarios.}, \n}\n\n\n\n
\n
\n\n\n
\n In this paper, we propose a novel zoning-based tracking technique that combines the sensors' mobility with a WiFi-based observation model in the belief functions framework to track the sensors in real time. The next possible destinations of the sensors are predicted, leading to a mobility model. The belief functions framework is used to propagate the previous step evidence till the current one. The mobility of the sensors, along with information from the network, are used to obtain an accurate estimation of their position. The contributions of this paper are two-fold. Firstly, it proposes new mobility models based on the transition between zones and hidden Markov models, to generate evidence on the zones of the sensors without the use of inertial measurement units. Secondly, it explores the fusion of evidence generated by the mobility models on one hand, and the observation model on the other hand. The efficiency of the proposed method is demonstrated through experiments conducted on real data in two experimental scenarios.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Incoherent Dictionary Learning via Mixed-integer Programming and Hybrid Augmented Lagrangian.\n \n \n \n \n\n\n \n Liu, Y.; Canu, S.; Honeine, P.; and Ruan, S.\n\n\n \n\n\n\n Digital Signal Processing, 101: 102703. 2020.\n \n\n\n\n
\n\n\n\n \n \n \"Incoherent code\n  \n \n \n \"Incoherent paper\n  \n \n \n \"Incoherent link\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{20.miqp,\n   author =  "Yuan Liu and Stéphane Canu and Paul Honeine and Su Ruan",\n   title =  "Incoherent Dictionary Learning via Mixed-integer Programming and Hybrid Augmented {L}agrangian",\n   journal =  "Digital Signal Processing",\n   volume = "101",\n   pages = "102703",\n   year = "2020",\n   issn = "1051-2004",\n   url_code  =  "http://www.honeine.fr/paul/publi/codeMIQP.zip",\n   url_paper   =  "http://honeine.fr/paul/publi/20.miqp.pdf",\n   doi = "10.1016/j.dsp.2020.102703",\n   url_link =  "https://www.sciencedirect.com/science/article/abs/pii/S1051200420300488",\n   keywords={Dictionaries; Matching pursuit algorithms; Image coding; Task analysis; Machine learning; Encoding; Image denoising; Mixed-integer quadratic programming; sparse representation; sparse coding; dictionary learning; image denoising; K-SVD; Incoherent dictionary; Coherence measure; Augmented Lagrangian Method; Alternating Proximal Method; Dictionary learning; Mixed-integer quadratic programming (MIQP)},\n   abstract={During the past decade, dictionary learning has been a hot topic in sparse representation. With theoretical guarantees, a low-coherence dictionary is demonstrated to optimize the sparsity and improve the performance of signal reconstruction. Two strategies have been investigated to learn incoherent dictionaries: (i) by adding a decorrelation step after the dictionary updating (e.g. INK-SVD), or (ii) by introducing an additive penalty term of the mutual coherence to the general dictionary learning problem. In this paper, we propose a third method, which learns an incoherent dictionary by solving a constrained quadratic programming problem. Therefore, we can learn a dictionary with a prior fixed coherence value, which cannot be realized by the second strategy. 
Moreover, it updates the dictionary by considering simultaneously the reconstruction error and the incoherence, and thus does not suffer from the performance reduction of the first strategy.\nThe constrained quadratic programming problem is a difficult problem due to its non-smoothness and non-convexity. To deal with the problem, a two-step alternating method is used: sparse coding by solving a problem of mixed-integer programming and dictionary updating by the hybrid method of augmented Lagrangian and alternating proximal linearized minimization. Finally, extensive experiments conducted in image denoising demonstrate the relevance of the proposed method, and illustrate the relation between dictionary coherence and reconstruction quality.}, \n}\n\n\n\n
\n
\n\n\n
\n During the past decade, dictionary learning has been a hot topic in sparse representation. With theoretical guarantees, a low-coherence dictionary is demonstrated to optimize the sparsity and improve the performance of signal reconstruction. Two strategies have been investigated to learn incoherent dictionaries: (i) by adding a decorrelation step after the dictionary updating (e.g. INK-SVD), or (ii) by introducing an additive penalty term of the mutual coherence to the general dictionary learning problem. In this paper, we propose a third method, which learns an incoherent dictionary by solving a constrained quadratic programming problem. Therefore, we can learn a dictionary with a prior fixed coherence value, which cannot be realized by the second strategy. Moreover, it updates the dictionary by considering simultaneously the reconstruction error and the incoherence, and thus does not suffer from the performance reduction of the first strategy. The constrained quadratic programming problem is a difficult problem due to its non-smoothness and non-convexity. To deal with the problem, a two-step alternating method is used: sparse coding by solving a problem of mixed-integer programming and dictionary updating by the hybrid method of augmented Lagrangian and alternating proximal linearized minimization. Finally, extensive experiments conducted in image denoising demonstrate the relevance of the proposed method, and illustrate the relation between dictionary coherence and reconstruction quality.\n
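The coherence measure that this abstract's constrained formulation fixes a priori is the mutual coherence of the dictionary. A minimal sketch (hypothetical dictionary sizes, not the authors' code):

```python
import numpy as np

def mutual_coherence(D):
    """Mutual coherence: the largest absolute inner product between
    two distinct, l2-normalized dictionary atoms (columns of D)."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)      # Gram matrix of normalized atoms
    np.fill_diagonal(G, 0.0)   # discard trivial self-correlations
    return float(G.max())

# A random overcomplete dictionary (hypothetical sizes): 16-dim
# signals, 32 atoms. The constrained formulation in the paper would
# bound this value by a prescribed target during learning.
rng = np.random.default_rng(0)
D = rng.standard_normal((16, 32))
mu = mutual_coherence(D)
```

An orthonormal dictionary has coherence 0, while a dictionary containing a duplicated atom reaches the maximal value 1; the incoherent learning problem pushes this quantity down while keeping the reconstruction error small.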
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Evidential Framework for Localization of Sensors in Indoor Environments.\n \n \n \n \n\n\n \n AlShamaa, D.; Chehade, F.; Honeine, P.; and Chkeir, A.\n\n\n \n\n\n\n Sensors, 20(1): 318. January 2020.\n \n\n\n\n
\n\n\n\n \n \n \"An paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{20.wsn_belief,\n   author =  "Daniel AlShamaa and Farah Chehade and Paul Honeine and Aly Chkeir",\n   title =  {An Evidential Framework for Localization of Sensors in Indoor Environments},\n   journal =  "Sensors",\n   year  =  "2020",\n   volume={20},\n   number={1},\n   pages={318},\n   ISSN={1424-8220}, \n   month=Jan,\n   doi = "10.3390/s20010318",\n   url_paper   =  "http://honeine.fr/paul/publi/20.wsn_belief.pdf",\n   keywords={Decision-making; Evidence fusion; Localization; WiFi RSSI}, \n   abstract={Indoor localization has several applications ranging from people tracking and indoor navigation, to autonomous robot navigation and asset tracking. We tackle the problem as a zoning localization where the objective is to determine the zone where the mobile sensor resides at any instant. The decision-making process in localization systems relies on data coming from multiple sensors. The data retrieved from these sensors require robust fusion approaches to be processed. One of these approaches is the belief functions theory (BFT), also called the Dempster–Shafer theory. This theory deals with uncertainty and imprecision with a theoretically attractive evidential reasoning framework. This paper investigates the usage of the BFT to define an evidence framework for estimating the most probable sensor’s zone. Real experiments demonstrate the effectiveness of this approach and its competence compared to state-of-the-art methods.}, \n}\n\n
\n
\n\n\n
\n Indoor localization has several applications ranging from people tracking and indoor navigation, to autonomous robot navigation and asset tracking. We tackle the problem as a zoning localization where the objective is to determine the zone where the mobile sensor resides at any instant. The decision-making process in localization systems relies on data coming from multiple sensors. The data retrieved from these sensors require robust fusion approaches to be processed. One of these approaches is the belief functions theory (BFT), also called the Dempster–Shafer theory. This theory deals with uncertainty and imprecision with a theoretically attractive evidential reasoning framework. This paper investigates the usage of the BFT to define an evidence framework for estimating the most probable sensor’s zone. Real experiments demonstrate the effectiveness of this approach and its competence compared to state-of-the-art methods.\n
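The evidence-fusion step at the heart of the belief functions theory mentioned above is Dempster's rule of combination. A toy sketch (hypothetical zones and mass values, not the paper's data) combining two sources over a two-zone frame of discernment:

```python
def dempster_combine(m1, m2, focal_sets):
    """Combine two mass functions defined on the same list of focal
    sets (frozensets of zones) with Dempster's rule: multiply masses
    of intersecting sets and renormalize by the non-conflicting mass."""
    combined = {}
    conflict = 0.0
    for A, a in zip(focal_sets, m1):
        for B, b in zip(focal_sets, m2):
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b   # mass assigned to the empty set
    return {S: v / (1.0 - conflict) for S, v in combined.items()}

# Hypothetical example: two evidence sources over zones {z1, z2};
# the full frame theta = {z1, z2} encodes ignorance.
z1, z2 = frozenset({"z1"}), frozenset({"z2"})
theta = frozenset({"z1", "z2"})
sets = [z1, z2, theta]
m_source1 = [0.6, 0.1, 0.3]
m_source2 = [0.5, 0.2, 0.3]
m = dempster_combine(m_source1, m_source2, sets)
```

Here both sources lean towards z1, so the combined mass on z1 exceeds either prior belief; the decision-making step would then pick the zone maximizing a criterion such as the pignistic probability.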
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Investigating CoordConv for Fully and Weakly Supervised Medical Image Segmentation.\n \n \n \n \n\n\n \n El Jurdi, R.; Dargent, T.; Petitjean, C.; Honeine, P.; and Abdallah, F.\n\n\n \n\n\n\n In Proceedings of the 10th International Conference on Image Processing Theory, Tools and Applications (IPTA), Paris, France, 9 - 12 November 2020. \n \n\n\n\n
\n\n\n\n \n \n \"Investigating link\n  \n \n \n \"Investigating paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{20.ipta.coordconv,\n   author =  "Rosana {El Jurdi} and Thomas Dargent and Caroline Petitjean and Paul Honeine and Fahed Abdallah",\n   title =  "Investigating CoordConv for Fully and Weakly Supervised Medical Image Segmentation",\n   booktitle =  "Proceedings of the 10th International Conference on Image Processing Theory, Tools and Applications (IPTA)",\n   address =  "Paris, France",\n   year  =  "2020",\n   month =  "9 - 12~" # nov,\n   keywords  =  "machine learning, deep learning",\n   acronym =  "IPTA",\n   doi={10.1109/IPTA50016.2020.9286633},\n   url_link = "https://ieeexplore.ieee.org/document/9286633",\n   url_paper  =  "http://honeine.fr/paul/publi/20.ipta.coordconv.pdf",\n   keywords = "Image segmentation, Convolution, Computed tomography, Tools, Convolutional neural networks, Task analysis, Biomedical imaging, Image segmentation, Fully Convolutional Networks, CoordConv, Location Prior, Weakly Supervised Learning, MRI, CT",\n   abstract = "Convolutional neural networks (CNN) have established state-of-the-art performance in computer vision tasks such as object detection and segmentation. One of the major remaining challenges concerns their ability to capture consistent spatial attributes, especially in medical image segmentation. A way to address this issue is through integrating a localization prior into the system architecture. The CoordConv layers are extensions of convolutional neural networks wherein convolution is conditioned on spatial coordinates. This paper investigates CoordConv as a proficient substitute to convolutional layers for organ segmentation in both fully and weakly supervised settings. Experiments are conducted on two public datasets, SegTHOR, which focuses on the segmentation of thoracic organs at risk in computed tomography (CT) images, and ACDC, which addresses ventricular endocardium segmentation of the heart in MR images. 
We show that while CoordConv does not significantly increase the accuracy with respect to standard convolution, it may interestingly accelerate model convergence at almost no additional computational cost.",\n}\n\n\n\n
\n
\n\n\n
\n Convolutional neural networks (CNN) have established state-of-the-art performance in computer vision tasks such as object detection and segmentation. One of the major remaining challenges concerns their ability to capture consistent spatial attributes, especially in medical image segmentation. A way to address this issue is through integrating a localization prior into the system architecture. The CoordConv layers are extensions of convolutional neural networks wherein convolution is conditioned on spatial coordinates. This paper investigates CoordConv as a proficient substitute to convolutional layers for organ segmentation in both fully and weakly supervised settings. Experiments are conducted on two public datasets, SegTHOR, which focuses on the segmentation of thoracic organs at risk in computed tomography (CT) images, and ACDC, which addresses ventricular endocardium segmentation of the heart in MR images. We show that while CoordConv does not significantly increase the accuracy with respect to standard convolution, it may interestingly accelerate model convergence at almost no additional computational cost.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n When Spectral Domain Meets Spatial Domain in Graph Neural Networks.\n \n \n \n \n\n\n \n Balcilar, M.; Renton, G.; Héroux, P.; Gaüzère, B.; Adam, S.; and Honeine, P.\n\n\n \n\n\n\n In Proceedings of Thirty-seventh International Conference on Machine Learning (ICML 2020) - Workshop on Graph Representation Learning and Beyond (GRL+ 2020), Vienna, Austria, 12 - 18 July 2020. \n \n\n\n\n
\n\n\n\n \n \n \"When code\n  \n \n \n \"When paper\n  \n \n \n \"When presentation\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{20.icml.gnn,\n  TITLE = {When Spectral Domain Meets Spatial Domain in Graph Neural Networks},\n  AUTHOR = {Balcilar, Muhammet and Renton, Guillaume and H{\\'e}roux, Pierre and Ga{\\"u}z{\\`e}re, Beno{\\^i}t and Adam, S{\\'e}bastien and Honeine, Paul},\n   booktitle =  "Proceedings of Thirty-seventh International Conference on Machine Learning (ICML 2020) - Workshop on Graph Representation Learning and Beyond (GRL+ 2020)",\n   address =  "Vienna, Austria",\n   year  =  "2020",\n   month =  "12 - 18~" # jul,\n   acronym =  "ICML",\n   url_code  =  "https://github.com/balcilar/Spectral-Designed-Graph-Convolutions",\n   url_paper  =  "http://honeine.fr/paul/publi/20.icml.gnn.pdf",\n   url_presentation  =  "https://slideslive.com/38931507/when-spectral-domain-meets-spatial-domain-in-graph-neural-networks",\n   keywords  =  "Convolutional neural networks, graph neural networks, graph data, deep learning, ChebNet, CayleyNet, graph convolution networks, graph attention networks, spatial domain, spectral domain, spectral analysis, eigenanalysis",\n   abstract = "Convolutional Graph Neural Networks (ConvGNNs) are designed either in the spectral domain or in the spatial domain. In this paper, we provide a theoretical framework to analyze these neural networks, by deriving some equivalence of the graph convolution processes, regardless if they are designed in the spatial or the spectral domain. We demonstrate the relevance of the proposed framework by providing a spectral analysis of the most popular ConvGNNs (ChebNet, CayleyNet, GCN and Graph Attention Networks), which allows to explain their performance and shows their limits."\n}\n\n
\n
\n\n\n
\n Convolutional Graph Neural Networks (ConvGNNs) are designed either in the spectral domain or in the spatial domain. In this paper, we provide a theoretical framework to analyze these neural networks by deriving equivalences of the graph convolution processes, regardless of whether they are designed in the spatial or the spectral domain. We demonstrate the relevance of the proposed framework by providing a spectral analysis of the most popular ConvGNNs (ChebNet, CayleyNet, GCN and Graph Attention Networks), which explains their performance and shows their limits.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectral-designed Depthwise Separable Graph Neural Networks.\n \n \n \n \n\n\n \n Balcilar, M.; Renton, G.; Héroux, P.; Gaüzère, B.; Adam, S.; and Honeine, P.\n\n\n \n\n\n\n In Proceedings of Thirty-seventh International Conference on Machine Learning (ICML 2020) - Workshop on Graph Representation Learning and Beyond (GRL+ 2020), Vienna, Austria, 12 - 18 July 2020. \n \n\n\n\n
\n\n\n\n \n \n \"Spectral-designed code\n  \n \n \n \"Spectral-designed paper\n  \n \n \n \"Spectral-designed presentation\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{20.icml.depthwise,\n  TITLE = {Spectral-designed Depthwise Separable Graph Neural Networks},\n  AUTHOR = {Balcilar, Muhammet and Renton, Guillaume and H{\\'e}roux, Pierre and Ga{\\"u}z{\\`e}re, Beno{\\^i}t and Adam, S{\\'e}bastien and Honeine, Paul},\n   booktitle =  "Proceedings of Thirty-seventh International Conference on Machine Learning (ICML 2020) - Workshop on Graph Representation Learning and Beyond (GRL+ 2020)",\n   address =  "Vienna, Austria",\n   year  =  "2020",\n   month =  "12 - 18~" # jul,\n   acronym =  "ICML",\n   url_code  =  "https://github.com/balcilar/Spectral-Designed-Graph-Convolutions",\n   url_paper  =  "http://honeine.fr/paul/publi/20.icml.depthwise.pdf",\n   url_presentation  =  "https://slideslive.com/38931508/spectraldesigned-depthwise-separable-graph-neural-networks",\n   keywords  =  "Convolutional neural networks, graph neural networks, graph data, deep learning, depthwise separable, transductive learning, inductive learning",\n   abstract = "This paper aims at revisiting Convolutional Graph Neural Networks (ConvGNNs) by designing new graph convolutions in spectral domain with a custom frequency profile while applying them in the spatial domain. Within the proposed framework, we propose two ConvGNNs methods: one using a simple single-convolution kernel that operates as a low-pass filter, and one operating multiple convolution kernels called Depthwise Separable Graph Convolution Network (DSGCN). The latter is a generalization of the depthwise separable convolution framework for graph convolutional networks, which allows to decrease the total number of trainable parameters while keeping the capacity of the model unchanged. Our proposals are evaluated on both transductive and inductive graph learning problems, demonstrating that DSGCN outperforms the state-of-the-art methods."\n}\n\n\n\n
\n
\n\n\n
\n This paper revisits Convolutional Graph Neural Networks (ConvGNNs) by designing new graph convolutions in the spectral domain with a custom frequency profile while applying them in the spatial domain. Within the proposed framework, we propose two ConvGNN methods: one using a simple single-convolution kernel that operates as a low-pass filter, and one operating multiple convolution kernels, called Depthwise Separable Graph Convolution Network (DSGCN). The latter is a generalization of the depthwise separable convolution framework to graph convolutional networks, which decreases the total number of trainable parameters while keeping the capacity of the model unchanged. Our proposals are evaluated on both transductive and inductive graph learning problems, demonstrating that DSGCN outperforms state-of-the-art methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A comparative study of semantic segmentation using omnidirectional images.\n \n \n \n \n\n\n \n Sekkat, A. R.; Dupuis, Y.; Vasseur, P.; and Honeine, P.\n\n\n \n\n\n\n In Actes du Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFIAP 2020), Vannes, Bretagne, France, 23 - 26 June 2020. \n \n\n\n\n
\n\n\n\n \n \n \"A paper\n  \n \n \n \"A link\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{20.rfia.fisheye,\n   author =  "Ahmed Rida Sekkat and Yohan Dupuis and Pascal Vasseur and Paul Honeine",\n   title =  "A comparative study of semantic segmentation using omnidirectional images",\n   booktitle =  "Actes du Congrès Reconnaissance des Formes, Image, Apprentissage et Perception (RFIAP 2020)",\n   address =  "Vannes, Bretagne, France",\n   year  =  "2020",\n   month =  "23 - 26~" # jun,\n   url_paper  =  "http://honeine.fr/paul/publi/20.rfia.fisheye.pdf",\n   url_link = "https://cap-rfiap2020.sciencesconf.org/data/RFIAP_2020_paper_47.pdf",\n   keywords  =  "Omnidirectional, Equirectangular, Fisheye, Deep Convolutional Neural Networks, Semantic Segmentation, computer vision",\n   acronym =  "RFIA",\n   abstract = "The semantic segmentation of omnidirectional urban driving images is a research topic that has increasingly attracted the attention of researchers. This paper presents a thorough comparative study of different neural network models trained on four different representations: perspective, equirectangular, spherical and fisheye. We use in this study real perspective images, and synthetic perspective, fisheye and equirectangular images, as well as a test set of real fisheye images. We evaluate the performance of convolution on spherical images and perspective images. The conclusions obtained by analyzing the results of this study are multiple and help understanding how different networks learn to deal with omnidirectional distortions. Our main finding is that models trained on omnidirectional images are robust against modality changes and are able to learn a universal representation, giving good results in both perspective and omnidirectional images. The relevance of all results is examined with an analysis of quantitative measures.",\n}\n\n\n
\n
\n\n\n
\n The semantic segmentation of omnidirectional urban driving images is a research topic that has increasingly attracted the attention of researchers. This paper presents a thorough comparative study of different neural network models trained on four different representations: perspective, equirectangular, spherical and fisheye. In this study, we use real perspective images; synthetic perspective, fisheye and equirectangular images; as well as a test set of real fisheye images. We evaluate the performance of convolution on spherical images and perspective images. The conclusions drawn from the results of this study are manifold and help in understanding how different networks learn to deal with omnidirectional distortions. Our main finding is that models trained on omnidirectional images are robust against modality changes and are able to learn a universal representation, giving good results on both perspective and omnidirectional images. The relevance of all results is examined with an analysis of quantitative measures.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Pixel-wise linear/nonlinear nonnegative matrix factorization for unmixing of hyperspectral data.\n \n \n \n \n\n\n \n Zhu, F.; Honeine, P.; and Chen, J.\n\n\n \n\n\n\n In Proc. 45th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4737-4741, Barcelona, Spain, 4 - 8 May 2020. \n \n\n\n\n
\n\n\n\n \n \n \"Pixel-wise link\n  \n \n \n \"Pixel-wise paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{20.icassp,\n   author =  "Fei Zhu and Paul Honeine and Jie Chen",\n   title =  "Pixel-wise linear/nonlinear nonnegative matrix factorization for unmixing of hyperspectral data",\n   booktitle =  "Proc. 45th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",\n   address =  "Barcelona, Spain",\n   month =  "4 - 8~" # may,\n   year =  "2020",\n   acronym  =  "ICASSP",\n   pages={4737-4741},\n   doi={10.1109/ICASSP40776.2020.9053239},\n    url_link= "https://ieeexplore.ieee.org/document/9053239",\n    url_paper   =  "http://honeine.fr/paul/publi/20.icassp.hype.pdf",\n    abstract={Nonlinear spectral unmixing is a challenging and important task in hyperspectral image analysis. The kernel-based bi-objective non-negative matrix factorization (Bi-NMF) has shown its usefulness in nonlinear unmixing; However, it suffers several issues that prohibit its practical application. In this work, we propose an unsupervised nonlinear unmixing method that overcomes these weaknesses. Specifically, the new method introduces into each pixel a parameter that adjusts the nonlinearity therein. These parameters are jointly optimized with endmembers and abundances, using a carefully designed objective function by multiplicative update rules. Experiments on synthetic and real datasets confirm the effectiveness of the proposed method.}, \n       keywords={geophysical image processing, hyperspectral imaging, matrix decomposition, hyperspectral data, nonlinear spectral unmixing, hyperspectral image analysis, Bi-NMF, unsupervised nonlinear unmixing method, nonlinearity, objective function, pixel-wise linear nonnegative matrix factorization, pixel-wise nonlinear nonnegative matrix factorization, kernel-based bi-objective nonnegative matrix factorization, multiplicative update rules, Hyperspectral data analysis, nonlinear unmixing, unsupervised learning, kernel methods, nonnegative matrix factorization}, \n}\n\n\n
\n
\n\n\n
\n Nonlinear spectral unmixing is a challenging and important task in hyperspectral image analysis. The kernel-based bi-objective nonnegative matrix factorization (Bi-NMF) has shown its usefulness in nonlinear unmixing; however, it suffers from several issues that prohibit its practical application. In this work, we propose an unsupervised nonlinear unmixing method that overcomes these weaknesses. Specifically, the new method introduces into each pixel a parameter that adjusts the nonlinearity therein. These parameters are jointly optimized with endmembers and abundances, using a carefully designed objective function solved by multiplicative update rules. Experiments on synthetic and real datasets confirm the effectiveness of the proposed method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The OmniScape Dataset.\n \n \n \n \n\n\n \n Sekkat, A. R.; Dupuis, Y.; Vasseur, P.; and Honeine, P.\n\n\n \n\n\n\n In International Conference on Robotics and Automation (ICRA), pages 1603-1608, Paris, France, 31 May–4 June 2020. \n \n\n\n\n
\n\n\n\n \n \n \"The paper\n  \n \n \n \"The link\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{20.icra,\n   author =  "Ahmed Rida Sekkat and Yohan Dupuis and Pascal Vasseur and Paul Honeine",\n   title =  "The {OmniScape} Dataset",\n   booktitle =  "International Conference on Robotics and Automation (ICRA)",\n   address =  "Paris, France",\n   year  =  "2020",\n   month =  "31~" # may # "--" # "4~" # jun,\n   url_paper  =  "http://honeine.fr/paul/publi/20.icra.pdf",\n   url_link  =  "https://ieeexplore.ieee.org/document/9197144",\n   pages={1603-1608},\n   doi={10.1109/ICRA40945.2020.9197144},\n   keywords  =  "machine learning, computer vision, deep learning, omnidirectional images, image processing, virtual environment, simulator, fisheye images, catadioptric images, semantic segmentation, depth map, motocycle",\n   acronym =  "ICRA",\n   abstract = "Despite the utility and benefits of omnidirectional images in robotics and automotive applications, there are no dataset of omnidirectional images available with semantic segmentation, depth map, and dynamic properties. This is due to the time cost and human effort required to annotate ground truth images. This paper presents a framework for generating omnidirectional images using images that are acquired from a virtual environment. For this purpose, we demonstrate the relevance of the proposed framework on two well-known simulators: CARLA simulator, which is an open-source simulator for autonomous driving research, and Grand Theft Auto V (GTA V), which is a very high quality video game. We explain in details the generated OmniScape dataset, which includes stereo fisheye and catadioptric images acquired from the two front sides of a motorcycle, including semantic segmentation, depth map, intrinsic parameters of the cameras and the dynamic parameters of the motorcycle. It is worth noting that the case of two-wheeled vehicles is more challenging than cars due to the specific dynamic of these vehicles.",\n}\n\n\n\n
\n
\n\n\n
\n Despite the utility and benefits of omnidirectional images in robotics and automotive applications, no dataset of omnidirectional images is available with semantic segmentation, depth maps, and dynamic properties. This is due to the time cost and human effort required to annotate ground truth images. This paper presents a framework for generating omnidirectional images using images acquired from a virtual environment. For this purpose, we demonstrate the relevance of the proposed framework on two well-known simulators: the CARLA simulator, an open-source simulator for autonomous driving research, and Grand Theft Auto V (GTA V), a very high-quality video game. We explain in detail the generated OmniScape dataset, which includes stereo fisheye and catadioptric images acquired from the two front sides of a motorcycle, together with semantic segmentation, depth maps, intrinsic parameters of the cameras and dynamic parameters of the motorcycle. It is worth noting that the case of two-wheeled vehicles is more challenging than that of cars due to the specific dynamics of these vehicles.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bridging the Gap Between Spectral and Spatial Domains in Graph Neural Networks.\n \n \n \n \n\n\n \n Balcilar, M.; Renton, G.; Héroux, P.; Gaüzère, B.; Adam, S.; and Honeine, P.\n\n\n \n\n\n\n Technical Report HAL Normandie Université, March 2020.\n \n\n\n\n
\n\n\n\n \n \n \"Bridging code\n  \n \n \n \"Bridging link\n  \n \n \n \"Bridging paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@TECHREPORT{20.gnn,\n  title = {{Bridging the Gap Between Spectral and Spatial Domains in Graph Neural Networks}},\n  author = {Balcilar, Muhammet and Renton, Guillaume and H{\\'e}roux, Pierre and Ga{\\"u}z{\\`e}re, Beno{\\^i}t and Adam, S{\\'e}bastien and Honeine, Paul},\n  institution = {HAL Normandie Université},\n  year = {2020},\n  month = Mar,\n  url_code  =  "https://github.com/balcilar/Spectral-Designed-Graph-Convolutions",\n  url_link = {https://hal-normandie-univ.archives-ouvertes.fr/hal-02515637},\n  url_paper = {https://hal-normandie-univ.archives-ouvertes.fr/hal-02515637/file/DSGCN.pdf},\n  HAL_ID = {hal-02515637},\n  HAL_VERSION = {v1},\n  keywords  =  "Convolutional neural networks, graph neural networks, graph data, deep learning, ChebNet, CayleyNet, graph convolution networks, graph attention networks, spatial domain, spectral domain, spectral analysis, eigenanalysis, depthwise separable, transductive learning, inductive learning",\n  abstract = "This paper aims at revisiting Graph Convolutional Neural Networks by bridging the gap between spectral and spatial design of graph convolutions. We theoretically demonstrate some equivalence of the graph convolution process regardless it is designed in the spatial or the spectral domain. The obtained general framework allows to lead a spectral analysis of the most popular ConvGNNs, explaining their performance and showing their limits. Moreover, the proposed framework is used to design new convolutions in spectral domain with a custom frequency profile while applying them in the spatial domain. We also propose a generalization of the depthwise separable convolution framework for graph convolutional networks, what allows to decrease the total number of trainable parameters by keeping the capacity of the model. To the best of our knowledge, such a framework has never been used in the GNNs literature. Our proposals are evaluated on both transductive and inductive graph learning problems. 
Obtained results show the relevance of the proposed method and provide one of the first experimental evidence of transferability of spectral filter coefficients from one graph to another.",\n}\n\n
\n
\n\n\n
\n This paper revisits Graph Convolutional Neural Networks by bridging the gap between the spectral and spatial design of graph convolutions. We theoretically demonstrate some equivalence of the graph convolution process, regardless of whether it is designed in the spatial or the spectral domain. The obtained general framework allows us to conduct a spectral analysis of the most popular ConvGNNs, explaining their performance and showing their limits. Moreover, the proposed framework is used to design new convolutions in the spectral domain with a custom frequency profile while applying them in the spatial domain. We also propose a generalization of the depthwise separable convolution framework to graph convolutional networks, which decreases the total number of trainable parameters while keeping the capacity of the model. To the best of our knowledge, such a framework has never been used in the GNNs literature. Our proposals are evaluated on both transductive and inductive graph learning problems. The obtained results show the relevance of the proposed method and provide one of the first experimental demonstrations of the transferability of spectral filter coefficients from one graph to another.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2019\n \n \n (8)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Decentralized Kernel-based Localization in Wireless Sensor Networks Using Belief Functions.\n \n \n \n \n\n\n \n AlShamaa, D.; Chehade, F.; and Honeine, P.\n\n\n \n\n\n\n IEEE Sensors Journal, 19(11): 4149-4159. June 2019.\n \n\n\n\n
\n\n\n\n \n \n \"Decentralized paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{19.wsn_belief,\n   author =  "Daniel AlShamaa and Farah Chehade and Paul Honeine",\n   title =  {Decentralized Kernel-based Localization in Wireless Sensor Networks Using Belief Functions},\n   journal =  "IEEE Sensors Journal",\n   year  =  "2019",\n   volume={19},\n   number={11}, \n   pages={4149-4159},\n   ISSN={1530-437X}, \n   month= jun,\n   doi = "10.1109/JSEN.2019.2898106",\n   url_paper   =  "http://honeine.fr/paul/publi/19.wsn_belief.pdf",\n   keywords={Sensors;Calculators;Topology;Databases;Wireless fidelity;Network topology;Wireless sensor networks;Belief functions;decentralized data fusion;fingerprints;kernel density estimation;localization}, \n   abstract={Localization of sensors has become an essential issue in wireless networks. This paper presents a decentralized approach to localize sensors in indoor environments. The targeted area is partitioned into several sectors, each of which having a local calculator capable of emitting, receiving, and processing data. Each calculator runs a local localization algorithm, developed in a belief functions framework, using RSS fingerprinting database, to estimate the sensors zones. The fusion of all calculators estimates yields a final zone estimate. Various decentralized architectures are described, then compared with each other, and against the state-of-the-art. The experimental results using WiFi real measurements show the effectiveness of the proposed approach in terms of localization accuracy, processing time, and complexity.}, \n}\n\n\n
\n
\n\n\n
\n Localization of sensors has become an essential issue in wireless networks. This paper presents a decentralized approach to localize sensors in indoor environments. The targeted area is partitioned into several sectors, each of which has a local calculator capable of emitting, receiving, and processing data. Each calculator runs a local localization algorithm, developed in a belief functions framework, using an RSS fingerprinting database, to estimate the sensors' zones. The fusion of all calculators' estimates yields a final zone estimate. Various decentralized architectures are described, then compared with each other and against the state-of-the-art. The experimental results using real WiFi measurements show the effectiveness of the proposed approach in terms of localization accuracy, processing time, and complexity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Mixed Integer Programming for Sparse Coding: Application to Image Denoising.\n \n \n \n \n\n\n \n Liu, Y.; Canu, S.; Honeine, P.; and Ruan, S.\n\n\n \n\n\n\n IEEE Transactions on Computational Imaging, 5(3): 354-365. September 2019.\n \n\n\n\n
\n\n\n\n \n \n \"Mixed link\n  \n \n \n \"Mixed paper\n  \n \n \n \"Mixed code\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@ARTICLE{19.miqp,\n   author =  "Yuan Liu and Stéphane Canu and Paul Honeine and Su Ruan",\n   title =  "Mixed Integer Programming for Sparse Coding: Application to Image Denoising",\n   journal =  "IEEE Transactions on Computational Imaging",\n   year  =  "2019",\n   volume =  "5",\n   number = "3",\n   pages =  "354-365",\n   month =  sep,\n   doi = "10.1109/TCI.2019.2896790",\n   url_link =  "http://dx.doi.org/10.1109/TCI.2019.2896790",\n   url_paper   =  "http://honeine.fr/paul/publi/19.miqp.pdf",\n   url_code  =  "http://www.honeine.fr/paul/publi/codeMIQP.zip",\n   keywords={Dictionaries; Matching pursuit algorithms; Image coding; Task analysis; Machine learning; Encoding; Image denoising; Mixed-integer quadratic programming; sparse representation; sparse coding; dictionary learning; image denoising; K-SVD},\n   abstract={Dictionary learning for sparse representations is generally conducted in two alternating steps: sparse coding and dictionary updating. In this paper, a new approach to solve the sparse coding step is proposed. Because this step involves an L0-norm, most, if not all existing solutions only provide a local or approximate solution. Instead, a real L0 optimization is considered for the sparse coding problem providing a global solution. The proposed method reformulates the optimization problem as a Mixed-Integer Quadratic Program (MIQP), allowing then to obtain the global optimal solution by using an off-the-shelf optimization software. Because computing time is the main disadvantage of this approach,  two techniques are proposed to improve its computational speed. One is to add suitable constraints and the other to use an appropriate initialization. The results obtained on an image denoising task demonstrate the feasibility of the MIQP approach for processing real images while achieving good performance compared to the most advanced methods.}, \n}\n\n\n\n\n
\n
\n\n\n
\n Dictionary learning for sparse representations is generally conducted in two alternating steps: sparse coding and dictionary updating. In this paper, a new approach to solve the sparse coding step is proposed. Because this step involves an L0-norm, most, if not all, existing solutions only provide a local or approximate solution. Instead, an exact L0 optimization is considered for the sparse coding problem, providing a global solution. The proposed method reformulates the optimization problem as a Mixed-Integer Quadratic Program (MIQP), which then allows the globally optimal solution to be obtained using off-the-shelf optimization software. Because computing time is the main disadvantage of this approach, two techniques are proposed to improve its computational speed. One is to add suitable constraints and the other is to use an appropriate initialization. The results obtained on an image denoising task demonstrate the feasibility of the MIQP approach for processing real images while achieving good performance compared to the most advanced methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multiple Instance Learning for Histopathological Breast Cancer Image Classification.\n \n \n \n \n\n\n \n Sudharshan, P J; Petitjean, C.; Spanhol, F.; Oliveira, L.; Heutte, L.; and Honeine, P.\n\n\n \n\n\n\n Expert Systems With Applications, 117: 103-111. March 2019.\n \n\n\n\n
\n\n\n\n \n \n \"Multiple link\n  \n \n \n \"Multiple paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{19.mil,\n   author =  "P J Sudharshan and Caroline Petitjean and Fabio Spanhol and Luis Oliveira and Laurent Heutte and Paul Honeine",\n   title =  {Multiple Instance Learning for Histopathological Breast Cancer Image Classification},\n   journal =  "Expert Systems With Applications",\n   year  =  "2019",\n   volume =  "117",\n   pages =  "103-111",\n   month =  mar,\n   url_link= "https://www.sciencedirect.com/science/article/pii/S0957417418306262",\n   doi = "10.1016/j.eswa.2018.09.049",\n   url_paper =  "http://honeine.fr/paul/publi/19.mil.pdf",\n   keywords =  "machine learning, sparsity, deep neural networks, biomedical image processing, breast cancer, histopathology, image classification, multiple instance learning",\n   abstract = "Histopathological images are the gold standard for breast cancer diagnosis. During examination several dozens of them are acquired for a single patient. Conventional, image-based classification systems make the assumption that all the patient's images have the same label as the patient, which is rarely verified in practice since labeling the data is expensive. We propose a weakly supervised learning framework and investigate the relevance of Multiple Instance Learning (MIL) for computer-aided diagnosis of breast cancer patients, based on the analysis of histopathological images. Multiple instance learning consists in organizing instances (images) into bags (patients), without the need to label all the instances. We compare several state-of-the-art MIL methods including the pioneering ones (APR, Diverse Density, MI-SVM, citation-kNN), and more recent ones such as a non parametric method and a deep learning based approach (MIL-CNN). The experiments are conducted on the public BreaKHis dataset which contains about 8000 microscopic biopsy images of benign and malignant breast tumors, originating from 82 patients. 
Among the MIL methods the non-parametric approach has the best overall results, and in some cases allows to obtain classification rates never reached by conventional (single instance) classification frameworks. The comparison between MIL and single instance classification reveals the relevance of the MIL paradigm for the task at hand. In particular, the MIL allows to obtain comparable or better results than conventional (single instance) classification without the need to label all the images.",\n}\n\n\n
\n
\n\n\n
\n Histopathological images are the gold standard for breast cancer diagnosis. During examination, several dozens of them are acquired for a single patient. Conventional image-based classification systems assume that all the patient's images have the same label as the patient, which is rarely verified in practice since labeling the data is expensive. We propose a weakly supervised learning framework and investigate the relevance of Multiple Instance Learning (MIL) for computer-aided diagnosis of breast cancer patients, based on the analysis of histopathological images. Multiple instance learning consists in organizing instances (images) into bags (patients), without the need to label all the instances. We compare several state-of-the-art MIL methods, including the pioneering ones (APR, Diverse Density, MI-SVM, citation-kNN) and more recent ones such as a non-parametric method and a deep learning based approach (MIL-CNN). The experiments are conducted on the public BreaKHis dataset, which contains about 8000 microscopic biopsy images of benign and malignant breast tumors, originating from 82 patients. Among the MIL methods, the non-parametric approach has the best overall results, and in some cases achieves classification rates never reached by conventional (single instance) classification frameworks. The comparison between MIL and single instance classification reveals the relevance of the MIL paradigm for the task at hand. In particular, MIL obtains comparable or better results than conventional (single instance) classification without the need to label all the images.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Performance Evaluation of State-of-the-art Filtering Criteria Applied to SIFT Features.\n \n \n \n \n\n\n \n Konlambigue, S.; Pothin, J.; Honeine, P.; and Bensrhair, A.\n\n\n \n\n\n\n In Proc. 19th IEEE International Symposium on Signal Processing and Information Technology (ISSPIT), Ajman, United Arab Emirates, 10 - 12 December 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Performance paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{19.isspit.vision,\n   author =  "Silvère Konlambigue and Jean-Baptiste Pothin and Paul Honeine and Abdelaziz Bensrhair",\n   title =  "Performance Evaluation of State-of-the-art Filtering Criteria Applied to {SIFT} Features",\n   booktitle =  "Proc. 19th IEEE International Symposium on Signal Processing and Information Technology (ISSPIT)",\n   address =  "Ajman, United Arab Emirates",\n   year  =  "2019",\n   month =  "10 - 12~" # dec,\n   keywords =  "machine learning, computer vision, SIFT, matching, ratio criterion",\n   acronym =  "ISSPIT",\n   url_paper  =  "http://honeine.fr/paul/publi/19.isspit.vision.pdf",\n   abstract = "Unlike the matching strategy of minimizing a dissimilarity measure between descriptors, Lowe, when introducing the SIFT method, suggested a more effective matching strategy based on the ratio between the distances to the nearest and the second nearest neighbor, which leads to excellent matching accuracy. Unlike these strategies, which rely on a deterministic formalism, some researchers have recently opted for a statistical analysis of the matching process. The cornerstone of this formalism exploits the Markov inequality, and the ratio criterion has been interpreted as an upper bound on the probability that a match does not belong to the background distribution. In this paper, we first examine some of the assumptions and methods used in these works and demonstrate their inconsistencies. We then propose improvements by refining the bound, providing a tighter bound on that probability. Since the ratio criterion is an upper bound, refining the bound reduces the probability that the established matches come from the background. Experiments on the well-known Oxford-5k and Paris-6k datasets show performance improvement for the image retrieval application."\n}\n%   pages    =  "400-404",\n%   url_link= "https://ieeexplore.ieee.org/document/8553321",\n%   doi = "10.23919/EUSIPCO.2018.8553321", \n\n\n\n
\n
\n\n\n
\n Unlike the matching strategy of minimizing a dissimilarity measure between descriptors, Lowe, when introducing the SIFT method, suggested a more effective matching strategy based on the ratio between the distances to the nearest and the second nearest neighbor, which leads to excellent matching accuracy. Unlike these strategies, which rely on a deterministic formalism, some researchers have recently opted for a statistical analysis of the matching process. The cornerstone of this formalism exploits the Markov inequality, and the ratio criterion has been interpreted as an upper bound on the probability that a match does not belong to the background distribution. In this paper, we first examine some of the assumptions and methods used in these works and demonstrate their inconsistencies. We then propose improvements by refining the bound, providing a tighter bound on that probability. Since the ratio criterion is an upper bound, refining the bound reduces the probability that the established matches come from the background. Experiments on the well-known Oxford-5k and Paris-6k datasets show performance improvement for the image retrieval application.\n
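For reference, Lowe's ratio criterion discussed in this abstract can be sketched in a few lines. The 0.8 threshold is a commonly used default (not taken from this paper), and the brute-force nearest-neighbour search is an illustrative simplification:

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    """Keep a match only if the nearest neighbour in desc_b is sufficiently
    closer than the second nearest -- Lowe's ratio criterion."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)   # distances to all candidates
        j1, j2 = np.argsort(dists)[:2]               # nearest and second nearest
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches
```

The statistical reading examined in the paper interprets this ratio, via the Markov inequality, as an upper bound on the probability that the match comes from the background distribution.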
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Génération d'images omnidirectionnelles à partir d'un environnement virtuel.\n \n \n \n \n\n\n \n Sekkat, A. R.; Dupuis, Y.; Vasseur, P.; and Honeine, P.\n\n\n \n\n\n\n In Actes du 27-ème Colloque GRETSI sur le Traitement du Signal et des Images, Lille, France, 26 - 29 August 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Génération paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{19.gretsi.fisheye,\n   author =  "Ahmed Rida Sekkat and Yohan Dupuis and Pascal Vasseur and Paul Honeine",\n   title =  "Génération d'images omnidirectionnelles à partir d'un environnement virtuel",\n   booktitle =  "Actes du 27-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Lille, France",\n   year  =  "2019",\n   month =  "26 - 29~" # aug,\n   keywords  =  "machine learning, computer vision, deep learning",\n   acronym =  "GRETSI'19",\n   url_paper  =  "http://honeine.fr/paul/publi/19.gretsi.fisheye.pdf",\n   abstract = "Dans cet article, nous décrivons une méthode pour générer des images omnidirectionnelles en utilisant des images cubemap et les cartes de profondeur correspondantes, à partir d’un environnement virtuel. Pour l’acquisition, on utilise le jeu vidéo Grand Theft Auto V (GTA V). GTA V a été utilisé comme source de données dans plusieurs travaux de recherche, puisque c’est un jeu à monde ouvert, hyperréaliste, simulant une vraie ville. L’avancée réalisée dans l’ingénierie inverse de ce jeu nous offre la possibilité d’extraire des images et les cartes de profondeur correspondantes avec des caméras virtuelles à six degrés de liberté. A partir de ces données et d’un modèle de caméra omnidirectionnelle, on propose de générer des images Fisheye destinées par exemple à l’entraînement de méthodes par apprentissage.",\n}\n\n\n
\n
\n\n\n
\n In this paper, we describe a method for generating omnidirectional images from a virtual environment, using cubemap images and the corresponding depth maps. For acquisition, we use the video game Grand Theft Auto V (GTA V). GTA V has been used as a data source in several research works, as it is a hyper-realistic, open-world game simulating a real city. Advances in the reverse engineering of this game make it possible to extract images and the corresponding depth maps with virtual cameras having six degrees of freedom. From these data and an omnidirectional camera model, we propose to generate fisheye images intended, for instance, for training learning-based methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Organ Segmentation in CT Images With Weak Annotations: A Preliminary Study.\n \n \n \n \n\n\n \n El Jurdi, R.; Petitjean, C.; Honeine, P.; and Abdallah, F.\n\n\n \n\n\n\n In Actes du 27-ème Colloque GRETSI sur le Traitement du Signal et des Images, Lille, France, 26 - 29 August 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Organ paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{19.gretsi.weak,\n   author =  "Rosana {El Jurdi} and Caroline Petitjean and Paul Honeine and Fahed Abdallah",\n   title =  "Organ Segmentation in {CT} Images With Weak Annotations: A Preliminary Study",\n   booktitle =  "Actes du 27-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Lille, France",\n   year  =  "2019",\n   month =  "26 - 29~" # aug,\n   keywords  =  "machine learning, deep learning",\n   acronym =  "GRETSI'19",\n   url_paper  =  "http://honeine.fr/paul/publi/19.gretsi.weak.pdf",\n   abstract = "Medical image segmentation presents unprecedented challenges compared to natural image segmentation, in particular because of the scarcity of annotated datasets. Of particular interest is the ongoing 2019 SegTHOR competition, which consists in Segmenting THoracic Organs at Risk in CT images. While the fully supervised framework (i.e., pixel-level annotation) is considered in this competition, this paper seeks to push the competition forward to a new paradigm: weakly supervised segmentation, namely training with only bounding boxes that enclose the organs. After a pre-processing step, the proposed method applies the GrabCut algorithm in order to transform the images into pixel-level annotated ones. A deep neural network is then trained on the medical images, where several segmentation loss functions are examined. Experiments show the relevance of the proposed method, providing results comparable to those of the ongoing fully supervised segmentation competition.",\n}\n\n
\n
\n\n\n
\n Medical image segmentation presents unprecedented challenges compared to natural image segmentation, in particular because of the scarcity of annotated datasets. Of particular interest is the ongoing 2019 SegTHOR competition, which consists in Segmenting THoracic Organs at Risk in CT images. While the fully supervised framework (i.e., pixel-level annotation) is considered in this competition, this paper seeks to push the competition forward to a new paradigm: weakly supervised segmentation, namely training with only bounding boxes that enclose the organs. After a pre-processing step, the proposed method applies the GrabCut algorithm in order to transform the images into pixel-level annotated ones. A deep neural network is then trained on the medical images, where several segmentation loss functions are examined. Experiments show the relevance of the proposed method, providing results comparable to those of the ongoing fully supervised segmentation competition.\n
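The weak-annotation setup in this abstract starts from a GrabCut-style trimap initialised from the bounding box: pixels outside the box are certain background, pixels inside are only probable foreground to be refined. A minimal sketch of that initialisation step (the label values follow OpenCV's GrabCut convention, which is an assumption about the toolchain, not stated in the abstract):

```python
import numpy as np

# OpenCV GrabCut label convention (assumed here):
# 0 = certain background, 2 = probable background, 3 = probable foreground.
BGD, PR_BGD, PR_FGD = 0, 2, 3

def trimap_from_box(shape, box):
    """Turn a weak bounding-box annotation (y0, x0, y1, x1) into an initial
    trimap: certain background outside, probable foreground inside."""
    y0, x0, y1, x1 = box
    mask = np.full(shape, BGD, dtype=np.uint8)
    mask[y0:y1, x0:x1] = PR_FGD
    return mask
```

GrabCut would then iteratively refine the probable-foreground region inside the box to yield the pixel-level annotation used for training.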
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Apprentissage de dictionnaire faiblement cohérent par programmation quadratique mixte.\n \n \n \n \n\n\n \n Liu, Y.; Canu, S.; Honeine, P.; and Ruan, S.\n\n\n \n\n\n\n In Actes du 27-ème Colloque GRETSI sur le Traitement du Signal et des Images, Lille, France, 26 - 29 August 2019. \n \n\n\n\n
\n\n\n\n \n \n \"Apprentissage paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{19.gretsi.dictionnaire,\n   author =  "Yuan Liu and Stéphane Canu and Paul Honeine and Su Ruan",\n   title =  "Apprentissage de dictionnaire faiblement cohérent par programmation quadratique mixte",\n   booktitle =  "Actes du 27-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Lille, France",\n   year  =  "2019",\n   month =  "26 - 29~" # aug,\n   keywords  =  "machine learning, sparsity",\n   acronym =  "GRETSI'19",\n   url_paper  =  "http://honeine.fr/paul/publi/19.gretsi.dictionnaire.pdf",\n   abstract = "L’apprentissage de dictionnaire a permis des avancées considérables en représentations parcimonieuses. Il a été investi avec succès dans un large spectre d’applications en traitement du signal et des images, ainsi qu’en vision et reconnaissance des formes. Plusieurs études théoriques ont montré la pertinence de construire un dictionnaire à faible cohérence, c’est à dire faible corrélation entre ses éléments. Le problème d’optimisation associé est non convexe et non différentiable. Les méthodes qui s’y attaquent reposent sur la relaxation du problème, par exemple en ajoutant une étape de décorrélation à chaque itération. Dans cet article, nous proposons une méthode qui résout le problème avec les contraintes explicites. Pour cela, le sous-problème de codage parcimonieux est traité selon deux stratégies, par algorithme proximal ou par programme quadratique mixte en nombres entiers (MIQP). L’estimation du dictionnaire sous contraintes est abordée en combinant la méthode du lagrangien augmenté (ADMM) et la méthode Extended Proximal Alternating Linearized Minimization (EPALM), adaptée à des familles de problèmes non convexes. L’efficacité de la méthode MIQP+EPALM est démontrée en reconstruction d’image.",\n}\n\n\n\n\n\n
\n
\n\n\n
\n Dictionary learning has enabled considerable advances in sparse representations. It has been successfully applied to a wide range of applications in signal and image processing, as well as in vision and pattern recognition. Several theoretical studies have shown the relevance of building a dictionary with low coherence, that is, low correlation between its elements. The associated optimization problem is non-convex and non-differentiable. Existing methods rely on relaxations of the problem, for example by adding a decorrelation step at each iteration. In this paper, we propose a method that solves the problem with explicit constraints. To this end, the sparse coding subproblem is handled with two strategies: a proximal algorithm or a mixed-integer quadratic program (MIQP). The constrained dictionary update is addressed by combining the augmented Lagrangian method (ADMM) with the Extended Proximal Alternating Linearized Minimization (EPALM) method, suited to families of non-convex problems. The efficiency of the MIQP+EPALM method is demonstrated on image reconstruction.\n
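For contrast with the exact MIQP formulation of the L0-constrained sparse coding subproblem described in this abstract, the classical greedy baseline is orthogonal matching pursuit, which only approximates the solution. A minimal sketch of that baseline (not the paper's method):

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit for min ||y - D x||_2 s.t. ||x||_0 <= k.
    OMP only approximates the L0 problem; the exact route discussed in the
    abstract solves it as a mixed-integer quadratic program instead."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef          # re-fit on the support
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x
```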
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Hidden Markov Model for Indoor Trajectory Tracking of Elderly People.\n \n \n \n \n\n\n \n AlShamaa, D.; Chkeir, A.; Chehade, F.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 14th IEEE Sensors Applications Symposium (SAS), Sophia Antipolis, France, 11 - 13 March 2019. \n \n\n\n\n
\n\n\n\n \n \n paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{19.sas,\n   author =  "Daniel AlShamaa and Aly Chkeir and Farah Chehade and Paul Honeine",\n   title =  "A Hidden Markov Model for Indoor Trajectory Tracking of Elderly People",\n   booktitle =  "Proc. 14th IEEE Sensors Applications Symposium (SAS)",\n   year  =  "2019",\n   month =  "11 - 13~" # mar,\n   address = "Sophia Antipolis, France",\n   acronym =  "SAS",\n   url_paper  =  "http://honeine.fr/paul/publi/19.sas.pdf",\n   keywords = "elderly people, hidden Markov models, mobility, tracking, WiFi, RSSI",\n   abstract="Tracking elderly people is indispensable for assisting them as quickly as possible. In this paper, we propose a new trajectory tracking technique to localize elderly people in real time in indoor environments. A mobility model is constructed, based on hidden Markov models, to estimate the trajectory followed by each person. However, mobility models cannot be used as standalone tracking techniques due to the accumulation of error over time. For that reason, the proposed mobility model is combined with measurements from the network. Here, we use the power of the WiFi signals received from surrounding Access Points installed in the building. The combination of the mobility model and the measurements results in the tracking of elderly people. Real experiments are carried out to evaluate the performance of the proposed approach.",\n}%    isbn="978-3-319-99383-6",\n%  pages = "10--13",\n%   publisher = "Springer International Publishing",\n%    editor="Destercke, S{\\'e}bastien and Denoeux, Thierry and Cuzzolin, Fabio and Martin, Arnaud",   \n% address =  "Compiègne, France",\n%   url_link= "https://link.springer.com/chapter/10.1007%2F978-3-319-99383-6_2",\n\n\n
\n
\n\n\n
\n Tracking elderly people is indispensable for assisting them as quickly as possible. In this paper, we propose a new trajectory tracking technique to localize elderly people in real time in indoor environments. A mobility model is constructed, based on hidden Markov models, to estimate the trajectory followed by each person. However, mobility models cannot be used as standalone tracking techniques due to the accumulation of error over time. For that reason, the proposed mobility model is combined with measurements from the network. Here, we use the power of the WiFi signals received from surrounding Access Points installed in the building. The combination of the mobility model and the measurements results in the tracking of elderly people. Real experiments are carried out to evaluate the performance of the proposed approach.\n
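Combining a zone-level mobility model (transition probabilities) with WiFi observations, as this abstract describes, is typically decoded with the Viterbi algorithm over the hidden Markov model. A minimal log-domain sketch of a generic HMM decoder (an illustrative implementation, not the paper's exact one):

```python
import numpy as np

def viterbi(log_trans, log_obs, log_init):
    """Most likely zone sequence under an HMM.
    log_trans: (Z, Z) log transition probabilities (mobility model),
    log_obs:   (T, Z) per-step log observation likelihoods (WiFi RSSI model),
    log_init:  (Z,)   log initial zone probabilities."""
    T, Z = log_obs.shape
    score = log_init + log_obs[0]
    back = np.zeros((T, Z), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans              # score of (from, to) pairs
        back[t] = np.argmax(cand, axis=0)              # best predecessor per zone
        score = cand[back[t], np.arange(Z)] + log_obs[t]
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):                      # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```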
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2018\n \n \n (15)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Correntropy-based Robust Multilayer Extreme Learning Machines.\n \n \n \n \n\n\n \n Liangjun, C.; Honeine, P.; Hua, Q.; Jihong, Z.; and Xia, S.\n\n\n \n\n\n\n Pattern Recognition, 84: 357 - 370. December 2018.\n \n\n\n\n
\n\n\n\n \n \n \"Correntropy-based link\n  \n \n \n \"Correntropy-based paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{18.fcmelm,\n   author =  "Chen Liangjun and Paul Honeine and Qu Hua and Zhao Jihong and Sun Xia",\n   title =  {Correntropy-based Robust Multilayer Extreme Learning Machines},\n   journal =  "Pattern Recognition",\n   year  =  "2018",\n   volume =  "84",\n   pages =  "357 - 370",\n   month =  dec,\n   doi = "10.1016/j.patcog.2018.07.011",\n   url_link= "https://www.sciencedirect.com/science/article/pii/S0031320318302401",\n   url_paper   =  "http://honeine.fr/paul/publi/18.fcmelm.pdf",\n   keywords =  "machine learning, sparsity, deep neural networks, deep learning, extreme learning machine, correntropy, unsupervised feature learning, computer aided cancer diagnosis",\n   abstract = "In extreme learning machines (ELM), the hidden node parameters are randomly generated and the output weights can be computed analytically. To overcome the poor feature extraction ability of the shallow ELM architecture, the hierarchical ELM has been extensively studied as a deep architecture with a multilayer neural network. However, the commonly used mean square error (MSE) criterion is very sensitive to outliers and impulsive noises, which generally exist in real-world data. In this paper, we investigate the correntropy to improve the robustness of the multilayer ELM and provide a sparser representation. The correntropy, as a nonlinear measure of similarity, is robust to outliers and can approximate different norms (from ℓ0 to ℓ2). A new full correntropy based multilayer extreme learning machine (FC-MELM) algorithm is proposed to handle the classification of datasets corrupted by impulsive noises or outliers. The contributions of this paper are threefold: (1) The MSE-based reconstruction loss is replaced by a correntropy-based loss function; in this way, the robustness of ELM-based multilayer algorithms is enhanced. (2) The traditional ℓ1-based sparsity penalty term is also replaced by a correntropy-based sparsity penalty term, which can further improve the performance of the proposed algorithm with a sparser representation of the data. The combination of (1) and (2) provides the correntropy-based ELM autoencoder. (3) The FC-MELM is proposed by using the correntropy-based ELM autoencoder as a building block. Notably, the FC-MELM is trained in a forward manner, meaning that no fine-tuning procedure is required. Thus, the FC-MELM has a great advantage in learning efficiency compared with traditional deep learning algorithms. The good properties of the proposed algorithm are confirmed by experiments on well-known benchmark datasets, including the MNIST dataset, the NYU Object Recognition Benchmark dataset, and the Moore network traffic dataset. Finally, the proposed FC-MELM algorithm is applied to computer-aided cancer diagnosis. Experiments conducted on the well-known Wisconsin Breast Cancer (Diagnostic) dataset show that the proposed FC-MELM outperforms state-of-the-art methods in solving computer-aided cancer diagnosis problems.",\n}\n\n
\n
\n\n\n
\n In extreme learning machines (ELM), the hidden node parameters are randomly generated and the output weights can be computed analytically. To overcome the poor feature extraction ability of the shallow ELM architecture, the hierarchical ELM has been extensively studied as a deep architecture with a multilayer neural network. However, the commonly used mean square error (MSE) criterion is very sensitive to outliers and impulsive noises, which generally exist in real-world data. In this paper, we investigate the correntropy to improve the robustness of the multilayer ELM and provide a sparser representation. The correntropy, as a nonlinear measure of similarity, is robust to outliers and can approximate different norms (from ℓ0 to ℓ2). A new full correntropy based multilayer extreme learning machine (FC-MELM) algorithm is proposed to handle the classification of datasets corrupted by impulsive noises or outliers. The contributions of this paper are threefold: (1) The MSE-based reconstruction loss is replaced by a correntropy-based loss function; in this way, the robustness of ELM-based multilayer algorithms is enhanced. (2) The traditional ℓ1-based sparsity penalty term is also replaced by a correntropy-based sparsity penalty term, which can further improve the performance of the proposed algorithm with a sparser representation of the data. The combination of (1) and (2) provides the correntropy-based ELM autoencoder. (3) The FC-MELM is proposed by using the correntropy-based ELM autoencoder as a building block. Notably, the FC-MELM is trained in a forward manner, meaning that no fine-tuning procedure is required. Thus, the FC-MELM has a great advantage in learning efficiency compared with traditional deep learning algorithms. The good properties of the proposed algorithm are confirmed by experiments on well-known benchmark datasets, including the MNIST dataset, the NYU Object Recognition Benchmark dataset, and the Moore network traffic dataset. Finally, the proposed FC-MELM algorithm is applied to computer-aided cancer diagnosis. Experiments conducted on the well-known Wisconsin Breast Cancer (Diagnostic) dataset show that the proposed FC-MELM outperforms state-of-the-art methods in solving computer-aided cancer diagnosis problems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Hierarchical Classification Method Using Belief Functions.\n \n \n \n \n\n\n \n AlShamaa, D.; Chehade, F.; and Honeine, P.\n\n\n \n\n\n\n Signal Processing, 148: 68 - 77. July 2018.\n \n\n\n\n
\n\n\n\n \n \n \"A link\n  \n \n \n \"A paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{18.hierarchical,\n   author =  "Daniel AlShamaa and Farah Chehade and Paul Honeine",\n   title =  {A Hierarchical Classification Method Using Belief Functions},\n   journal =  "Signal Processing",\n   year  =  "2018",\n   volume =  "148",\n   pages =  "68 - 77",\n   month =  jul,\n   doi = "10.1016/j.sigpro.2018.02.021",\n   url_link= "http://www.sciencedirect.com/science/article/pii/S0165168418300768",\n   url_paper  =  "http://honeine.fr/paul/publi/18.hierarchical.pdf",\n   keywords =  "machine learning, wireless sensor networks, belief functions, decision making, error rate, hierarchical clustering, multi-class classification",\n   abstract = "Classification is one of the most important tasks carried out by intelligent systems. Recent works have proposed deep learning to solve the classification problem. While such techniques achieve very good performance and reduce the complexity of feature engineering, they require a large amount of data and are extremely computationally expensive to train. This paper presents a new supervised confidence-based classification method for multi-class problems. The method is a hierarchical technique using the belief function theory and feature selection. The method predicts, for a new sample input, a confidence level for each class. For this purpose, a hierarchical clustering approach is adopted to create a two-level classification problem. A feature selection technique is then carried out at each level to reduce the complexity of the algorithm and enhance the classification performance. The belief function theory is then used to combine all information and to output decisions, by computing the confidence of the sample being in each class. The proposed method has been tested for indoor localization in a wireless sensor network and for facial image recognition using well-known databases. The obtained results demonstrate the effectiveness of the proposed method and its competitiveness compared with state-of-the-art methods.",\n}\n\n
\n
\n\n\n
\n Classification is one of the most important tasks carried out by intelligent systems. Recent works have proposed deep learning to solve the classification problem. While such techniques achieve very good performance and reduce the complexity of feature engineering, they require a large amount of data and are extremely computationally expensive to train. This paper presents a new supervised confidence-based classification method for multi-class problems. The method is a hierarchical technique using the belief function theory and feature selection. The method predicts, for a new sample input, a confidence level for each class. For this purpose, a hierarchical clustering approach is adopted to create a two-level classification problem. A feature selection technique is then carried out at each level to reduce the complexity of the algorithm and enhance the classification performance. The belief function theory is then used to combine all information and to output decisions, by computing the confidence of the sample being in each class. The proposed method has been tested for indoor localization in a wireless sensor network and for facial image recognition using well-known databases. The obtained results demonstrate the effectiveness of the proposed method and its competitiveness compared with state-of-the-art methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n One-Class Classification Framework Based on Shrinkage Methods.\n \n \n \n \n\n\n \n Nader, P.; Honeine, P.; and Beauseroy, P.\n\n\n \n\n\n\n Journal of Signal Processing Systems, 90(3): 341 - 356. March 2018.\n \n\n\n\n
\n\n\n\n \n \n \"One-Class link\n  \n \n \n \"One-Class paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{18.oneclass_shrinkage,\n   author =  "Patric Nader and Paul Honeine and Pierre Beauseroy",\n   title =  "One-Class Classification Framework Based on Shrinkage Methods",\n   journal =  "Journal of Signal Processing Systems",\n   year  =  "2018",\n   volume="90",\n   number="3",\n   pages="341 - 356",\n   month =  mar,\n   doi = "10.1007/s11265-017-1240-z",\n   url_link = "https://link.springer.com/article/10.1007/s11265-017-1240-z",\n   url_paper  =  "http://honeine.fr/paul/publi/18.oneclass_shrinkage.pdf",\n   keywords =  "machine learning, one-class, sparsity, cybersecurity, kernel methods, one-class classification, shrinkage methods, sparse approximation, truncated Mahalanobis distance",\n   abstract = "Statistical machine learning methods, such as kernel methods, have been widely used to discover hidden regularities and patterns in data. In particular, one-class classification algorithms have gained a lot of interest in a large number of applications where the only available data designate a unique class, as in industrial processes. In this paper, we propose a sparse framework for one-class classification problems, by investigating the hypersphere enclosing the samples in a Reproducing Kernel Hilbert Space (RKHS). The center of this hypersphere is approximated using a sparse solution, by selecting an appropriate set of relevant samples. For this purpose, we investigate well-known shrinkage methods, namely Least Angle Regression, the Least Absolute Shrinkage and Selection Operator, and the Elastic Net. We revisit these methods and adapt their algorithms for estimating the sparse center in the RKHS. The proposed framework is extended to include the truncated Mahalanobis distance, which is necessary when dealing with heterogeneous input variables. We also provide some theoretical results on the projection error and on the error of the first kind. The proposed algorithms are compared with well-known one-class classification approaches, with experiments conducted on simulated and real datasets.",\n}\n\n
\n
\n\n\n
\n Statistical machine learning methods, such as kernel methods, have been widely used to discover hidden regularities and patterns in data. In particular, one-class classification algorithms have gained a lot of interest in a large number of applications where the only available data designate a unique class, as in industrial processes. In this paper, we propose a sparse framework for one-class classification problems, by investigating the hypersphere enclosing the samples in a Reproducing Kernel Hilbert Space (RKHS). The center of this hypersphere is approximated using a sparse solution, by selecting an appropriate set of relevant samples. For this purpose, we investigate well-known shrinkage methods, namely Least Angle Regression, the Least Absolute Shrinkage and Selection Operator, and the Elastic Net. We revisit these methods and adapt their algorithms for estimating the sparse center in the RKHS. The proposed framework is extended to include the truncated Mahalanobis distance, which is necessary when dealing with heterogeneous input variables. We also provide some theoretical results on the projection error and on the error of the first kind. The proposed algorithms are compared with well-known one-class classification approaches, with experiments conducted on simulated and real datasets.\n
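The hypersphere test in this abstract scores a sample by its RKHS distance to a center expressed as a weighted combination of training samples, computable entirely through the kernel. A minimal sketch with a Gaussian kernel and a uniform (non-sparse) weight vector; the shrinkage methods named in the abstract would instead select a sparse weight vector (not implemented here), and the gamma value is illustrative:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row-sample arrays X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def oneclass_score(x, support, alpha, gamma=1.0):
    """Squared RKHS distance of phi(x) to the center c = sum_i alpha_i phi(x_i):
    k(x,x) - 2 sum_i alpha_i k(x,x_i) + alpha^T K alpha. Larger = more anomalous."""
    x = np.atleast_2d(x)
    kxx = rbf(x, x, gamma)[0, 0]
    kxs = rbf(x, support, gamma)[0]
    kss = rbf(support, support, gamma)
    return kxx - 2.0 * kxs @ alpha + alpha @ kss @ alpha
```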
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Tracking of Mobile Sensors Using Belief Functions in Indoor Wireless Networks.\n \n \n \n \n\n\n \n AlShamaa, D.; Chehade, F.; and Honeine, P.\n\n\n \n\n\n\n IEEE Sensors Journal, 18(1): 310-319. January 2018.\n \n\n\n\n
\n\n\n\n \n \n \"Tracking link\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{18.tracking,\n   author =  "Daniel AlShamaa and Farah Chehade and Paul Honeine",\n   title =  {Tracking of Mobile Sensors Using Belief Functions in Indoor Wireless Networks},\n   journal =  "IEEE Sensors Journal",\n   year  =  "2018",\n   volume =  "18",\n   number =  "1",\n   pages =  "310-319",\n   month =  jan,\n   url_link =  "http://ieeexplore.ieee.org/document/8085100/",\n   doi  = "10.1109/JSEN.2017.2766630",\n   keywords =  "machine learning, wireless sensor networks, belief networks, indoor radio, mobile radio, sensor placement, target tracking, wireless sensor networks, mobile sensor tracking, mobile sensor localization, indoor wireless sensor network, indoor localization scheme, zoning-based tracking technique, indoor environment, belief function framework, Wi-Fi signal strength, hierarchical clustering, access point selection, Sensors, Mobile communication, Wireless fidelity, Indoor environments, Databases, Wireless sensor networks, Target tracking, Access point selection, belief functions, hierarchical clustering, mobility, tracking, WiFi signals",\n   abstract = "Localization of mobile sensors is an important research issue in wireless sensor networks. Most indoor localization schemes focus on determining the exact position of these sensors. This paper presents a zoning-based tracking technique that works efficiently in indoor environments. The targeted area is composed of several zones, the objective being to determine the zone of the mobile sensor in a real-time tracking process. The proposed method creates a belief functions framework that combines evidence from the sensors' mobility and observations. To do this, a mobility model is proposed using the previous state of the sensor and its assumed maximum speed. Also, an observation model is constructed based on fingerprints collected as Wi-Fi signal strengths received from surrounding access points. This model can be extended via hierarchical clustering and access point selection. Real experiments demonstrate the effectiveness of this approach and its competitiveness compared with state-of-the-art methods.",\n}\n\n
K-SVD with a real L0 optimization: application to image denoising. Liu, Y.; Canu, S.; Honeine, P.; and Ruan, S. In Proc. 28th IEEE workshop on Machine Learning for Signal Processing (MLSP), pages 1-6, Aalborg, Denmark, 17-20 September 2018.
@INPROCEEDINGS{18.mlsp.miqp,
  author = "Yuan Liu and Stéphane Canu and Paul Honeine and Su Ruan",
  title = "{K-SVD} with a real {L0} optimization: application to image denoising",
  booktitle = "Proc. 28th IEEE workshop on Machine Learning for Signal Processing (MLSP)",
  address = "Aalborg, Denmark",
  year = "2018",
  month = "17 - 20~" # sep,
  pages = "1 - 6",
  acronym = "MLSP",
  doi = {10.1109/MLSP.2018.8517064},
  url_link = "https://ieeexplore.ieee.org/document/8517064",
  url_paper = "http://honeine.fr/paul/publi/18.mlsp.miqp.pdf",
  keywords = "machine learning, sparsity, image coding, image denoising, image representation, integer programming, quadratic programming, singular value decomposition, sparse coding problem, global optimal solution, optimization problem, Mixed-Integer Quadratic Program, dictionary learning, sparse representations, MIQP approach, off-the-shelf solver, real L0 optimization, K-SVD, Optimization, Dictionaries, Matching pursuit algorithms, sparse coding",
  abstract = {This paper deals with sparse coding for dictionary learning in sparse representations. Because sparse coding involves an L0-norm, most, if not all, existing solutions only provide an approximate solution. Instead, in this paper, a real L0 optimization is considered for the sparse coding problem providing a global optimal solution. The proposed method reformulates the optimization problem as a Mixed-Integer Quadratic Program (MIQP), allowing then to obtain the global optimal solution by using an off-the-shelf solver. Because computing time is the main disadvantage of this approach, two techniques are proposed to improve its computational speed. One is to add suitable constraints and the other to use an appropriate initialization. The results obtained on an image denoising task demonstrate the feasibility of the MIQP approach for processing well-known benchmark images while achieving good performance compared with the most advanced methods.},
}
% url_code = "http://www.honeine.fr/paul/publi/18.mlsp.miqp.zip",
The Belief Functions Theory for Sensors Localization in Indoor Wireless Networks. AlShamaa, D.; Chehade, F.; and Honeine, P. In Destercke, S.; Denoeux, T.; Cuzzolin, F.; and Martin, A., editors, Proc. 5th International Conference on Belief Functions (BELIEF 2018): Belief Functions: Theory and Applications, pages 10-13, Compiègne, France, 17-21 September 2018. Springer International Publishing.
@INPROCEEDINGS{18.belief,
  author = "Daniel AlShamaa and Farah Chehade and Paul Honeine",
  title = "The Belief Functions Theory for Sensors Localization in Indoor Wireless Networks",
  editor = "Destercke, S{\'e}bastien and Denoeux, Thierry and Cuzzolin, Fabio and Martin, Arnaud",
  booktitle = "Proc. 5th International Conference on Belief Functions (BELIEF 2018): Belief Functions: Theory and Applications",
  year = "2018",
  month = "17 - 21~" # sep,
  publisher = "Springer International Publishing",
  address = "Compiègne, France",
  pages = "10--13",
  keywords = "machine learning, wireless sensor networks",
  acronym = "BELIEF",
  isbn = "978-3-319-99383-6",
  url_paper = "http://honeine.fr/paul/publi/18.belief.pdf",
  abstract = "This paper investigates the usage of the belief functions theory to localize sensors in indoor environments. The problem is tackled as a zoning localization where the objective is to determine the zone where the mobile sensor resides at any instant. The proposed approach uses the belief functions theory to define an evidence framework, for estimating the most probable sensor's zone. Real experiments demonstrate the effectiveness of this approach as compared to other localization methods.",
}
% url_link = "https://link.springer.com/chapter/10.1007%2F978-3-319-99383-6_2",
Fast and Accurate Gaussian Pyramid Construction by Extended Box Filtering. Konlambigue, S.; Pothin, J.; Honeine, P.; and Bensrhair, A. In Proc. 26th European Conference on Signal Processing (EUSIPCO), pages 400-404, Rome, Italy, 3-7 September 2018.
@INPROCEEDINGS{18.eusipco.vision,
  author = "Silvère Konlambigue and Jean-Baptiste Pothin and Paul Honeine and Abdelaziz Bensrhair",
  title = "Fast and Accurate Gaussian Pyramid Construction by Extended Box Filtering",
  booktitle = "Proc. 26th European Conference on Signal Processing (EUSIPCO)",
  address = "Rome, Italy",
  year = "2018",
  month = "3 - 7~" # sep,
  pages = "400-404",
  acronym = "EUSIPCO",
  url_paper = "http://honeine.fr/paul/publi/18.eusipco.vision.pdf",
  url_link = "https://ieeexplore.ieee.org/document/8553321",
  doi = "10.23919/EUSIPCO.2018.8553321",
  keywords = "machine learning, computer vision, Convolution, Kernel, Two dimensional displays, Signal processing algorithms, Europe, Gaussian pyramid, extended box filters, SIFT",
  abstract = "Gaussian Pyramid (GP) is one of the most important representations in computer vision. However, the computation of GP is still challenging for real-time applications. In this paper, we propose a novel approach by investigating the extended box filters for an efficient Gaussian approximation. Taking advantages of the cascade configuration, tiny kernels and memory cache, we develop a fast and suitable algorithm for embedded systems, typically smartphones. Experiments with Android NDK show a 5x speed up compared to an optimized CPU-version of the Gaussian smoothing.",
}
Decentralized Sensor Localization by Decision Fusion of RSSI and Mobility in Indoor Environments. AlShamaa, D.; Chehade, F.; and Honeine, P. In Proc. 26th European Conference on Signal Processing (EUSIPCO), pages 2300-2304, Rome, Italy, 3-7 September 2018.
@INPROCEEDINGS{18.eusipco.loc,
  author = "Daniel AlShamaa and Farah Chehade and Paul Honeine",
  title = "Decentralized Sensor Localization by Decision Fusion of RSSI and Mobility in Indoor Environments",
  booktitle = "Proc. 26th European Conference on Signal Processing (EUSIPCO)",
  address = "Rome, Italy",
  year = "2018",
  month = "3 - 7~" # sep,
  pages = "2300-2304",
  acronym = "EUSIPCO",
  url_link = "https://ieeexplore.ieee.org/document/8553020",
  url_paper = "http://honeine.fr/paul/publi/18.eusipco.loc.pdf",
  keywords = "machine learning, wireless sensor networks, Calculators, Topology, Network topology, Signal processing algorithms, Wireless fidelity, Signal processing, Decentralized architecture, decision fusion, localization, mobility, RSSI fingerprints",
  abstract = {Localization of sensors has become an essential issue in wireless networks. This paper presents a decentralized approach to localize sensors in indoor environments. The targeted area is partitioned into several sectors, each of which having a local calculator capable of emitting, receiving, and processing data. Each calculator runs a local localization algorithm, by investigating the belief functions theory for decision fusion of radio fingerprints, to estimate the sensors zones. The fusion of all calculators estimates is combined with a mobility model to yield a final zone decision. The decentralized algorithm is described and evaluated against the state-of-the-art. Experimental results show the effectiveness of the proposed method in terms of localization accuracy, processing time, and robustness.},
  doi = {10.23919/EUSIPCO.2018.8553020},
  ISSN = {2076-1465},
}
Supervised Classification of Social Spammers using a Similarity-based Markov Random Field Approach. El-Mawass, N.; Honeine, P.; and Vercouter, L. In Proc. 5th Multidisciplinary International Social Networks Conference (MISNC '18), pages 14:1-14:8, New York, NY, USA, 16-18 July 2018. ACM.
@INPROCEEDINGS{18.misnc,
  author = "Nour El-Mawass and Paul Honeine and Laurent Vercouter",
  title = "Supervised Classification of Social Spammers using a Similarity-based Markov Random Field Approach",
  booktitle = "Proc. 5th Multidisciplinary International Social Networks Conference",
  series = {MISNC '18},
  year = {2018},
  isbn = {978-1-4503-6465-2},
  location = {Saint-Etienne, France},
  articleno = {14},
  numpages = {8},
  url_link = {http://doi.acm.org/10.1145/3227696.3227712},
  doi = {10.1145/3227696.3227712},
  acmid = {3227712},
  publisher = {ACM},
  address = {New York, NY, USA},
  month = "16 - 18~" # jul,
  pages = {14:1 - 14:8},
  url_paper = "http://honeine.fr/paul/publi/18.misnc.pdf",
  keywords = {Cybersecurity, Markov Random Field, Online Social Networks, Social Spam detection, Supervised Learning, Twitter},
  abstract = "Social spam has been plaguing online social networks for years. Being the sites where online users spend most of their time, the battle to capitalize and monetize users' attention is actively fought by both spammers and legitimate sites operators. Social spam detection systems have been proposed as early as 2010. They commonly exploit users' content and behavioral characteristics to build supervised classifiers. Yet spam is an evolving concept, and developed supervised classifiers often become obsolete with the spam community continuously trying to evade detection. In this paper, we use similarity between users to correct evasion-induced errors in the predictions of spam filters. Specifically, we link similar accounts based on their shared applications and build a Markov Random Field model on top of the resulting similarity graph. We use this graphical model in conjunction with traditional supervised classifiers and test the proposed model on a dataset that we recently collected from Twitter. Results show that the proposed model improves the accuracy of classical classifiers by increasing both the precision and the recall of state-of-the-art systems.",
  acronym = "MISNC",
}
A Weighted Kernel-based Hierarchical Classification Method for Zoning of Sensors in Indoor Wireless Networks. AlShamaa, D.; Chehade, F.; and Honeine, P. In Proc. 19th IEEE International Workshop on Signal Processing Advances in Wireless Communications, Kalamata, Greece, 25-28 June 2018.
@INPROCEEDINGS{18.spawc,
  author = "Daniel AlShamaa and Farah Chehade and Paul Honeine",
  title = "A Weighted Kernel-based Hierarchical Classification Method for Zoning of Sensors in Indoor Wireless Networks",
  booktitle = "Proc. 19th IEEE International Workshop on Signal Processing Advances in Wireless Communications",
  address = "Kalamata, Greece",
  year = "2018",
  month = "25 - 28~" # jun,
  acronym = "SPAWC",
  url_paper = "http://honeine.fr/paul/publi/18.spawc.pdf",
  abstract = {This paper presents a solution for localization of sensors by zoning, in indoor wireless networks. The problem is tackled by a classification technique, where the objective is to classify the zone of the mobile sensor for any observation. The method is hierarchical and uses the belief functions theory to assign confidence levels for zones. For this purpose, kernel density estimation is used first to model the features observations. The algorithm then uses hierarchical clustering and similarity divergence, creating a two-level hierarchy, to reduce the number of zones to be classified at a time. At each level of the hierarchy, a feature selection technique is carried to optimize the misclassification rate and feature redundancy. Experiments are realized in a wireless sensor network to evaluate the performance of the proposed method.},
  keywords = {Kernel, Wireless sensor networks, Sensors, Wireless communication, Estimation, Feature extraction, Clustering algorithms, Belief functions, classification, feature selection, hierarchical clustering, kernel density estimation, zoning, machine learning},
  doi = {10.1109/SPAWC.2018.8445918},
  ISSN = {1948-3252},
}
Neighbor Retrieval Visualizer for Monitoring Lifting Cranes. Honeine, P.; Mouzoun, S.; and Eltabach, M. In Rincon, A. F. D.; Rueda, F. V.; Chaari, F.; Zimroz, R.; and Haddar, M., editors, Advances in Condition Monitoring of Machinery in Non-Stationary Operations: Proc. 6th International Conference on Condition Monitoring of Machinery in Non-stationary Operations, Applied Condition Monitoring series, Santander, Spain, 20-22 June 2018. Springer International Publishing. Nominated for the prize for best paper (Condition Monitoring Non-Stationary Operations).
@INPROCEEDINGS{18.cmmno,
  author = "Paul Honeine and Samira Mouzoun and Mario Eltabach",
  title = "Neighbor Retrieval Visualizer for Monitoring Lifting Cranes",
  booktitle = "Advances in Condition Monitoring of Machinery in Non-Stationary Operations: Proc. 6th International Conference on Condition Monitoring of Machinery in Non-stationary Operations",
  address = "Santander, Spain",
  note = "- Nominated for the prize for best paper (Condition Monitoring Non-Stationary Operations) -",
  editor = "Alfonso Fernandez Del Rincon and Fernando Viadero Rueda and Fakher Chaari and Radoslaw Zimroz and Mohamed Haddar",
  year = "2018",
  month = "20 - 22~" # jun,
  keywords = "machine learning, one-class",
  acronym = "CMMNO",
  url_link = "https://www.barnesandnoble.com/w/advances-in-condition-monitoring-of-machinery-in-non-stationary-operations-alfonso-fernandez-del-rincon/1129971582",
  url_paper = "http://honeine.fr/paul/publi/18.cmmno.pdf",
  doi = {10.1007/978-3-030-11220-2_3},
  publisher = "Springer International Publishing",
  series = "Applied Condition Monitoring",
  abstract = "Gear wear is hard to monitor in lifting cranes due to the difficulties to provide appropriate models of such complex systems with varying functioning modes. Statistical machine learning offers an elegant framework to circumvent these difficulties. This work explores recent advances in statistical machine learning to provide a data-driven model-free approach to monitor lifting cranes, by investigating a large number of indicators extracted from vibration signals. The principal contributions of this paper are twofold. Firstly, it explores the recently introduced Neighbor Retrieval Visualizer (NeRV) method for nonlinear information retrieval. The extracted information allows to construct a low-dimensional representation space that faithfully depicts the evolution of the system. Secondly, it proposes a simple and efficient detection method to detect abnormal evolution and abrupt changes of the system at hand, using the distance measure with neighborhood retrieval in the same spirit as NeRV. The relevance of the proposed methods, for visualizing the evolution and detecting abnormality, is demonstrated with experiments conducted on real data acquired on a lifting crane benchmark operating for almost two years with more than fifty indicators extracted from vibration signals.",
}
Mobility-based Tracking Using WiFi RSS in Indoor Wireless Sensor Networks. AlShamaa, D.; Chehade, F.; and Honeine, P. In Proc. 9th IFIP International Conference on New Technologies, Mobility and Security, Paris, France, 26-28 February 2018.
@INPROCEEDINGS{18.ntms.mobility,
  author = "Daniel AlShamaa and Farah Chehade and Paul Honeine",
  title = "Mobility-based Tracking Using {WiFi} {RSS} in Indoor Wireless Sensor Networks",
  booktitle = "Proc. 9th IFIP International Conference on New Technologies, Mobility and Security",
  address = "Paris, France",
  year = "2018",
  month = "26 - 28~" # feb,
  acronym = "NTMS",
  url_paper = "http://honeine.fr/paul/publi/18.ntms.mobility.pdf",
  abstract = {Tracking of mobile sensors is an important research issue in wireless sensor networks. This paper presents a zoning-based tracking technique that works efficiently in indoor environments. The targeted area is composed of several zones, the objective being to determine the zone of the mobile sensor in a real-time tracking process. The proposed method creates a belief functions framework that combines evidence using the sensors mobility and observations. To do this, a mobility model is proposed by using the previous state of the sensor and its assumed maximum speed. Also, an observation model is constructed based on fingerprints collected as WiFi signals strengths received from surrounding Access Points. Real experiments demonstrate the effectiveness of this approach and its competence compared to state-of-the-art methods.},
  keywords = {indoor radio, mobility management (mobile radio), RSSI, target tracking, wireless LAN, wireless sensor networks, indoor environments, mobile sensor, real-time tracking process, mobility model, WiFi signals strengths, WiFi RSS, zoning-based tracking technique, access points, mobility-based tracking, Sensors, Wireless fidelity, Data models, Real-time systems, Computational modeling, Belief functions, evidence fusion, mobility, tracking, WiFi signals},
  doi = {10.1109/NTMS.2018.8328704},
  ISSN = {2157-4960},
}
% Mobility & Wireless Networks Track paper
Localization of Sensors in Indoor Wireless Networks: An Observation Model Using WiFi RSS. AlShamaa, D.; Chehade, F.; and Honeine, P. In Proc. 9th IFIP International Conference on New Technologies, Mobility and Security - Workshop on Wireless Sensor Networks: Architectures, Deployments, and Trends, Paris, France, 26-28 February 2018.
@INPROCEEDINGS{18.ntms.localization,
  author = "Daniel AlShamaa and Farah Chehade and Paul Honeine",
  title = "Localization of Sensors in Indoor Wireless Networks: An Observation Model Using {WiFi} {RSS}",
  booktitle = "Proc. 9th IFIP International Conference on New Technologies, Mobility and Security - Workshop on Wireless Sensor Networks: Architectures, Deployments, and Trends",
  address = "Paris, France",
  year = "2018",
  month = "26 - 28~" # feb,
  acronym = "NTMS",
  url_paper = "http://honeine.fr/paul/publi/18.ntms.localization.pdf",
  abstract = {Indoor localization has become an important issue for wireless sensor networks. This paper presents a zoning-based localization technique that works efficiently in indoor environments. The targeted area is composed of several zones, the objective being to determine the zone of the sensor using an observation model. The observation model is constructed based on fingerprints collected as WiFi signals strengths received from surrounding Access Points. The method creates a belief functions framework that uses all available information to assign evidence to each zone. A hierarchical clustering technique is then applied to create a two-level hierarchy composed of clusters and of original zones in each cluster. At each level of the hierarchy, an Access Point selection approach is proposed to choose the best subset of Access Points in terms of discriminative capacity and redundancy. Real experiments demonstrate the effectiveness of this approach and its competence compared to state-of-the-art methods.},
  keywords = {indoor radio, pattern clustering, radionavigation, RSSI, wireless LAN, wireless sensor networks, indoor localization, indoor environments, observation model, WiFi signals strengths, Access Points, hierarchical clustering technique, Access Point selection approach, indoor wireless networks, WiFi RSS, zoning-based localization technique, belief functions framework, two-level hierarchy, Wireless fidelity, Sensors, Redundancy, Wireless communication, Databases, Statistical distributions, Access point selection, belief functions, hierarchical clustering, localization, WiFi signals},
  doi = {10.1109/NTMS.2018.8328699},
  ISSN = {2157-4960},
}
\n \n\n \n \n \n \n \n Graph Kernels based on Linear Patterns: Theoretical and Experimental Comparisons.\n \n \n \n\n\n \n Jia, L.; Gaüzère, B.; and Honeine, P.\n\n\n \n\n\n\n In Poster presented at the Machine Learning Summer School, Universidad Autonoma de Madrid, Madrid, Spain, 27 August–7 September 2018. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{18.mlss,\n   author =  "Linlin Jia and Benoît Gaüzère and Paul Honeine",\n   title =  "Graph Kernels based on Linear Patterns: Theoretical and Experimental Comparisons",\n   booktitle =  "Poster presented at the Machine Learning Summer School, Universidad Autonoma de Madrid",\n   address =  "Madrid, Spain",\n   year =  "2018",\n   month =  "27~" # aug # "--" # "7~" # sep,\n   keywords =  "machine learning, graphs, graph data",\n   acronym =  "MLSS",\n   abstract = "Graph kernel is a powerful tool to bridge the gap between machine learning and data encoded as graphs. Most graph kernels are based on a decomposition of graphs into a set of patterns. The similarity between graphs is then deduced from the similarity of corresponding patterns. Among different possible sets of patterns, linear patterns based kernels often constitute a good trade off between time consumption and accuracy performance. In this work, we propose a thorough study and comparison of the existing graph kernels based on different linear patterns, namely walks and paths. This work leads to a clear comparison of pros and cons of different proposed kernels. First, all graph kernels are studied in detail, including their mathematical foundation, structures of patterns and time complexity. Relationships among these kernels are studied with respect to their development history and mathematical representations. Then, experiments are performed on various datasets exhibiting different kinds of graphs, including labeled and unlabeled graphs, graphs with different numbers of nodes, graphs with different average degrees, cyclic and acyclic graphs, planar and non-planar graphs. Finally, performance and time complexity of kernels are compared and analyzed on these graphs, and suggestions are proposed to choose kernels according to the type of graph data. 
An open-source Python library containing an implementation of all discussed kernels is publicly available on GitHub, so as to promote and facilitate the use of graph kernels in machine learning problems.",\n}\n\n\n\n
\n
\n\n\n
\n Graph kernels are a powerful tool to bridge the gap between machine learning and data encoded as graphs. Most graph kernels are based on a decomposition of graphs into a set of patterns. The similarity between graphs is then deduced from the similarity of corresponding patterns. Among different possible sets of patterns, linear-pattern-based kernels often constitute a good trade-off between time consumption and accuracy performance. In this work, we propose a thorough study and comparison of the existing graph kernels based on different linear patterns, namely walks and paths. This work leads to a clear comparison of the pros and cons of different proposed kernels. First, all graph kernels are studied in detail, including their mathematical foundation, structures of patterns and time complexity. Relationships among these kernels are studied with respect to their development history and mathematical representations. Then, experiments are performed on various datasets exhibiting different kinds of graphs, including labeled and unlabeled graphs, graphs with different numbers of nodes, graphs with different average degrees, cyclic and acyclic graphs, planar and non-planar graphs. Finally, performance and time complexity of kernels are compared and analyzed on these graphs, and suggestions are proposed to choose kernels according to the type of graph data. An open-source Python library containing an implementation of all discussed kernels is publicly available on GitHub, so as to promote and facilitate the use of graph kernels in machine learning problems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Champ Aléatoire de Markov pour la Détection Supervisée des Comptes Malicieux sur Twitter.\n \n \n \n \n\n\n \n El-Mawass, N.; Honeine, P.; and Vercouter, L.\n\n\n \n\n\n\n In 20-ème Conférence d'Apprentissage automatique (CAp) - 20th annual meeting of the francophone Machine Learning community, Rouen, France, 20 - 22 June 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Champ paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{18.cap,\n   author =  "Nour El-Mawass and Paul Honeine and Laurent Vercouter",\n   title =  "Champ Aléatoire de Markov pour la Détection Supervisée des Comptes Malicieux sur Twitter",\n   booktitle =  "20-ème Conférence d'Apprentissage automatique (CAp) - 20th annual meeting of the francophone Machine Learning community",\n   address =  "Rouen, France",\n   year =  "2018",\n   month =  "20 - 22~" # jun,\n   keywords =  "machine learning, Markov random field, social networks, cybersecurity, graphs, graph data",\n   acronym =  "CAp",\n   url_paper  =  "http://honeine.fr/paul/publi/18.cap.pdf",\n   abstract = "Malicious use of online social networks (OSNs) has a detrimental effect on these platforms' security, usefulness, profitability and information veracity. The evolving nature of the spam phenomenon causes many proposed supervised classifiers to become obsolete. The present work models the spam detection problem as a classification problem where the goal is to assign a label (legitimate vs. malicious) to a given social account on Twitter. We propose to solve the problem by exploiting the similarity between social accounts and performing graphical inference over similar accounts.",\n}\n\n
\n
\n\n\n
\n Malicious use of online social networks (OSNs) has a detrimental effect on these platforms' security, usefulness, profitability and information veracity. The evolving nature of the spam phenomenon causes many proposed supervised classifiers to become obsolete. The present work models the spam detection problem as a classification problem where the goal is to assign a label (legitimate vs. malicious) to a given social account on Twitter. We propose to solve the problem by exploiting the similarity between social accounts and performing graphical inference over similar accounts.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2017\n \n \n (6)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Correntropy Maximization via ADMM - Application to Robust Hyperspectral Unmixing.\n \n \n \n \n\n\n \n Zhu, F.; Halimi, A.; Honeine, P.; Chen, B.; and Zheng, N.\n\n\n \n\n\n\n IEEE Transactions on Geoscience and Remote Sensing, 55(9): 1-12. September 2017.\n \n\n\n\n
\n\n\n\n \n \n \"Correntropy link\n  \n \n \n \"Correntropy paper\n  \n \n \n \"Correntropy code\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{17.Correntropy_ADMM,\n   author =  "Fei Zhu and Abderrahim Halimi and Paul Honeine and Badong Chen and Nanning Zheng",\n   title =  {Correntropy Maximization via ADMM - Application to Robust Hyperspectral Unmixing},\n   journal =  "IEEE Transactions on Geoscience and Remote Sensing",\n   year  =  "2017",\n   volume =  "55",\n   number =  "9",\n   pages =  "1-12",\n   month =  sep,\n   url_link =  "http://ieeexplore.ieee.org/document/7964753/",\n   doi  = "10.1109/TGRS.2017.2696262",\n   url_paper   =  "http://honeine.fr/paul/publi/17.corr_admm.pdf",\n   url_code  =  "http://www.honeine.fr/paul/publi/17.Correntropy_ADMM.rar",\n   keywords =  "machine learning, hyperspectral, geophysical image processing, hyperspectral imaging, learning (artificial intelligence), maximum entropy methods, remote sensing, correntropy maximization, ADMM, alternating direction method of multipliers, hyperspectral images, robust supervised spectral unmixing, fully constrained unmixing, remote sensing, cuprite mining image, hyperspectral imaging, optimization, robustness, kernel, noise measurement, convex functions, alternating direction method of multipliers (ADMM), correntropy, hyperspectral image, maximum correntropy estimation, unmixing problem",\n   abstract = "In hyperspectral images, some spectral bands suffer from low signal-to-noise ratio due to noisy acquisition and atmospheric effects, thus requiring robust techniques for the unmixing problem. This paper presents a robust supervised spectral unmixing approach for hyperspectral images. The robustness is achieved by writing the unmixing problem as the maximization of the correntropy criterion subject to the most commonly used constraints. Two unmixing problems are derived: the first problem considers the fully constrained unmixing, with both the nonnegativity and sum-to-one constraints, while the second one deals with the nonnegativity and the sparsity promoting of the abundances. 
The corresponding optimization problems are solved using an alternating direction method of multipliers (ADMM) approach. Experiments on synthetic and real hyperspectral images validate the performance of the proposed algorithms for different scenarios, demonstrating that the correntropy-based unmixing with ADMM is particularly robust against highly noisy outlier bands.",\n}\n\n
\n
\n\n\n
\n In hyperspectral images, some spectral bands suffer from low signal-to-noise ratio due to noisy acquisition and atmospheric effects, thus requiring robust techniques for the unmixing problem. This paper presents a robust supervised spectral unmixing approach for hyperspectral images. The robustness is achieved by writing the unmixing problem as the maximization of the correntropy criterion subject to the most commonly used constraints. Two unmixing problems are derived: the first problem considers the fully constrained unmixing, with both the nonnegativity and sum-to-one constraints, while the second one deals with the nonnegativity and the sparsity promotion of the abundances. The corresponding optimization problems are solved using an alternating direction method of multipliers (ADMM) approach. Experiments on synthetic and real hyperspectral images validate the performance of the proposed algorithms for different scenarios, demonstrating that the correntropy-based unmixing with ADMM is particularly robust against highly noisy outlier bands.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Maximum Correntropy Unscented Filter.\n \n \n \n \n\n\n \n Liu, X.; Chen, B.; Xu, B.; Wu, Z.; and Honeine, P.\n\n\n \n\n\n\n International Journal of Systems Science, 48(8): 1607-1615. 2017.\n \n\n\n\n
\n\n\n\n \n \n \"Maximum paper\n  \n \n \n \"Maximum link\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{17.corr_unscented,\n   author =  "Xi Liu and Badong Chen and Bin Xu and Zongze Wu and Paul Honeine",\n   title =  "Maximum Correntropy Unscented Filter",\n   journal =  "International Journal of Systems Science",\n   year =  "2017",\n   volume =  "48",\n   number =  "8",\n   pages =  "1607-1615",\n   url_paper   =  "http://honeine.fr/paul/publi/17.corr_unscented.pdf",\n   url_link= "http://dx.doi.org/10.1080/00207721.2016.1277407",\n   doi = "10.1080/00207721.2016.1277407",\n   keywords =  "machine learning, unscented Kalman filter (UKF), unscented transformation (UT), maximum correntropy criterion (MCC)",\n   abstract = "The unscented transformation (UT) is an efficient method to solve the state estimation problem for a non-linear dynamic system, utilising a derivative-free higher-order approximation by approximating a Gaussian distribution rather than approximating a non-linear function. Applying the UT to a Kalman filter type estimator leads to the well-known unscented Kalman filter (UKF). Although the UKF works very well in Gaussian noise, its performance may deteriorate significantly when the noise is non-Gaussian, especially when the system is disturbed by heavy-tailed impulsive noise. To improve the robustness of the UKF against impulsive noise, a new filter for non-linear systems is proposed in this work, namely the maximum correntropy unscented filter (MCUF). In MCUF, the UT is applied to obtain the prior estimates of the state and covariance matrix, and a robust statistical linearisation regression based on the maximum correntropy criterion is then used to obtain the posterior estimates of the state and covariance matrix. The satisfactory performance of the new algorithm is confirmed by two illustrative examples.",\n}\n\n\n
\n
\n\n\n
\n The unscented transformation (UT) is an efficient method to solve the state estimation problem for a non-linear dynamic system, utilising a derivative-free higher-order approximation by approximating a Gaussian distribution rather than approximating a non-linear function. Applying the UT to a Kalman filter type estimator leads to the well-known unscented Kalman filter (UKF). Although the UKF works very well in Gaussian noise, its performance may deteriorate significantly when the noise is non-Gaussian, especially when the system is disturbed by heavy-tailed impulsive noise. To improve the robustness of the UKF against impulsive noise, a new filter for non-linear systems is proposed in this work, namely the maximum correntropy unscented filter (MCUF). In MCUF, the UT is applied to obtain the prior estimates of the state and covariance matrix, and a robust statistical linearisation regression based on the maximum correntropy criterion is then used to obtain the posterior estimates of the state and covariance matrix. The satisfactory performance of the new algorithm is confirmed by two illustrative examples.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Denoising Smooth Signals Using a Bayesian Approach: Application to Altimetry.\n \n \n \n \n\n\n \n Halimi, A.; Buller, G. S.; McLaughlin, S.; and Honeine, P.\n\n\n \n\n\n\n IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 10(4): 1278 - 1289. April 2017.\n \n\n\n\n
\n\n\n\n \n \n \"Denoising link\n  \n \n \n \"Denoising paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{17.altimetry,\n   author =  "Abderrahim Halimi and Gerald S. Buller and Steve McLaughlin and Paul Honeine",\n   title =  "Denoising Smooth Signals Using a Bayesian Approach: Application to Altimetry",\n   journal =  "IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing",\n   year  =  "2017",\n   volume =  "10",\n   number =  "4",\n   pages =  "1278 - 1289",\n   month =  apr,\n   url_link= "http://ieeexplore.ieee.org/document/7820100/",\n   doi="10.1109/JSTARS.2016.2629516", \n   url_paper  =  "http://honeine.fr/paul/publi/17.altimetry.pdf",\n   keywords =  "Bayesian inference, Bayes methods, Gaussian noise, height measurement, Markov processes, parameter estimation, random processes, signal denoising, smoothing methods, statistical distributions, denoising smooth signals, Bayesian approach, Bayesian strategy, smooth signals estimation, smooth evolution, continuous signals, noise Gaussian properties, successive signals, gamma Markov random field, signal energies, noise variances, posterior distribution, fast coordinate descent algorithm, satellite altimetric data, synthetic signal, real signal, state-of-the-art algorithms, denoising quality, computational cost, altimetric parameter quality, parameter estimation, classification strategy, Bayes methods, Logic gates, Noise reduction, Signal processing algorithms, Computational modeling, Correlation, Satellites, Altimetry, Bayesian algorithm, coordinate descent algorithm (CDA), gamma Markov random fields (gamma-MRFs)", \n   abstract = "This paper presents a novel Bayesian strategy for the estimation of smooth signals corrupted by Gaussian noise. The method assumes a smooth evolution of a succession of continuous signals that can have a numerical or an analytical expression with respect to some parameters. The proposed Bayesian model takes into account the Gaussian properties of the noise and the smooth evolution of the successive signals. 
In addition, a gamma Markov random field prior is assigned to the signal energies and to the noise variances to account for their known properties. The resulting posterior distribution is maximized using a fast coordinate descent algorithm whose parameters are updated by analytical expressions. The proposed algorithm is tested on satellite altimetric data demonstrating good denoising results on both synthetic and real signals. In comparison with state-of-the-art algorithms, the proposed strategy provides a good compromise between denoising quality and computational cost. The proposed algorithm is also shown to improve the quality of the altimetric parameters when combined with a parameter estimation or a classification strategy.",\n}\n
\n
\n\n\n
\n This paper presents a novel Bayesian strategy for the estimation of smooth signals corrupted by Gaussian noise. The method assumes a smooth evolution of a succession of continuous signals that can have a numerical or an analytical expression with respect to some parameters. The proposed Bayesian model takes into account the Gaussian properties of the noise and the smooth evolution of the successive signals. In addition, a gamma Markov random field prior is assigned to the signal energies and to the noise variances to account for their known properties. The resulting posterior distribution is maximized using a fast coordinate descent algorithm whose parameters are updated by analytical expressions. The proposed algorithm is tested on satellite altimetric data demonstrating good denoising results on both synthetic and real signals. In comparison with state-of-the-art algorithms, the proposed strategy provides a good compromise between denoising quality and computational cost. The proposed algorithm is also shown to improve the quality of the altimetric parameters when combined with a parameter estimation or a classification strategy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Online Kernel Nonnegative Matrix Factorization.\n \n \n \n \n\n\n \n Zhu, F.; and Honeine, P.\n\n\n \n\n\n\n Signal Processing, 131: 143 - 153. February 2017.\n \n\n\n\n
\n\n\n\n \n \n \"Online link\n  \n \n \n \"Online paper\n  \n \n \n \"Online code\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{17.oknmf,\n   author =  "Fei Zhu and Paul Honeine",\n   title =  "Online Kernel Nonnegative Matrix Factorization",\n   journal =  "Signal Processing",\n   year  =  "2017",\n   volume =  "131",\n   pages =  "143 - 153",\n   month =  feb,\n   issn = "0165-1684",\n   doi = "10.1016/j.sigpro.2016.08.011",\n   url_link= "http://www.sciencedirect.com/science/article/pii/S0165168416301979",\n   url_paper =  "http://honeine.fr/paul/publi/17.okNMF.pdf",\n   url_code =  "http://www.honeine.fr/paul/publi/17.oknmf.zip",\n   keywords  =  "machine learning, hyperspectral, nonnegative matrix factorization, online learning, kernel machines, hyperspectral unmixing",\n   abstract = "Nonnegative matrix factorization (NMF) has become a prominent signal processing and data analysis technique. To address streaming data, online methods for NMF have been introduced recently, mainly restricted to the linear model. In this paper, we propose a framework for online nonlinear NMF, where the factorization is conducted in a kernel-induced feature space. By exploring recent advances in stochastic gradient descent and mini-batch strategies, the proposed algorithms have a controlled computational complexity. We derive several general update rules, in additive and multiplicative strategies, and detail the case of the Gaussian kernel. The performance of the proposed framework is validated on unmixing synthetic and real hyperspectral images, compared with state-of-the-art techniques.",\n}\n\n
\n
\n\n\n
\n Nonnegative matrix factorization (NMF) has become a prominent signal processing and data analysis technique. To address streaming data, online methods for NMF have been introduced recently, mainly restricted to the linear model. In this paper, we propose a framework for online nonlinear NMF, where the factorization is conducted in a kernel-induced feature space. By exploring recent advances in stochastic gradient descent and mini-batch strategies, the proposed algorithms have a controlled computational complexity. We derive several general update rules, in additive and multiplicative strategies, and detail the case of the Gaussian kernel. The performance of the proposed framework is validated on unmixing synthetic and real hyperspectral images, compared with state-of-the-art techniques.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Une véritable approche $\\ell_0$ pour l'apprentissage de dictionnaire.\n \n \n \n \n\n\n \n Liu, Y.; Canu, S.; Honeine, P.; and Ruan, S.\n\n\n \n\n\n\n In Actes du 26-ème Colloque GRETSI sur le Traitement du Signal et des Images, Juan-Les-Pins, France, 5 - 6 September 2017. \n \n\n\n\n
\n\n\n\n \n \n \"Une paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{17.gretsi.dictionnaire,\n   author =  "Yuan Liu and Stéphane Canu and Paul Honeine and Su Ruan",\n   title =  "Une véritable approche $\\ell_0$ pour l'apprentissage de dictionnaire",\n   booktitle =  "Actes du 26-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Juan-Les-Pins, France",\n   year  =  "2017",\n   month =  "5 - 6~" # sep,\n   keywords  =  "machine learning, sparsity",\n   acronym =  "GRETSI'17",\n   url_paper  =  "http://honeine.fr/paul/publi/17.gretsi.dictionnaire.pdf",\n   abstract = "Sparse representation learning has recently gained great success in signal and image processing, thanks to recent advances in dictionary learning. To this end, the L0-norm is often used to control the sparsity level. Nevertheless, optimization problems based on the L0-norm are non-convex and NP-hard. For these reasons, relaxation techniques have been attracting much attention from researchers, by first targeting approximate solutions (e.g. L1-norm, pursuit strategies). In contrast, this paper considers the exact L0-norm optimization problem and proves that it can be solved effectively, despite its complexity. The proposed method reformulates the problem as a Mixed-Integer Quadratic Program (MIQP) and obtains the global optimal solution by applying existing optimization software. Because the main difficulty of this approach is its computational time, two techniques are introduced that improve the computational speed. Finally, our method is applied to image denoising, which shows its feasibility and relevance compared to the state-of-the-art.",\n}\n\n
\n
\n\n\n
\n Sparse representation learning has recently gained great success in signal and image processing, thanks to recent advances in dictionary learning. To this end, the L0-norm is often used to control the sparsity level. Nevertheless, optimization problems based on the L0-norm are non-convex and NP-hard. For these reasons, relaxation techniques have been attracting much attention from researchers, by first targeting approximate solutions (e.g. L1-norm, pursuit strategies). In contrast, this paper considers the exact L0-norm optimization problem and proves that it can be solved effectively, despite its complexity. The proposed method reformulates the problem as a Mixed-Integer Quadratic Program (MIQP) and obtains the global optimal solution by applying existing optimization software. Because the main difficulty of this approach is its computational time, two techniques are introduced that improve the computational speed. Finally, our method is applied to image denoising, which shows its feasibility and relevance compared to the state-of-the-art.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Classification paramétrique multi-classes à croyance.\n \n \n \n \n\n\n \n AlShamaa, D.; Chehade, F.; and Honeine, P.\n\n\n \n\n\n\n In Actes du 26-ème Colloque GRETSI sur le Traitement du Signal et des Images, Juan-Les-Pins, France, 5 - 6 September 2017. \n \n\n\n\n
\n\n\n\n \n \n \"Classification paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{17.gretsi.croyance,\n   author =  "Daniel AlShamaa and Farah Chehade and Paul Honeine",\n   title =  "Classification paramétrique multi-classes à croyance",\n   booktitle =  "Actes du 26-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Juan-Les-Pins, France",\n   year  =  "2017",\n   month =   "5 - 6~" # sep,\n   keywords  =  "machine learning, wireless sensor networks",\n   acronym =  "GRETSI'17",\n   url_paper  =  "http://honeine.fr/paul/publi/17.gretsi.croyance.pdf",\n   abstract = "The aim of parametric classification is to predict the target class of a new sample, under the hypothesis of a known fitted distribution. A major drawback of this approach is the uncertainty due to the imprecise modeling of the training samples. For this purpose, a belief functions framework is provided to take uncertainties into account. The proposed method investigates belief functions theory to assign a confidence weight to each class for any new sample. This approach yields a confidence-weighted parametric classification method for multi-class problems. The performance of the proposed method is validated by experiments on real data for indoor localization and for facial image recognition.",\n}\n\n\n
\n
\n\n\n
\n The aim of parametric classification is to predict the target class of a new sample, under the hypothesis of a known fitted distribution. A major drawback of this approach is the uncertainty due to the imprecise modeling of the training samples. For this purpose, a belief functions framework is provided to take uncertainties into account. The proposed method investigates belief functions theory to assign a confidence weight to each class for any new sample. This approach yields a confidence-weighted parametric classification method for multi-class problems. The performance of the proposed method is validated by experiments on real data for indoor localization and for facial image recognition.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2016\n \n \n (16)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n The Role of One-Class Classification in Detecting Cyberattacks in Critical Infrastructures.\n \n \n \n \n\n\n \n Nader, P.; Honeine, P.; and Beauseroy, P.\n\n\n \n\n\n\n In Panayiotou, C. G.; Ellinas, G.; Kyriakides, E.; and Polycarpou, M. M., editor(s), Critical Information Infrastructures Security, 25, pages 244 - 255. Springer, February 2016.\n \n\n\n\n
\n\n\n\n \n \n \"The link\n  \n \n \n \"The paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@incollection{16.critis.chapter,\n   author =  "Patric Nader and Paul Honeine and Pierre Beauseroy",\n   title =  "The Role of One-Class Classification in Detecting Cyberattacks in Critical Infrastructures",\n   booktitle =  "Critical Information Infrastructures Security",\n   editor={Christos G. Panayiotou and Georgios Ellinas and Elias Kyriakides and Marios M. Polycarpou},\n   Publisher = {Springer},\n   chapter = "25",\n   isbn =  "978-3-319-31663-5",\n   Pages = "244 - 255",\n   year  =  "2016",\n   month = feb,\n   url_link={http://www.springer.com/us/book/9783319316635},\n   url_paper  =  "http://honeine.fr/paul/publi/16.critis.chapter.pdf",\n   keywords  =  "machine learning, one-class, cybersecurity, critical infrastructures, intrusion detection, one-class classification, SCADA systems",\n   acronym =  "CRITIS",\n   abstract = "The security of critical infrastructures has gained a lot of attention in the past few years with the growth of cyberthreats and the diversity of cyberattacks. Although traditional IDSs frequently update their databases of known attacks, new complex attacks are generated every day to circumvent security systems and to make their detection nearly impossible. This paper outlines the importance of one-class classification algorithms in detecting malicious cyberattacks in critical infrastructures. The role of machine learning algorithms is complementary to IDS and firewalls, and the objective of this work is to detect intentional intrusions once they have already bypassed these security systems. Two approaches are investigated: Support Vector Data Description and Kernel Principal Component Analysis. The impact of the metric in kernels is investigated, and a heuristic for choosing the bandwidth parameter is proposed. Tests are conducted on real data with several types of cyberattacks.",\n}\n
\n
\n\n\n
\n The security of critical infrastructures has gained a lot of attention in the past few years with the growth of cyberthreats and the diversity of cyberattacks. Although traditional IDSs frequently update their databases of known attacks, new complex attacks are generated every day to circumvent security systems and to make their detection nearly impossible. This paper outlines the importance of one-class classification algorithms in detecting malicious cyberattacks in critical infrastructures. The role of machine learning algorithms is complementary to IDS and firewalls, and the objective of this work is to detect intentional intrusions once they have already bypassed these security systems. Two approaches are investigated: Support Vector Data Description and Kernel Principal Component Analysis. The impact of the metric in kernels is investigated, and a heuristic for choosing the bandwidth parameter is proposed. Tests are conducted on real data with several types of cyberattacks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hyperspectral Unmixing in Presence of Endmember Variability, Nonlinearity or Mismodelling Effects.\n \n \n \n \n\n\n \n Halimi, A.; Honeine, P.; and Bioucas-Dias, J.\n\n\n \n\n\n\n IEEE Transactions on Image Processing, 25(10): 4565 - 4579. October 2016.\n \n\n\n\n
\n\n\n\n \n \n \"Hyperspectral link\n  \n \n \n \"Hyperspectral paper\n  \n \n \n \"Hyperspectral code\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{16.variability,
  author    = "Abderrahim Halimi and Paul Honeine and José Bioucas-Dias",
  title     = "Hyperspectral Unmixing in Presence of Endmember Variability, Nonlinearity or Mismodelling Effects",
  journal   = "IEEE Transactions on Image Processing",
  year      = "2016",
  volume    = "25",
  number    = "10",
  pages     = "4565 - 4579",
  month     = oct,
  doi       = "10.1109/TIP.2016.2590324",
  url_link  = "https://ieeexplore.ieee.org/document/7508937",
  url_paper = "http://www.honeine.fr/paul/publi/16.variability.pdf",
  url_code  = "http://www.honeine.fr/paul/publi/16.variability.zip",
  keywords  = "Bayesian inference, hyperspectral, Hyperspectral imagery, endmember variability, nonlinear spectral unmixing, robust unmixing, mismodelling effect, Bayesian estimation, coordinate descent algorithm, Gaussian process, Gamma Markov random field, Hyperspectral imaging, Computational modeling, Bayes methods, Adaptation models, Inference algorithms, Signal processing algorithms, Covariance matrices",
  abstract  = "This paper presents three hyperspectral mixture models jointly with Bayesian algorithms for supervised hyperspectral unmixing. Based on the residual component analysis model, the proposed general formulation assumes the linear model to be corrupted by an additive term whose expression can be adapted to account for nonlinearities (NLs), endmember variability (EV), or mismodeling effects (MEs). The NL effect is introduced by considering a polynomial expression that is related to bilinear models. The proposed new formulation of EV accounts for shape and scale endmember changes while enforcing a smooth spectral/spatial variation. The ME formulation considers the effect of outliers and copes with some types of EV and NL. The known constraints on the parameter of each observation model are modeled via suitable priors. The posterior distribution associated with each Bayesian model is optimized using a coordinate descent algorithm, which allows the computation of the maximum a posteriori estimator of the unknown model parameters. The proposed mixture and Bayesian models and their estimation algorithms are validated on both synthetic and real images showing competitive results regarding the quality of the inferences and the computational complexity, when compared with the state-of-the-art algorithms.",
}
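A toy sketch of the residual component analysis (RCA) idea described in this abstract: each pixel is modeled as a linear mixture plus an additive term that captures, e.g., bilinear nonlinearity. All sizes, spectra, and coefficients below are made up for illustration; the paper's Bayesian estimation is not reproduced.

```python
import numpy as np

# RCA-style observation model: y = M @ a + phi + noise, where the additive
# term phi here is a simple bilinear endmember-interaction term.
rng = np.random.default_rng(0)
L, R = 50, 3                                 # spectral bands, endmembers
M = rng.uniform(0, 1, (L, R))                # endmember spectra as columns
a = np.array([0.5, 0.3, 0.2])                # abundances: nonnegative, sum to one

linear = M @ a
phi = sum(0.1 * M[:, i] * M[:, j]            # pairwise bilinear interactions
          for i in range(R) for j in range(i + 1, R))
y = linear + phi + rng.normal(0, 0.001, L)

residual = y - linear                        # what the additive RCA term must explain
```

The residual is dominated by the bilinear term, which is what an NL/EV/ME model of the additive component is meant to recover.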
Bi-objective Nonnegative Matrix Factorization: Linear Versus Kernel-based Models.
Zhu, F.; and Honeine, P.
IEEE Transactions on Geoscience and Remote Sensing, 54(7): 4012-4022. July 2016.
@ARTICLE{16.tgrs.nmf,
  author    = "Fei Zhu and Paul Honeine",
  title     = "Bi-objective Nonnegative Matrix Factorization: Linear Versus Kernel-based Models",
  journal   = "IEEE Transactions on Geoscience and Remote Sensing",
  year      = "2016",
  volume    = "54",
  number    = "7",
  pages     = "4012 - 4022",
  month     = jul,
  doi       = "10.1109/TGRS.2016.2535298",
  url_link  = "https://ieeexplore.ieee.org/document/7448928",
  url_paper = "http://honeine.fr/paul/publi/16.biobjectiveNMF.pdf",
  url_code  = "http://www.honeine.fr/paul/publi/16.biobjectiveNMF.rar",
  keywords  = "machine learning, hyperspectral, hyperspectral image, kernel machines, nonnegative matrix factorization (NMF), Pareto optimal, unmixing problem, linear programming, Hyperspectral imaging, Pareto optimization, matrix decomposition, approximation theory, feature extraction, hyperspectral imaging, Pareto optimisation, Pareto front approximation, multiobjective optimization, hyperspectral image processing, feature extraction technique, biobjective NMF, biobjective nonnegative matrix factorization",
  abstract  = "Nonnegative matrix factorization (NMF) is a powerful class of feature extraction techniques that has been successfully applied in many fields, particularly in signal and image processing. Current NMF techniques have been limited to a single-objective optimization problem, in either its linear or nonlinear kernel-based formulation. In this paper, we propose to revisit the NMF as a multiobjective problem, particularly a biobjective one, where the objective functions defined in both input and feature spaces are taken into account. By taking the advantage of the sum-weighted method from the literature of multiobjective optimization, the proposed biobjective NMF determines a set of nondominated, Pareto optimal, solutions. Moreover, the corresponding Pareto front is approximated and studied. Experimental results on unmixing synthetic and real hyperspectral images confirm the efficiency of the proposed biobjective NMF compared with the state-of-the-art methods.",
}
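A minimal sketch of the sum-weighted scalarization behind this bi-objective formulation: sweep a weight w over [0, 1], minimize w·J1 + (1-w)·J2, and collect the minimizers as a Pareto front approximation. The two objectives below are toy 1-D quadratics standing in for the input-space and feature-space NMF costs (so each scalarized problem has the closed-form minimizer 1 - w); the actual NMF updates are not reproduced.

```python
import numpy as np

def J1(x):            # stand-in for the input-space (linear) cost
    return x ** 2

def J2(x):            # stand-in for the feature-space (kernel) cost
    return (x - 1.0) ** 2

weights = np.linspace(0.0, 1.0, 11)
solutions = 1.0 - weights                     # argmin_x  w*J1(x) + (1-w)*J2(x)
front = [(J1(x), J2(x)) for x in solutions]   # non-dominated (Pareto) points
```

Along the front, improving one objective necessarily worsens the other, which is exactly the non-dominance property the paper exploits.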
Gas Sources Parameters Estimation Using Machine Learning in WSNs.
Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; and Snoussi, H.
IEEE Sensors Journal, 16(14): 5795-5804. July 2016.
@ARTICLE{16.wsn.diffusion,
  author    = "Sandy Mahfouz and Farah Mourad-Chehade and Paul Honeine and Joumana Farah and Hichem Snoussi",
  title     = "Gas Sources Parameters Estimation Using Machine Learning in {WSNs}",
  journal   = "IEEE Sensors Journal",
  year      = "2016",
  volume    = "16",
  number    = "14",
  pages     = "5795 - 5804",
  month     = jul,
  doi       = "10.1109/JSEN.2016.2569559",
  url_link  = "https://ieeexplore.ieee.org/document/7470596",
  url_paper = "http://honeine.fr/paul/publi/16.wsn.diffusion.pdf",
  keywords  = "machine learning, wireless sensor networks, gas diffusion, one-class classification, ridge regression, source parameter estimation, gas sensors, pollution measurement, explosions, multiple gas source parameter estimation",
  abstract  = "This paper introduces an original clusterized framework for the detection and estimation of the parameters of multiple gas sources in wireless sensor networks. The proposed method consists of defining a kernel-based detector that can detect gas releases within the network's clusters using concentration measures collected regularly from the network. Then, we define two kernel-based models that accurately estimate the gas release parameters, such as the sources locations and their release rates, using the collected concentrations.",
}
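A hedged sketch of the kernel-based estimation idea in this abstract: learn a Gaussian-kernel ridge regression from vectors of sensor concentrations to the source location. The toy concentration/distance decay law, the sensor layout, and all parameters below are made up; the paper's clusterized detector and release-rate model are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
sensors = rng.uniform(0, 10, (8, 2))             # sensor positions in a 10x10 area

def concentrations(src):
    d = np.linalg.norm(sensors - src, axis=1)
    return 1.0 / (1.0 + d ** 2)                  # toy decay of concentration with distance

X = rng.uniform(0, 10, (300, 2))                 # training source locations
C = np.array([concentrations(s) for s in X])     # corresponding concentration vectors

def gram(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))        # Gaussian kernel Gram matrix

# kernel ridge regression: alpha solves (K + lambda*I) alpha = X
alpha = np.linalg.solve(gram(C, C) + 1e-3 * np.eye(len(C)), X)

src_true = np.array([4.0, 6.0])
src_hat = (gram(concentrations(src_true)[None, :], C) @ alpha)[0]
```

The same regression machinery applies to any release parameter (location or rate) given enough labeled concentration snapshots.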
Estimating the Intrinsic Dimension of Hyperspectral Images Using a Noise-Whitened Eigengap Approach.
Halimi, A.; Honeine, P.; Kharouf, M.; Richard, C.; and Tourneret, J.-Y.
IEEE Transactions on Geoscience and Remote Sensing, 54(7): 3811-3821. July 2016.
@ARTICLE{16.eigengap,
  author    = "Abderrahim Halimi and Paul Honeine and Malika Kharouf and Cédric Richard and Jean-Yves Tourneret",
  title     = "Estimating the Intrinsic Dimension of Hyperspectral Images Using a Noise-Whitened Eigengap Approach",
  journal   = "IEEE Transactions on Geoscience and Remote Sensing",
  year      = "2016",
  volume    = "54",
  number    = "7",
  pages     = "3811 - 3821",
  month     = jul,
  doi       = "10.1109/TGRS.2016.2528298",
  url_link  = "https://ieeexplore.ieee.org/document/7425180",
  url_paper = "http://honeine.fr/paul/publi/16.eigengap.pdf",
  url_code  = "http://honeine.fr/paul/publi/16.eigengap.zip",
  keywords  = "statistics, hyperspectral, Eigengap approach, endmember number, hyperspectral imaging, linear spectral mixture, random matrix theory, sample covariance matrix, covariance matrices, eigenvalues and eigenfunctions, intrinsic dimension estimation, hyperspectral images, noise-whitened eigengap approach, linear mixture models, hyperspectral data representation, endmembers estimation, spiked population model, correlated noise",
  abstract  = "Linear mixture models are commonly used to represent a hyperspectral data cube as linear combinations of endmember spectra. However, determining the number of endmembers for images embedded in noise is a crucial task. This paper proposes a fully automatic approach for estimating the number of endmembers in hyperspectral images. The estimation is based on recent results of random matrix theory related to the so-called spiked population model. More precisely, we study the gap between successive eigenvalues of the sample covariance matrix constructed from high-dimensional noisy samples. The resulting estimation strategy is fully automatic and robust to correlated noise owing to the consideration of a noise-whitening step. This strategy is validated on both synthetic and real images. The experimental results are very promising and show the accuracy of this algorithm with respect to state-of-the-art algorithms.",
}
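A minimal sketch of the eigengap principle on synthetic data (the paper's noise-whitening step and random-matrix thresholds are not reproduced): with R signal components, the sample covariance has R dominant eigenvalues, and the signal/noise boundary shows up as the largest ratio between successive eigenvalues. All sizes and noise levels are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, R = 30, 2000, 4                         # bands, pixels, true components
M = rng.uniform(0, 1, (L, R))                 # endmember spectra
A = rng.uniform(0, 1, (N, R))                 # (unconstrained) abundances
Y = A @ M.T + rng.normal(0, 0.01, (N, L))     # linear mixtures + white noise

# eigenvalues of the sample covariance, in descending order
eig = np.sort(np.linalg.eigvalsh(np.cov(Y, rowvar=False)))[::-1]
R_hat = int(np.argmax(eig[:-1] / eig[1:])) + 1   # largest successive-eigenvalue ratio
```

Here the ratio at the boundary (signal eigenvalue over noise eigenvalue) dwarfs the ratios within either group, so the argmax recovers R.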
In-network Principal Component Analysis and Diffusion Strategies.
Ghadban, N.; Honeine, P.; Mourad-Chehade, F.; Francis, C.; and Farah, J.
International Journal of Wireless Information Networks, 23(2): 97-111. June 2016.
@ARTICLE{16.innetworkPCA,
  author    = "Nisrine Ghadban and Paul Honeine and Farah Mourad-Chehade and Clovis Francis and Joumana Farah",
  title     = "In-network Principal Component Analysis and Diffusion Strategies",
  journal   = "International Journal of Wireless Information Networks",
  year      = "2016",
  volume    = "23",
  number    = "2",
  pages     = "97 - 111",
  month     = jun,
  doi       = "10.1007/s10776-016-0308-1",
  url_link  = "http://link.springer.com/article/10.1007/s10776-016-0308-1",
  url_paper = "http://honeine.fr/paul/publi/15.pca_innetwork.pdf",
  keywords  = "machine learning, wireless sensor networks, principal component analysis, network, adaptive learning, distributed processing, dimensionality reduction",
  abstract  = "Principal component analysis (PCA) is a very well-known statistical analysis technique. In its conventional formulation, it requires the eigen-decomposition of the sample covariance matrix. Due to its high-computational complexity and large memory requirements, the estimation of the covariance matrix and its eigen-decomposition do not scale up when dealing with big data, such as in large-scale networks. Numerous studies have been conducted to overcome this issue, often by partitioning the unknown matrix. In this paper, we propose a novel framework for estimating the principal axes, iteratively and in a distributed in-network scheme, without the need to estimate the covariance matrix. To this end, a coupling is operated between criteria for iterative PCA and several strategies for in-network processing. The investigated strategies can be grouped in two classes, noncooperative and cooperative such as information diffusion and consensus strategies. Theoretical results on the performance of these strategies are provided, as well as a convergence analysis. The performance of the proposed approach for in-network PCA is illustrated on diverse applications, such as image processing and time series in wireless sensor networks, with a comparison to state-of-the-art techniques.",
}
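A sketch of covariance-free iterative PCA in the spirit of this abstract, using Oja's rule (a classic instance of such criteria; the paper's in-network diffusion and consensus strategies are not reproduced). The principal axis is refined one sample at a time, without ever forming a covariance matrix; the data and learning rate are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0, 1, (5000, 3)) * np.array([3.0, 1.0, 0.5])   # axis 0 dominant

w = rng.normal(0, 1, 3)
w /= np.linalg.norm(w)
eta = 0.01                                   # learning rate (illustrative value)
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)               # Oja's update
    w /= np.linalg.norm(w)                   # renormalize for numerical stability

alignment = abs(w[0])                        # |cosine| with the true first axis
```

In an in-network setting, each node would run such an update on its own samples and exchange its current estimate of w with neighbors, which is where the diffusion and consensus strategies come in.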
Non-parametric and semi-parametric RSSI/distance modeling for target tracking in wireless sensor networks.
Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; and Snoussi, H.
IEEE Sensors Journal, 16(7): 2115-2126. April 2016.
@ARTICLE{16.wsn.semiparam,
  author    = "Sandy Mahfouz and Farah Mourad-Chehade and Paul Honeine and Joumana Farah and Hichem Snoussi",
  title     = "Non-parametric and semi-parametric {RSSI}/distance modeling for target tracking in wireless sensor networks",
  journal   = "IEEE Sensors Journal",
  year      = "2016",
  volume    = "16",
  number    = "7",
  pages     = "2115 - 2126",
  month     = apr,
  doi       = "10.1109/JSEN.2015.2510020",
  url_link  = "https://ieeexplore.ieee.org/document/7360093",
  url_paper = "http://honeine.fr/paul/publi/16.wsn.semiparam.pdf",
  keywords  = "machine learning, wireless sensor networks, distance estimation, Kalman filter, particle filter, radio-fingerprints, RSSI, target tracking, nonparametric RSSI/distance modeling, semiparametric RSSI/distance modeling, radio-fingerprints database, kernel-based learning methods, nonparametric regression model, semiparametric regression model, log-distance theoretical propagation model, nonlinear fluctuation term, moving target tracking, received signal strength indicators",
  abstract  = "This paper introduces two main contributions to the wireless sensor network (WSN) society. The first one consists of modeling the relationship between the distances separating sensors and the received signal strength indicators (RSSIs) exchanged by these sensors in an indoor WSN. In this context, two models are determined using a radio-fingerprints database and kernel-based learning methods. The first one is a non-parametric regression model, while the second one is a semi-parametric regression model that combines the well-known log-distance theoretical propagation model with a non-linear fluctuation term. As for the second contribution, it consists of tracking a moving target in the network using the estimated RSSI/distance models. The target's position is estimated by combining acceleration information and the estimated distances separating the target from sensors having known positions, using either the Kalman filter or the particle filter. A fully comprehensive study of the choice of parameters of the proposed distance models and their performances is provided, as well as a study of the performance of the two proposed tracking methods. Comparisons with recently proposed methods are also provided.",
}
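A sketch of the parametric half of the semi-parametric model above: the log-distance propagation model RSSI = P0 - 10·n·log10(d/d0), fitted by linear least squares. The kernel-based non-linear fluctuation term that the paper adds on top is omitted, and all values below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
d0 = 1.0                                     # reference distance (m)
P0_true, n_true = -40.0, 2.5                 # reference RSSI (dBm), path-loss exponent
d = rng.uniform(1, 50, 300)                  # sensor-to-sensor distances (m)
rssi = P0_true - 10 * n_true * np.log10(d / d0) + rng.normal(0, 1.0, 300)

# linear least squares: rssi ~ P0 * 1 + n * (-10 * log10(d/d0))
A = np.column_stack([np.ones_like(d), -10 * np.log10(d / d0)])
(P0_hat, n_hat), *_ = np.linalg.lstsq(A, rssi, rcond=None)
```

Inverting the fitted model then turns a measured RSSI into a distance estimate, which is what feeds the Kalman or particle filter in the tracking stage.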
Entropy of Overcomplete Kernel Dictionaries.
Honeine, P.
Bulletin of Mathematical Sciences and Applications, 16: 1-19. August 2016.
@ARTICLE{16.sparse.entropy,
  author    = "Paul Honeine",
  title     = "Entropy of Overcomplete Kernel Dictionaries",
  journal   = "Bulletin of Mathematical Sciences and Applications",
  year      = "2016",
  volume    = "16",
  pages     = "1 - 19",
  month     = aug,
  url_link  = "https://www.scipress.com/BMSA.16.1",
  url_paper = "http://honeine.fr/paul/publi/16.entropy.pdf",
  keywords  = "machine learning, sparsity, adaptive filtering, dictionary learning, generalized Rényi entropy, Shannon entropy, Tsallis entropy",
  abstract  = "In signal analysis and synthesis, linear approximation theory considers a linear decomposition of any given signal in a set of atoms, collected into a so-called dictionary. Relevant sparse representations are obtained by relaxing the orthogonality condition of the atoms, yielding overcomplete dictionaries with an extended number of atoms. More generally than the linear decomposition, overcomplete kernel dictionaries provide an elegant nonlinear extension by defining the atoms through a mapping kernel function (e.g., the gaussian kernel). Models based on such kernel dictionaries are used in neural networks, gaussian processes and online learning with kernels. The quality of an overcomplete dictionary is evaluated with a diversity measure, such as the distance, the approximation, the coherence and the Babel measures. In this paper, we develop a framework to examine overcomplete kernel dictionaries with the entropy from information theory. Indeed, a higher value of the entropy is associated to a further uniform spread of the atoms over the space. For each of the aforementioned diversity measures, we derive lower bounds on the entropy. Several definitions of the entropy are examined, with an extensive analysis in both the input space and the mapped feature space.",
}

% At UTT
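A sketch of one diversity measure named in this abstract: the coherence of a kernel dictionary, i.e., the largest kernel value between distinct atoms. With a Gaussian kernel, kappa(x, x) = 1, so the atoms are unit-norm in feature space and the kernel value equals their feature-space inner product. The atom locations and bandwidth below are made up.

```python
import numpy as np

def kappa(a, b, sigma=1.0):
    # Gaussian kernel: feature-space inner product of the atoms at a and b
    return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * sigma ** 2))

atoms = [np.array([0.0]), np.array([1.5]), np.array([3.5])]
coherence = max(kappa(atoms[i], atoms[j])
                for i in range(len(atoms))
                for j in range(i + 1, len(atoms)))
```

Spreading the atoms apart lowers the coherence; the paper's contribution is to lower-bound the dictionary's entropy in terms of such diversity measures.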
Filtering smooth altimetric signals using a Bayesian algorithm.
Halimi, A.; Buller, G. S.; McLaughlin, S.; and Honeine, P.
In Proc. 23rd European Conference on Signal Processing (EUSIPCO), pages 2385-2389, Budapest, Hungary, 29 August–2 September 2016. IEEE.
@INPROCEEDINGS{16.eusipco.altimetry,
  author    = "Abderrahim Halimi and Gerald S. Buller and Steve McLaughlin and Paul Honeine",
  title     = "Filtering smooth altimetric signals using a Bayesian algorithm",
  booktitle = "Proc. 23rd European Conference on Signal Processing (EUSIPCO)",
  address   = "Budapest, Hungary",
  year      = "2016",
  month     = "29~" # aug # "--" # "2~" # sep,
  pages     = {2385-2389},
  acronym   = "EUSIPCO",
  publisher = "IEEE",
  doi       = "10.1109/EUSIPCO.2016.7760676",
  url_link  = "https://ieeexplore.ieee.org/document/7760676",
  url_paper = "http://honeine.fr/paul/publi/16.eusipco.altimetry.pdf",
  ISSN      = {2076-1465},
  keywords  = {Bayesian inference, altimetry, filtering theory, Gaussian noise, Markov processes, smooth altimetric signals filtering, Bayesian algorithm, Bayesian strategy, Bayesian model, Gaussian properties, gamma Markov random, posterior distribution, parameter estimation strategy, altimetric parameters, Signal processing algorithms, Bayes methods, Logic gates, Satellites, Estimation, Correlation, coordinate descent algorithm, gamma Markov random fields},
  abstract  = {This paper presents a new Bayesian strategy for the estimation of smooth signals corrupted by Gaussian noise. The method assumes a smooth evolution of a succession of continuous signals that can have a numerical or an analytical expression with respect to some parameters. The Bayesian model proposed takes into account the Gaussian properties of the noise and the smooth evolution of the successive signals. In addition, a gamma Markov random field prior is assigned to the signal energies and to the noise variances to account for their known properties. The resulting posterior distribution is maximized using a fast coordinate descent algorithm whose parameters are updated by analytical expressions. The proposed algorithm is tested on satellite altimetric data demonstrating good denoising results on both synthetic and real signals. The proposed algorithm is also shown to improve the quality of the altimetric parameters when combined with a parameter estimation strategy.},
}
Nonlinear hyperspectral unmixing accounting for spatial illumination variability.
Halimi, A.; Honeine, P.; Bioucas-Dias, J.; Buller, G. S.; and McLaughlin, S.
In Proc. IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Los Angeles, CA, United States, 21-24 August 2016.
@INPROCEEDINGS{16.whispers.variability,
  author    = "Abderrahim Halimi and Paul Honeine and José Bioucas-Dias and Gerald S. Buller and Steve McLaughlin",
  title     = "Nonlinear hyperspectral unmixing accounting for spatial illumination variability",
  booktitle = "Proc. IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)",
  address   = "Los Angeles, CA, United States",
  year      = "2016",
  month     = "21 - 24~" # aug,
  acronym   = "WHISPERS",
  doi       = {10.1109/WHISPERS.2016.8071750},
  ISSN      = {2158-6276},
  url_paper = "http://honeine.fr/paul/publi/16.whispers.variability_draft.pdf",
  keywords  = {Bayesian inference, hyperspectral, Bayes methods, computational complexity, hyperspectral imaging, image processing, maximum likelihood estimation, coordinate descent algorithm, unknown model parameters, estimation algorithm, spatial illumination variability, supervised algorithm, residual component analysis model, linear model, additive term, bilinear interactions, Bayesian strategy, prior distributions, maximum a posteriori estimator, posterior distribution, nonlinear hyperspectral unmixing, Computational modeling, Lighting, Inference algorithms, Estimation, Mixture models, nonlinear unmixing, Bayesian estimation, coordinate descent, gamma Markov random field},
  abstract  = {This paper presents a new supervised algorithm for nonlinear hyperspectral unmixing. Based on the residual component analysis model, the proposed model assumes the linear model to be corrupted by an additive term that accounts for bilinear interactions between the endmembers. The proposed formulation considers also the effect of the spatial illumination variability. The parameters of the proposed model are estimated using a Bayesian strategy. This approach introduces prior distributions on the parameters of interest to take into account their known constraints. The resulting posterior distribution is optimized using a coordinate descent algorithm which allows us to approximate the maximum a posteriori estimator of the unknown model parameters. The proposed model and estimation algorithm are validated on both synthetic and real images showing competitive results regarding the quality of the inferences and the computational complexity when compared to the state-of-the-art algorithms.},
}
ADMM for Maximum Correntropy Criterion.
Zhu, F.; Halimi, A.; Honeine, P.; Chen, B.; and Zheng, N.
In Proc. 28th (INNS and IEEE-CIS) International Joint Conference on Neural Networks (IJCNN), pages 1420-1427, Vancouver, Canada, 24-29 July 2016.
@INPROCEEDINGS{16.ijcnn,\n   author =  "Fei Zhu and Abderrahim Halimi and Paul Honeine and Badong Chen and Nanning Zheng",\n   title =  {{ADMM} for Maximum Correntropy Criterion},\n   booktitle =  "Proc. 28th (INNS and IEEE-CIS) International Joint Conference on Neural Networks",\n   address =  "Vancouver, Canada",\n   year  =  "2016",\n   month =  "24 - 29~" # jul,\n   pages={1420-1427}, \n   acronym =  "IJCNN",\n   url_paper  =  "http://honeine.fr/paul/publi/16.ijcnn.correntropy.pdf",\n   abstract={The correntropy provides a robust criterion for outlier-insensitive machine learning, and its maximisation has been increasingly investigated in signal and image processing. In this paper, we investigate the problem of unmixing hyperspectral images, namely decomposing each pixel/spectrum of a given image as a linear combination of other pixels/spectra called endmembers. The coefficients of the combination need to be estimated subject to the nonnegativity and the sum-to-one constraints. In practice, some spectral bands suffer from low signal-to-noise ratio due to acquisition noise and atmospheric effects, thus requiring robust techniques for the unmixing problem. In this work, we cast the unmixing problem as the maximization of a correntropy criterion, and provide a relevant solution using the alternating direction method of multipliers (ADMM) method. 
Finally, the relevance of the proposed approach is validated on synthetic and real hyperspectral images, demonstrating that the correntropy-based unmixing is robust to outlier bands.}, \n   keywords={feature extraction, hyperspectral imaging, image denoising, image reconstruction, learning (artificial intelligence), maximum entropy methods, alternating direction method of multipliers, ADMM method, maximum correntropy criterion, outlier-insensitive machine learning, hyperspectral image unmixing, image decomposition, pixel linear combination, spectra linear combination, endmember extraction, nonnegativity constraints, sum-to-one constraints, signal-to-noise ratio, noise acquisition, atmospheric effects, Hyperspectral imaging, Robustness, Kernel, Optimization, Bandwidth, Linear programming, Correntropy, maximum correntropy estimation, alternating direction method of multipliers, hyperspectral image, unmixing problem}, \n   doi={10.1109/IJCNN.2016.7727365}, \n   ISSN={2161-4407}, \n}\n
\n
\n\n\n
\n The correntropy provides a robust criterion for outlier-insensitive machine learning, and its maximisation has been increasingly investigated in signal and image processing. In this paper, we investigate the problem of unmixing hyperspectral images, namely decomposing each pixel/spectrum of a given image as a linear combination of other pixels/spectra called endmembers. The coefficients of the combination need to be estimated subject to the nonnegativity and the sum-to-one constraints. In practice, some spectral bands suffer from low signal-to-noise ratio due to acquisition noise and atmospheric effects, thus requiring robust techniques for the unmixing problem. In this work, we cast the unmixing problem as the maximization of a correntropy criterion, and provide a relevant solution using the alternating direction method of multipliers (ADMM) method. Finally, the relevance of the proposed approach is validated on synthetic and real hyperspectral images, demonstrating that the correntropy-based unmixing is robust to outlier bands.\n
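The robustness property claimed in this abstract can be illustrated with a toy sketch: on a signal with a few heavily corrupted bands, the correntropy criterion (here, a mean of Gaussian kernels of the band-wise residuals) scores the true coefficients higher than a least-squares fit pulled away by the outliers. This is only a hedged illustration of the criterion, not the paper's ADMM solver, and all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setting: y = M a + noise over L bands, with a few corrupted "outlier" bands.
L = 100
M = rng.normal(size=(L, 2))
a_true = np.array([0.7, 0.3])
y = M @ a_true + 0.01 * rng.normal(size=L)
y[:5] += 5.0                                  # heavy corruption on 5 bands

def correntropy(residual, sigma=0.1):
    """Correntropy criterion: mean Gaussian kernel of the band-wise residuals."""
    return float(np.mean(np.exp(-residual**2 / (2 * sigma**2))))

# Least squares is dominated by the squared outlier residuals...
a_ls, *_ = np.linalg.lstsq(M, y, rcond=None)

# ...whereas each corrupted band contributes ~exp(-large) ~ 0 to the
# correntropy, so the true coefficients score higher under this criterion.
score_true = correntropy(y - M @ a_true)
score_ls = correntropy(y - M @ a_ls)
```

Maximizing this criterion subject to the nonnegativity and sum-to-one constraints is what the paper's ADMM formulation handles.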
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Zoning-based Localization in Indoor Sensor Networks Using Belief Functions Theory.\n \n \n \n \n\n\n \n AlShamaa, D.; Chehade, F.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 17th IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Edinburgh, UK, 3 - 6 July 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Zoning-based link\n  \n \n \n \"Zoning-based paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{16.spawc,\n   author =  "Daniel AlShamaa and Farah Chehade and Paul Honeine",\n   title =  "Zoning-based Localization in Indoor Sensor Networks Using Belief Functions Theory",\n   booktitle =  "Proc. 17th IEEE International Workshop on Signal Processing Advances in Wireless Communications (SPAWC)",\n   address =  "Edinburgh, UK",\n   year  =  "2016",\n   month =  "3 - 6~" # jul,\n   acronym =  "SPAWC",\n   url_link= "https://ieeexplore.ieee.org/document/7536787",\n   url_paper  =  "http://honeine.fr/paul/publi/16.spawc.belief.pdf",\n   abstract={Localization is an essential issue in wireless sensor networks to process the information retrieved by sensor nodes. This paper presents an indoor zoning-based localization technique that works efficiently in real environments. The targeted area is composed of several zones, the objective being to find the zone where the mobile node is instantly located. The proposed approach collects first strengths of received WiFi signals from neighboring access points and builds a fingerprints database. It then uses belief functions theory to combine all measured data and define an evidence framework, to be used afterwards for estimating the most probable node's zone. Real experiments demonstrate the effectiveness of this approach and its competence compared to state-of-the-art methods.}, \n   keywords={wireless LAN, wireless sensor networks, zoning-based localization, indoor sensor networks, belief functions theory, wireless sensor networks, sensor nodes, mobile node, WiFi signals, evidence framework, IEEE 802.11 Standard, Mobile nodes, Databases, Indoor environments, Buildings, Wireless sensor networks, Belief functions, indoor environment, radio fingerprints, wireless sensor networks, zoning}, \n   doi={10.1109/SPAWC.2016.7536787}, \n}\n
\n
\n\n\n
\n Localization is an essential issue in wireless sensor networks for processing the information retrieved by sensor nodes. This paper presents an indoor zoning-based localization technique that works efficiently in real environments. The targeted area is composed of several zones, the objective being to determine the zone where the mobile node is currently located. The proposed approach first collects the strengths of WiFi signals received from neighboring access points and builds a fingerprint database. It then uses belief functions theory to combine all measured data and define an evidence framework, used afterwards to estimate the node's most probable zone. Real experiments demonstrate the effectiveness of this approach and its competitiveness compared to state-of-the-art methods.\n
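The combination step sketched in this abstract (fusing evidence about zones coming from several access points) is classically done with Dempster's rule of combination. The zones and mass values below are hypothetical, meant only to show how two sources of evidence over a frame of zones are combined and a most probable zone extracted:

```python
from itertools import product

# Frame of discernment: three zones. Mass functions assign belief to subsets
# of zones; these masses are hypothetical, e.g. derived from WiFi fingerprints
# of two access points.
Z = frozenset({"z1", "z2", "z3"})
m_ap1 = {frozenset({"z1"}): 0.6, frozenset({"z1", "z2"}): 0.3, Z: 0.1}
m_ap2 = {frozenset({"z1"}): 0.5, frozenset({"z2"}): 0.2, Z: 0.3}

def dempster(m1, m2):
    """Dempster's rule of combination with conflict renormalization."""
    combined, conflict = {}, 0.0
    for (A, mA), (B, mB) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mA * mB
        else:
            conflict += mA * mB                # mass on empty intersections
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

m = dempster(m_ap1, m_ap2)
# Decision: singleton zone with the largest combined mass.
best_zone = max((A for A in m if len(A) == 1), key=m.get)
```

With these masses both sources favor z1, so the combined evidence concentrates there.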
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust hyperspectral unmixing accounting for residual components.\n \n \n \n \n\n\n \n Halimi, A.; Honeine, P.; and Bioucas-Dias, J.\n\n\n \n\n\n\n In Proc. IEEE workshop on Statistical Signal Processing (SSP), Palma de Mallorca, Spain, 26 - 29 June 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Robust link\n  \n \n \n \"Robust paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{16.ssp,\n   author =  "Abderrahim Halimi and Paul Honeine and José Bioucas-Dias",\n   title =  "Robust hyperspectral unmixing accounting for residual components",\n   booktitle =  "Proc. IEEE workshop on Statistical Signal Processing (SSP)",\n   address =  "Palma de Mallorca, Spain",\n   year  =  "2016",\n   month =  "26 - 29~" # jun,\n   acronym =  "SSP",\n   url_link= "https://ieeexplore.ieee.org/document/7551848",\n   url_paper  =  "http://honeine.fr/paul/publi/16.ssp.robust.pdf",\n   abstract={This paper presents a new hyperspectral mixture model jointly with a Bayesian algorithm for supervised hyperspectral unmixing. Based on the residual component analysis model, the proposed formulation assumes the linear model to be corrupted by an additive term that accounts for mismodelling effects (ME). The ME formulation takes into account the effect of outliers, the propagated errors in the signal processing chain and copes with some types of endmember variability (EV) or nonlinearity (NL). The known constraints on the model parameters are modeled via suitable priors. The resulting posterior distribution is optimized using a coordinate descent algorithm which allows us to compute the maximum a posteriori estimator of the unknown model parameters. 
The proposed model and estimation algorithm are validated on both synthetic and real images showing competitive results regarding the quality of the inferences and the computational complexity when compared to the state-of-the-art algorithms.}, \n   keywords={Bayesian inference, belief networks, learning (artificial intelligence), maximum likelihood estimation, source separation, robust supervised hyperspectral unmixing, Bayesian algorithm, residual component analysis model, mismodelling effects, ME formulation, endmember variability, EV, signal processing chain, posterior distribution, coordinate descent algorithm, maximum a posteriori estimator, source separation problem, Hyperspectral imaging, Signal processing algorithms, Bayes methods, Signal processing, Computational modeling, Inference algorithms, Mixture models, Hyperspectral imagery, robust unmixing, Bayesian estimation, coordinate descent algorithm, Gaussian process, gamma Markov random field}, \n   doi={10.1109/SSP.2016.7551848}, \n}\n
\n
\n\n\n
\n This paper presents a new hyperspectral mixture model jointly with a Bayesian algorithm for supervised hyperspectral unmixing. Based on the residual component analysis model, the proposed formulation assumes the linear model to be corrupted by an additive term that accounts for mismodelling effects (ME). The ME formulation takes into account the effect of outliers and the errors propagated in the signal processing chain, and copes with some types of endmember variability (EV) or nonlinearity (NL). The known constraints on the model parameters are modeled via suitable priors. The resulting posterior distribution is optimized using a coordinate descent algorithm, which allows us to compute the maximum a posteriori estimator of the unknown model parameters. The proposed model and estimation algorithm are validated on both synthetic and real images, showing competitive results in inference quality and computational complexity when compared to state-of-the-art algorithms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Detection of cyberattacks in a water distribution system using machine learning techniques.\n \n \n \n \n\n\n \n Nader, P.; Honeine, P.; and Beauseroy, P.\n\n\n \n\n\n\n In Proc. sixth International Conference on Digital Information Processing and Communications, pages 25-30, Beirut, Lebanon, 21 - 23 April 2016. \n \n\n\n\n
\n\n\n\n \n \n \"Detection link\n  \n \n \n \"Detection paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{16.patric,\n   author =  "Patric Nader and Paul Honeine and Pierre Beauseroy",\n   title =  "Detection of cyberattacks in a water distribution system using machine learning techniques",\n   booktitle =  "Proc. sixth International Conference on Digital Information Processing and Communications",\n   address =  "Beirut, Lebanon",\n   year  =  "2016",\n   month =  "21 - 23~" # apr,\n   pages={25-30}, \n   acronym =  "ICDIPC",\n   url_link= "https://ieeexplore.ieee.org/document/7470786",\n   url_paper  =  "http://honeine.fr/paul/publi/16.icdipc.cyberattacks.pdf",\n   abstract={Cyberattacks threatening the industrial processes and the critical infrastructures have become more and more complex, sophisticated, and hard to detect. These cyberattacks may cause serious economic losses and may impact the health and safety of employees and citizens. Traditional Intrusion Detection Systems (IDS) cannot detect new types of cyberattacks not existing in their databases. Therefore, IDS need a complementary help to provide a maximum protection to industrial systems against cyberattacks. In this paper, we propose to use machine learning techniques, in particular one-class classification, in order to bring the necessary and complementary help to IDS in detecting cyberattacks and intrusions. One-class classification algorithms have been used in many data mining applications, where the available samples in the training dataset refer to a unique/single class.We propose a simple one-class classification approach based on a new novelty measure, namely the truncated Mahalanobis distance in the feature space. 
The tests are conducted on a real dataset from the primary water distribution system in France, and the proposed approach is compared with other well-known one-class approaches.}, \n   keywords={machine learning, one-class, cybersecurity, data mining, learning (artificial intelligence), security of data, water supply, cyberattack detection, water distribution system, machine learning technique, industrial process, economic loss, intrusion detection system, IDS, one-class classification algorithm, data mining application, truncated Mahalanobis distance, France, Kernel, Computer crime, Support vector machines, Training, Covariance matrices, Classification algorithms, Quadratic programming, Cyberattack detection, kernel methods, Mahalanobis distance, one-class classification}, \n   doi={10.1109/ICDIPC.2016.7470786}, \n}\n\n%At UTT:\n\n
\n
\n\n\n
\n Cyberattacks threatening industrial processes and critical infrastructures have become more and more complex, sophisticated, and hard to detect. These cyberattacks may cause serious economic losses and may impact the health and safety of employees and citizens. Traditional Intrusion Detection Systems (IDS) cannot detect new types of cyberattacks absent from their databases. Therefore, IDS need complementary help to provide maximum protection of industrial systems against cyberattacks. In this paper, we propose to use machine learning techniques, in particular one-class classification, to bring this complementary help to IDS in detecting cyberattacks and intrusions. One-class classification algorithms have been used in many data mining applications where the available samples in the training dataset refer to a unique/single class. We propose a simple one-class classification approach based on a new novelty measure, namely the truncated Mahalanobis distance in the feature space. The tests are conducted on a real dataset from the primary water distribution system in France, and the proposed approach is compared with other well-known one-class approaches.\n
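A minimal input-space sketch of the novelty measure named in this abstract, the truncated Mahalanobis distance: keep only the leading eigendirections of the training covariance, compute distances there, and flag samples beyond a threshold set on the training data. The paper works in a kernel-induced feature space; this simplified version, with synthetic data and an assumed truncation level, only illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data from a single ("normal") operating mode; columns have
# very different variances, mimicking a few dominant process directions.
X = rng.normal(size=(500, 5)) @ np.diag([3.0, 2.0, 1.0, 0.1, 0.1])
mu = X.mean(axis=0)

# Eigendecomposition of the empirical covariance; keep the k leading directions.
cov = np.cov(X, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
k = 3                                          # truncation level (assumption)

def truncated_mahalanobis(x):
    """Mahalanobis distance restricted to the k dominant eigendirections."""
    z = eigvec[:, :k].T @ (x - mu)
    return float(np.sqrt(np.sum(z**2 / eigval[:k])))

# Threshold from the training distances; larger distances flag novelties.
threshold = np.quantile([truncated_mahalanobis(x) for x in X], 0.99)

# A sample far along the first principal direction is flagged as abnormal.
is_attack = truncated_mahalanobis(mu + 20.0 * eigvec[:, 0]) > threshold
```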
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Kernel Nonnegative Matrix Factorization Without the Curse of the Pre-image — Application to Unmixing Hyperspectral Images.\n \n \n \n \n\n\n \n Zhu, F.; Honeine, P.; and Kallas, M.\n\n\n \n\n\n\n Technical Report ArXiv, March 2016.\n \n\n\n\n
\n\n\n\n \n \n \"Kernel paper\n  \n \n \n \"Kernel code\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@TECHREPORT{16.knmf,\n   author =  "Fei Zhu and Paul Honeine and Maya Kallas",\n   title =  "Kernel Nonnegative Matrix Factorization Without the Curse of the Pre-image --- Application to Unmixing Hyperspectral Images",\n   institution =  "ArXiv",\n   year =  "2016",\n   volume =  "1",\n   pages =  "1-13",\n   month =  mar,\n   keywords =  "machine learning, hyperspectral",\n   url_paper   =  "https://arxiv.org/abs/1407.4420v2",\n   url_code  =  "http://www.honeine.fr/paul/publi/16.kernelNMF.rar",\n   abstract = "The nonnegative matrix factorization (NMF) is widely used in signal and image processing, including bio-informatics, blind source separation and hyperspectral image analysis in remote sensing. A great challenge arises when dealing with a nonlinear formulation of the NMF. Within the framework of kernel machines, the models suggested in the literature do not allow the representation of the factorization matrices, which is a fallout of the curse of the pre-image. In this paper, we propose a novel kernel-based model for the NMF that does not suffer from the pre-image problem, by investigating the estimation of the factorization matrices directly in the input space. For different kernel functions, we describe two schemes for iterative algorithms: an additive update rule based on a gradient descent scheme and a multiplicative update rule in the same spirit as in the Lee and Seung algorithm. Within the proposed framework, we develop several extensions to incorporate constraints, including sparseness, smoothness, and spatial regularization with a total-variation-like penalty. The effectiveness of the proposed method is demonstrated with the problem of unmixing hyperspectral images, using well-known real images and results with state-of-the-art techniques.",\n}\n\n
\n
\n\n\n
\n The nonnegative matrix factorization (NMF) is widely used in signal and image processing, including bio-informatics, blind source separation and hyperspectral image analysis in remote sensing. A great challenge arises when dealing with a nonlinear formulation of the NMF. Within the framework of kernel machines, the models suggested in the literature do not allow the representation of the factorization matrices, which is a fallout of the curse of the pre-image. In this paper, we propose a novel kernel-based model for the NMF that does not suffer from the pre-image problem, by investigating the estimation of the factorization matrices directly in the input space. For different kernel functions, we describe two schemes for iterative algorithms: an additive update rule based on a gradient descent scheme and a multiplicative update rule in the same spirit as in the Lee and Seung algorithm. Within the proposed framework, we develop several extensions to incorporate constraints, including sparseness, smoothness, and spatial regularization with a total-variation-like penalty. The effectiveness of the proposed method is demonstrated with the problem of unmixing hyperspectral images, using well-known real images and results with state-of-the-art techniques.\n
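For reference, the multiplicative update rule "in the same spirit as in the Lee and Seung algorithm" mentioned here is, in its plain (non-kernel) form, the following pair of updates for the Frobenius-norm NMF objective. The toy sizes are arbitrary; the kernel-based variant of the paper modifies these updates, which this sketch does not attempt to reproduce:

```python
import numpy as np

rng = np.random.default_rng(3)

# Nonnegative data matrix V ~ W H to factorize (toy sizes).
n, m, r = 20, 30, 4
V = np.abs(rng.normal(size=(n, r))) @ np.abs(rng.normal(size=(r, m)))

# Nonnegative random initialization of the factor matrices.
W = np.abs(rng.normal(size=(n, r)))
H = np.abs(rng.normal(size=(r, m)))
eps = 1e-12                                   # avoids division by zero

err0 = np.linalg.norm(V - W @ H)
for _ in range(200):
    # Lee & Seung multiplicative updates: elementwise ratios keep W, H
    # nonnegative and do not increase ||V - WH||_F.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
err1 = np.linalg.norm(V - W @ H)
```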
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An eigenanalysis of data centering in machine learning.\n \n \n \n \n\n\n \n Honeine, P.\n\n\n \n\n\n\n Technical Report ArXiv, March 2016.\n \n\n\n\n
\n\n\n\n \n \n \"An link\n  \n \n \n \"An paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@TECHREPORT{16.tpami.center,\n   author =  "Paul Honeine",\n   title =  "An eigenanalysis of data centering in machine learning",\n   institution =  "ArXiv",\n   year  =  "2016",\n   volume =  "1",\n   pages =  "1-13",\n   month =  mar,\n   keywords  =  "machine learning, sparsity, adaptive filtering",\n   url_link= "https://arxiv.org/abs/1407.2904",\n   url_paper  =  "http://www.honeine.fr/paul/publi/15.center.pdf",\n   abstract = "Many pattern recognition methods rely on statistical information from centered data, with the eigenanalysis of an empirical central moment, such as the covariance matrix in principal component analysis (PCA), as well as partial least squares regression, canonical-correlation analysis and Fisher discriminant analysis. Recently, many researchers advocate working on non-centered data. This is the case for instance with the singular value decomposition approach, with the (kernel) entropy component analysis, with the information-theoretic learning framework, and even with nonnegative matrix factorization. Moreover, one can also consider a non-centered PCA by using the second-order non-central moment. \nThe main purpose of this paper is to bridge the gap between these two viewpoints in designing machine learning methods. To provide a study at the cornerstone of kernel-based machines, we conduct an eigenanalysis of the inner product matrices from centered and non-centered data. We derive several results connecting their eigenvalues and their eigenvectors. Furthermore, we explore the outer product matrices, by providing several results connecting the largest eigenvectors of the covariance matrix and its non-centered counterpart. These results lay the groundwork to several extensions beyond conventional centering, with the weighted mean shift, the rank-one update, and the multidimensional scaling. Experiments conducted on simulated and real data illustrate the relevance of this work.",\n}%(in first revision, submitted in Feb. 2014)\n\n
\n
\n\n\n
\n Many pattern recognition methods rely on statistical information from centered data, with the eigenanalysis of an empirical central moment, such as the covariance matrix in principal component analysis (PCA), as well as partial least squares regression, canonical-correlation analysis and Fisher discriminant analysis. Recently, many researchers advocate working on non-centered data. This is the case for instance with the singular value decomposition approach, with the (kernel) entropy component analysis, with the information-theoretic learning framework, and even with nonnegative matrix factorization. Moreover, one can also consider a non-centered PCA by using the second-order non-central moment. The main purpose of this paper is to bridge the gap between these two viewpoints in designing machine learning methods. To provide a study at the cornerstone of kernel-based machines, we conduct an eigenanalysis of the inner product matrices from centered and non-centered data. We derive several results connecting their eigenvalues and their eigenvectors. Furthermore, we explore the outer product matrices, by providing several results connecting the largest eigenvectors of the covariance matrix and its non-centered counterpart. These results lay the groundwork to several extensions beyond conventional centering, with the weighted mean shift, the rank-one update, and the multidimensional scaling. Experiments conducted on simulated and real data illustrate the relevance of this work.\n
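One relation from this line of analysis is easy to check numerically: centering the data amounts to compressing the Gram matrix K by the centering projection H = I - (1/n)11^T, so each eigenvalue of the centered Gram matrix HKH is bounded by the corresponding eigenvalue of K (Cauchy interlacing for compressions). A small sketch with a linear kernel and synthetic data, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Samples with a strongly nonzero mean, and their (linear-kernel) Gram matrices.
n, d = 50, 3
X = rng.normal(size=(n, d)) + 5.0
K = X @ X.T                                   # non-centered Gram matrix
H = np.eye(n) - np.ones((n, n)) / n           # centering projection
Kc = H @ K @ H                                # Gram matrix of centered data

# Sorted (descending) eigenvalue spectra of both matrices.
ev = np.sort(np.linalg.eigvalsh(K))[::-1]
evc = np.sort(np.linalg.eigvalsh(Kc))[::-1]
```

Elementwise, `evc[i] <= ev[i]`, and both spectra have (numerical) rank at most d since the data live in a d-dimensional space.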
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2015\n \n \n (18)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Kernel-Based Nonlinear Signal Processing.\n \n \n \n \n\n\n \n Bermudez, J. C. M.; Honeine, P.; Tourneret, J.; and Richard, C.\n\n\n \n\n\n\n In Coelho; Nascimento; Queiroz; Romano; and Cavalcante, editor(s), Signals and Images: Advances and Results in Speech, Estimation, Compression, Recognition, Filtering, and Processing, 2, pages 29 - 50. CRC Press, Taylor & Francis Group, August 2015.\n \n\n\n\n
\n\n\n\n \n \n \"Kernel-Based link\n  \n \n \n \"Kernel-Based paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@incollection{15.kernel.chapter,\n   author =  "José C. M. Bermudez and Paul Honeine and Jean-Yves Tourneret and Cédric Richard",\n   title =  "Kernel-Based Nonlinear Signal Processing",\n   booktitle =  "Signals and Images: Advances and Results in Speech, Estimation, Compression, Recognition, Filtering, and Processing",\n   editor={Coelho and Nascimento and Queiroz and Romano and Cavalcante},\n   Publisher = {CRC Press, Taylor \\& Francis Group},\n   chapter = "2",\n   isbn =  "9781498722360",\n   Pages = "29 - 50",\n   year  =  "2015",\n   month = aug,\n   keywords =  "machine learning, sparsity, adaptive filtering",\n   url_link = {https://www.crcpress.com/Signals-and-Images-Advances-and-Results-in-Speech-Estimation-Compression/Coelho-Nascimento-de-Queiroz-Romano-Cavalcante/9781498722360?source=crcpress.com&utm_source=productpage&utm_medium=website&utm_campaign=RelatedTitles},\n   url_paper   =  "http://honeine.fr/paul/publi/15.kernel.chapter.pdf",\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Kernel variational approach for target tracking in a wireless sensor network.\n \n \n \n \n\n\n \n Snoussi, H.; Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Giovannelli, J.; and Idier, J., editor(s), Regularization and bayesian methods for inverse problems in signal and image processing, of Digital signal and image processing series, 10, pages 251 - 265. Wiley-ISTE, February 2015.\n \n\n\n\n
\n\n\n\n \n \n \"Kernel link\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@incollection{15.inverse.chapter,\n   author =  "Hichem Snoussi and Paul Honeine and Cédric Richard",\n   title =  "Kernel variational approach for target tracking in a wireless sensor network",\n   booktitle =  "Regularization and bayesian methods for inverse problems in signal and image processing",\n   editor= "Jean-François Giovannelli and Jérôme Idier",\n   Publisher = "Wiley-ISTE",\n   series = "Digital signal and image processing series",\n   chapter = "10",\n   isbn =  "978-1-84821-637-2",\n   pages = "251 - 265",\n   year  =  "2015",\n   month = feb,\n   keywords =  "machine learning, Bayesian inference, wireless sensor networks",\n   url_link = {http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1848216378.html},\n   abstract = "The functions performed by wireless sensor networks have to adapt to the constraints of digital communications and energy limitations. The problem of monitoring a mobile object is resolved in a Bayesian framework based on a state model. The state model contains two equations: an equation reflecting the a priori that is already available about the trajectory of the target and a second equation linking the unknown state of the system to the observations that the sensors are collecting. The target tracking problem could be resolved in a Bayesian framework. This chapter presents the technical aspects of the local construction of a linear and Gaussian likelihood model by exploiting the measured data between sensors with known positions. It illustrates the effectiveness and robustness of the kernel VBA (DD?VF) algorithm for monitoring a moving target in a wireless sensor network, comparing it to the traditional variational filter with a known observation model.",\n}%url_paper  =  "http://honeine.fr/paul/publi/15.inverse.chapter.pdf"\n
\n
\n\n\n
\n The functions performed by wireless sensor networks have to adapt to the constraints of digital communications and energy limitations. The problem of monitoring a mobile object is resolved in a Bayesian framework based on a state model. The state model contains two equations: an equation reflecting the a priori knowledge already available about the trajectory of the target, and a second equation linking the unknown state of the system to the observations that the sensors are collecting. This chapter presents the technical aspects of the local construction of a linear and Gaussian likelihood model by exploiting the measured data between sensors with known positions. It illustrates the effectiveness and robustness of the kernel VBA (DD-VF) algorithm for monitoring a moving target in a wireless sensor network, comparing it to the traditional variational filter with a known observation model.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Abnormal event detection via multikernel learning for distributed camera networks.\n \n \n \n \n\n\n \n Wang, T.; Chen, J.; Honeine, P.; and Snoussi, H.\n\n\n \n\n\n\n International Journal of Distributed Sensor Networks, 2015(Article ID 989450): 1-9. 2015.\n \n\n\n\n
\n\n\n\n \n \n \"Abnormal link\n  \n \n \n \"Abnormal paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{15.cam.abnormal,\n   author =  "Tian Wang and Jie Chen and Paul Honeine and Hichem Snoussi",\n   title =  "Abnormal event detection via multikernel learning\nfor distributed camera networks",\n   journal =  "International Journal of Distributed Sensor Networks",\n   year  =  "2015",\n   volume =  "2015",\n   number =  "Article ID 989450",\n   pages =  "1-9",\n   url_link =  "http://dx.doi.org/10.1155/2015/989450",\n   doi  =  "10.1155/2015/989450",\n   keywords  =  "machine learning, wireless sensor networks",\n   url_paper   =  "http://honeine.fr/paul/publi/15.cam.abnormal.pdf",\n   abstract = "Distributed camera networks play an important role in public security surveillance. Analyzing video sequences from cameras set at different angles will provide enhanced performance for detecting abnormal events. In this paper, an abnormal detection algorithm is proposed to identify unusual events captured by multiple cameras. The visual event is summarized and represented by the histogram of the optical flow orientation descriptor, and then a multikernel strategy that takes the multiview scenes into account is proposed to improve the detection accuracy. A nonlinear one-class SVM algorithm with the constructed kernel is then trained to detect abnormal frames of video sequences. We validate and evaluate the proposed method on the video surveillance dataset PETS.",\n}\n
\n
\n\n\n
\n Distributed camera networks play an important role in public security surveillance. Analyzing video sequences from cameras set at different angles provides enhanced performance for detecting abnormal events. In this paper, an abnormal event detection algorithm is proposed to identify unusual events captured by multiple cameras. The visual event is summarized and represented by the histogram of the optical flow orientation descriptor, and then a multikernel strategy that takes the multiview scenes into account is proposed to improve the detection accuracy. A nonlinear one-class SVM algorithm with the constructed kernel is then trained to detect abnormal frames of video sequences. We validate and evaluate the proposed method on the video surveillance dataset PETS.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Analyzing sparse dictionaries for online learning with kernels.\n \n \n \n \n\n\n \n Honeine, P.\n\n\n \n\n\n\n IEEE Transactions on Signal Processing, 63(23): 6343 - 6353. December 2015.\n \n\n\n\n
\n\n\n\n \n \n \"Analyzing link\n  \n \n \n \"Analyzing paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{15.sparse.eigen,\n   author =  "Paul Honeine",\n   title =  "Analyzing sparse dictionaries for online learning with kernels",\n   journal =  "IEEE Transactions on Signal Processing",\n   year  =  "2015",\n   volume =  "63",\n   number =  "23",\n   pages =  "6343 - 6353",\n   month =  dec,\n   url_link =  "http://dx.doi.org/10.1109/TSP.2015.2457396",\n   doi  =  "10.1109/TSP.2015.2457396",\n   url_paper   =  "http://honeine.fr/paul/publi/15.sparse.eigen.pdf",\n   keywords =  "machine learning, sparsity, adaptive, filtering, approximation theory, eigenvalues and eigenfunctions, learning (artificial intelligence), signal processing, sparse dictionary, online learning, kernel based-learning, signal processing, machine learning method, linear-in-the-parameter model, sparse approximation, Babel measure, eigenvalue analysis, linear independence condition, quasiisometry, Kernel, Dictionaries, Optimization, Signal processing algorithms, Least squares approximations, Atomic measurements, Adaptive filtering, Gram matrix, kernel-based methods, machine learning, pattern recognition, sparse approximation",   \n   abstract={Many signal processing and machine learning methods share essentially the same linear-in-the-parameter model, with as many parameters as available samples as in kernel-based machines. Sparse approximation is essential in many disciplines, with new challenges emerging in online learning with kernels. To this end, several sparsity measures have been proposed in the literature to quantify sparse dictionaries and constructing relevant ones, the most prolific ones being the distance, the approximation, the coherence and the Babel measures. In this paper, we analyze sparse dictionaries based on these measures. By conducting an eigenvalue analysis, we show that these sparsity measures share many properties, including the linear independence condition and inducing a well-posed optimization problem. 
Furthermore, we prove that there exists a quasi-isometry between the parameter (i.e., dual) space and the dictionary's induced feature space.}, \n}\n
\n
\n\n\n
\n Many signal processing and machine learning methods share essentially the same linear-in-the-parameter model, with as many parameters as available samples, as in kernel-based machines. Sparse approximation is essential in many disciplines, with new challenges emerging in online learning with kernels. To this end, several sparsity measures have been proposed in the literature to quantify sparse dictionaries and to construct relevant ones, the most prolific ones being the distance, the approximation, the coherence and the Babel measures. In this paper, we analyze sparse dictionaries based on these measures. By conducting an eigenvalue analysis, we show that these sparsity measures share many properties, including the linear independence condition and inducing a well-posed optimization problem. Furthermore, we prove that there exists a quasi-isometry between the parameter (i.e., dual) space and the dictionary's induced feature space.\n
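The coherence measure discussed in this abstract is simple to compute for a kernel dictionary. A minimal sketch (illustrative kernel width and data, not the paper's experiments), using the Gaussian kernel so that every atom has unit norm in the induced feature space:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # Gaussian (RBF) kernel; atoms are unit-norm in feature space since k(x, x) = 1
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def coherence(atoms, sigma=1.0):
    # Largest absolute kernel value between two distinct dictionary atoms
    mu = 0.0
    for i, xi in enumerate(atoms):
        for xj in atoms[i + 1:]:
            mu = max(mu, abs(gaussian_kernel(xi, xj, sigma)))
    return mu

rng = np.random.default_rng(0)
atoms = [rng.standard_normal(3) for _ in range(10)]
mu = coherence(atoms)
# Gram matrix of the dictionary; by Gershgorin, its eigenvalues lie within
# (m - 1) * mu of 1, the kind of well-posedness argument the paper develops
K = np.array([[gaussian_kernel(a, b) for b in atoms] for a in atoms])
print(mu, np.linalg.eigvalsh(K).min())
```

A small coherence thus certifies that the Gram matrix is well conditioned, hence the linear independence of the atoms in the feature space.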
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Nonlinear Adaptive Filtering using Kernel-based Algorithms with Dictionary Adaptation.\n \n \n \n \n\n\n \n Saidé, C.; Lengellé, R.; Honeine, P.; Richard, C.; and Achkar, R.\n\n\n \n\n\n\n International Journal of Adaptive Control and Signal Processing, 29(11): 1391 - 1410. November 2015.\n \n\n\n\n
\n\n\n\n \n \n \"Nonlinear link\n  \n \n \n \"Nonlinear paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 4 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{15.sp.dictionary,\n   author =  "Chafic Saidé and Régis Lengellé and Paul Honeine and Cédric Richard and Roger Achkar",\n   title =  "Nonlinear Adaptive Filtering using Kernel-based Algorithms with Dictionary Adaptation",\n   journal =  "International Journal of Adaptive Control and Signal Processing",\n   year  =  "2015",\n   volume =  "29",\n   number = "11",\n   pages =  "1391 - 1410",\n   month = nov,\n   url_link =  "https://onlinelibrary.wiley.com/doi/full/10.1002/acs.2548",\n   doi  =  "10.1002/acs.2548",\n   keywords  =  "machine learning, sparsity, adaptive filtering",\n   url_paper  =  "http://www.honeine.fr/paul/publi/15.sp.dictionary.pdf",\n   abstract = "Nonlinear adaptive filtering has been extensively studied in the literature, using, for example, Volterra filters or neural networks. Recently, kernel methods have been offering an interesting alternative because they provide a simple extension of linear algorithms to the nonlinear case. The main drawback of online system identification with kernel methods is that the filter complexity increases with time, a limitation resulting from the representer theorem, which states that all past input vectors are required. To overcome this drawback, a particular subset of these input vectors (called dictionary) must be selected to ensure complexity control and good performance. Up to now, all authors considered that, after being introduced into the dictionary, elements stay unchanged even if, because of nonstationarity, they become useless to predict the system output. The objective of this paper is to present an adaptation scheme of dictionary elements, which are considered here as adjustable model parameters, by deriving a gradient-based method under collinearity constraints. The main interest is to ensure a better tracking performance. 
To evaluate our approach, dictionary adaptation is introduced into three well-known kernel-based adaptive algorithms: kernel recursive least squares, kernel normalized least mean squares, and kernel affine projection. The performance is evaluated on nonlinear adaptive filtering of simulated and real data sets. As confirmed by experiments, our dictionary adaptation scheme allows either complexity reduction or a decrease of the instantaneous quadratic error, or both simultaneously.",\n}\n
\n
\n\n\n
\n Nonlinear adaptive filtering has been extensively studied in the literature, using, for example, Volterra filters or neural networks. Recently, kernel methods have been offering an interesting alternative because they provide a simple extension of linear algorithms to the nonlinear case. The main drawback of online system identification with kernel methods is that the filter complexity increases with time, a limitation resulting from the representer theorem, which states that all past input vectors are required. To overcome this drawback, a particular subset of these input vectors (called dictionary) must be selected to ensure complexity control and good performance. Up to now, all authors considered that, after being introduced into the dictionary, elements stay unchanged even if, because of nonstationarity, they become useless to predict the system output. The objective of this paper is to present an adaptation scheme of dictionary elements, which are considered here as adjustable model parameters, by deriving a gradient-based method under collinearity constraints. The main interest is to ensure a better tracking performance. To evaluate our approach, dictionary adaptation is introduced into three well-known kernel-based adaptive algorithms: kernel recursive least squares, kernel normalized least mean squares, and kernel affine projection. The performance is evaluated on nonlinear adaptive filtering of simulated and real data sets. As confirmed by experiments, our dictionary adaptation scheme allows either complexity reduction or a decrease of the instantaneous quadratic error, or both simultaneously.\n
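The fixed-dictionary setting that this paper improves upon can be sketched with a kernel NLMS filter whose dictionary is built with a coherence criterion; all parameter values below (kernel width, step size, coherence threshold, toy data) are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def rbf(x, y, sigma=0.5):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def knlms(X, y, mu0=0.5, eta=0.2, eps=1e-3, sigma=0.5):
    # Kernel NLMS with a coherence-based dictionary; atoms stay fixed once
    # admitted -- the behavior the paper relaxes by also adapting the atoms
    D = [X[0]]
    alpha = np.zeros(1)
    errors = []
    for x, target in zip(X[1:], y[1:]):
        h = np.array([rbf(x, d, sigma) for d in D])
        e = target - h @ alpha
        alpha = alpha + eta * e * h / (eps + h @ h)   # normalized LMS step
        errors.append(e)
        if np.max(h) <= mu0:                         # coherence criterion: x is novel
            D.append(x)
            alpha = np.append(alpha, 0.0)
    return D, alpha, np.array(errors)

# Identify a static nonlinearity y = sin(3x) from a noisy stream
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(400, 1))
y = np.sin(3.0 * X[:, 0]) + 0.01 * rng.standard_normal(400)
D, alpha, errors = knlms(X, y)
print(len(D), np.mean(errors[-50:] ** 2))
```

The dictionary stays small (its size depends on the threshold mu0, not on the stream length), which is exactly the complexity control the abstract refers to.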
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Approximation errors of online sparsification criteria.\n \n \n \n \n\n\n \n Honeine, P.\n\n\n \n\n\n\n IEEE Transactions on Signal Processing, 63(17): 4700 - 4709. September 2015.\n \n\n\n\n
\n\n\n\n \n \n \"Approximation link\n  \n \n \n \"Approximation paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{15.sparse.errors,\n   author =  "Paul Honeine",\n   title =  "Approximation errors of online sparsification criteria",\n   journal =  "IEEE Transactions on Signal Processing",\n   year  =  "2015",\n   volume =  "63",\n   number =  "17",\n   pages =  "4700 - 4709",\n   month =  sep,\n   url_link =  "http://dx.doi.org/10.1109/TSP.2015.2442960",   \n   doi  =  "10.1109/TSP.2015.2442960",   \n   url_paper   =  "http://honeine.fr/paul/publi/15.sparse.errors.pdf",\n   keywords  =  "machine learning, sparsity, adaptive filtering, approximation theory, learning (artificial intelligence), principal component analysis, signal processing, kernel principal component analysis, online learning paradigm, machine learning frameworks, online sparsification criteria, approximation errors, Kernel, Coherence, Computational modeling, Dictionaries, Principal component analysis, Least squares approximations, Adaptive filtering, gram matrix, kernel-based methods, machine learning, online learning, pattern recognition, resource-allocating networks, sparse approximation, sparsification criteria",\n   abstract={Many machine learning frameworks, such as resource-allocating networks, kernel-based methods, Gaussian processes, and radial-basis-function networks, require a sparsification scheme in order to address the online learning paradigm. For this purpose, several online sparsification criteria have been proposed to restrict the model definition on a subset of samples. The most known criterion is the (linear) approximation criterion, which discards any sample that can be well represented by the already contributing samples, an operation with excessive computational complexity. Several computationally efficient sparsification criteria have been introduced in the literature with the distance and the coherence criteria. 
This paper provides a unified framework that connects these sparsification criteria in terms of approximating samples, by establishing theoretical bounds on the approximation errors. Furthermore, the error of approximating any pattern is investigated, by proposing upper bounds on the approximation error for each of the aforementioned sparsification criteria. Two classes of fundamental patterns are described in detail, the centroid (i.e., empirical mean) and the principal axes in the kernel principal component analysis. Experimental results show the relevance of the theoretical results established in this paper.},\n}\n
\n
\n\n\n
\n Many machine learning frameworks, such as resource-allocating networks, kernel-based methods, Gaussian processes, and radial-basis-function networks, require a sparsification scheme in order to address the online learning paradigm. For this purpose, several online sparsification criteria have been proposed to restrict the model definition on a subset of samples. The best-known criterion is the (linear) approximation criterion, which discards any sample that can be well represented by the already contributing samples, an operation with excessive computational complexity. Several computationally efficient sparsification criteria have been introduced in the literature, such as the distance and the coherence criteria. This paper provides a unified framework that connects these sparsification criteria in terms of approximating samples, by establishing theoretical bounds on the approximation errors. Furthermore, the error of approximating any pattern is investigated, by proposing upper bounds on the approximation error for each of the aforementioned sparsification criteria. Two classes of fundamental patterns are described in detail, the centroid (i.e., empirical mean) and the principal axes in the kernel principal component analysis. Experimental results show the relevance of the theoretical results established in this paper.\n
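The (linear) approximation criterion and the cheap coherence statistic compared in this abstract can both be written in a few lines. A sketch under illustrative settings (Gaussian kernel, toy one-dimensional dictionary); the final assertion illustrates one bound of the kind the paper studies, namely that the span approximation error never exceeds that of the single best atom:

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def approximation_error(x, D, sigma=1.0):
    # Squared RKHS distance from phi(x) to span{phi(d), d in D}:
    # k(x, x) - k_D(x)^T K^{-1} k_D(x)  -- the costly 'approximation' criterion
    K = np.array([[rbf(a, b, sigma) for b in D] for a in D])
    k = np.array([rbf(x, d, sigma) for d in D])
    return rbf(x, x, sigma) - k @ np.linalg.solve(K, k)

def coherence_stat(x, D, sigma=1.0):
    # The cheap statistic max_j |k(x, d_j)| used by the coherence criterion
    return max(abs(rbf(x, d, sigma)) for d in D)

D = [np.array([0.0]), np.array([2.0])]
near, far = np.array([0.1]), np.array([5.0])
for x in (near, far):
    print(approximation_error(x, D), coherence_stat(x, D))
```

A sample close to the dictionary has a small approximation error and a large coherence statistic; a distant sample shows the opposite, which is why the cheap statistic can stand in for the costly projection.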
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Kernel-based machine learning using radio-fingerprints for localization in WSNs.\n \n \n \n \n\n\n \n Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; and Snoussi, H.\n\n\n \n\n\n\n IEEE Transactions on Aerospace and Electronic Systems, 51(2): 1324 - 1336. April 2015.\n \n\n\n\n
\n\n\n\n \n \n \"Kernel-based paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{15.wsn_loc,\n   author =  "Sandy Mahfouz and Farah Mourad-Chehade and Paul Honeine and Joumana Farah and Hichem Snoussi",\n   title =  "Kernel-based machine learning using radio-fingerprints for localization in {WSNs}",\n   journal =  "IEEE Transactions on Aerospace and Electronic Systems",\n   year  =  "2015",\n   volume =  "51",\n   number =  "2",\n   pages =  "1324 - 1336",\n   month =  apr,\n   doi="10.1109/TAES.2015.140061", \n   url_paper  =  "http://www.honeine.fr/paul/publi/15.wsn_loc.pdf",\n   keywords  =  "machine learning, wireless sensor networks, learning (artificial intelligence), regression analysis, sensor placement, support vector machines, telecommunication computing, wireless sensor networks, vector-output regularized least squares, support vector regression, ridge regression, received signal strength indicators, radio-location fingerprinting, sensors localization, WSNS, kernel-based machine-learning techniques, Optimization, Sensors, Wireless sensor networks, Kernel, Databases, Computational modeling, Mathematical model",\n   abstract={This paper introduces an original method for sensors localization in WSNs. Based on radio-location fingerprinting and machine learning, the method consists of defining a model whose inputs and outputs are, respectively, the received signal strength indicators and the sensors locations. To define this model, several kernel-based machine-learning techniques are investigated, such as the ridge regression, support vector regression, and vector-output regularized least squares. The performance of the method is illustrated using both simulated and real data.}, \n}\n
\n
\n\n\n
\n This paper introduces an original method for sensor localization in WSNs. Based on radio-location fingerprinting and machine learning, the method consists of defining a model whose inputs and outputs are, respectively, the received signal strength indicators and the sensor locations. To define this model, several kernel-based machine-learning techniques are investigated, such as ridge regression, support vector regression, and vector-output regularized least squares. The performance of the method is illustrated using both simulated and real data.\n
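A hedged sketch of the fingerprinting idea with one of the investigated techniques, kernel ridge regression with vector (2-D) outputs; the log-distance RSSI model, anchor positions, and hyperparameters below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbf_gram(A, B, sigma):
    # Pairwise Gaussian kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

class KernelRidge2D:
    # Ridge regression in an RKHS, mapping RSSI fingerprints to (x, y) positions
    def __init__(self, sigma=10.0, lam=1e-3):
        self.sigma, self.lam = sigma, lam
    def fit(self, R, P):
        self.R = R
        K = rbf_gram(R, R, self.sigma)
        self.A = np.linalg.solve(K + self.lam * np.eye(len(R)), P)
        return self
    def predict(self, Rnew):
        return rbf_gram(Rnew, self.R, self.sigma) @ self.A

# Synthetic fingerprints: RSSI from 3 anchors follows a log-distance decay
rng = np.random.default_rng(2)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 10.0]])
pos = rng.uniform(0.0, 10.0, size=(200, 2))
dist = np.linalg.norm(pos[:, None] - anchors[None], axis=-1)
rssi = -40.0 - 20.0 * np.log10(dist + 1e-2) + 0.5 * rng.standard_normal(dist.shape)
model = KernelRidge2D().fit(rssi[:150], pos[:150])
err = np.linalg.norm(model.predict(rssi[150:]) - pos[150:], axis=1)
print(err.mean())
```

The training stage only sees (RSSI vector, position) pairs, so no propagation model is needed at run time; that is the appeal of the fingerprinting approach.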
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Eviter la malédiction de pré-image : application à la factorisation en matrices non négatives à noyaux.\n \n \n \n \n\n\n \n Honeine, P.; and Zhu, F.\n\n\n \n\n\n\n In Actes du 25-ème Colloque GRETSI sur le Traitement du Signal et des Images, Lyon, France, September 2015. \n \n\n\n\n
\n\n\n\n \n \n \"Eviter paper\n  \n \n \n \"Eviter code\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{15.gretsi.preimage,\n   author =  "Paul Honeine and Fei Zhu",\n   title =  "Eviter la malédiction de pré-image : application à la factorisation en matrices non négatives à noyaux",\n   booktitle =  "Actes du 25-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Lyon, France",\n   year  =  "2015",\n   month =  sep,\n   keywords  =  "machine learning, hyperspectral",\n   acronym =  "GRETSI'15",\n   url_paper  =  "http://honeine.fr/paul/publi/15.gretsi.preimage.pdf",\n   url_code  =  "http://www.honeine.fr/paul/publi/16.kernelNMF.rar",\n   abstract = "Kernel-based methods rely on a nonlinear transformation of available data. The so-called pre-image problem is the estimation of the inverse transformation. Solving this problem is of great interest in many fields, including pattern recognition and denoising problems. In this paper, we propose to overcome the pre-image problem by defining a novel model. This approach is illustrated on the nonnegative matrix factorization problem. The relevance of the proposed approach is illustrated on the unmixing problem in hyperspectral imagery.",\n}\n
\n
\n\n\n
\n Kernel-based methods rely on a nonlinear transformation of available data. The so-called pre-image problem is the estimation of the inverse transformation. Solving this problem is of great interest in many fields, including pattern recognition and denoising problems. In this paper, we propose to overcome the pre-image problem by defining a novel model. This approach is illustrated on the nonnegative matrix factorization problem. The relevance of the proposed approach is illustrated on the unmixing problem in hyperspectral imagery.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Estimation de la dimension intrinsèque des images hyperspectrales à l'aide d'un modèle à variances isolées.\n \n \n \n \n\n\n \n Halimi, A.; Honeine, P.; Kharouf, M.; Richard, C.; and Tourneret, J.\n\n\n \n\n\n\n In Actes du 25-ème Colloque GRETSI sur le Traitement du Signal et des Images, Lyon, France, September 2015. \n \n\n\n\n
\n\n\n\n \n \n \"Estimation paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{15.gretsi.eigengap,\n   author =  "Abderrahim Halimi and Paul Honeine and Malika Kharouf and Cédric Richard and Jean-Yves Tourneret",\n   title =  "Estimation de la dimension intrinsèque des images hyperspectrales à l'aide d'un modèle à variances isolées",\n   booktitle =  "Actes du 25-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Lyon, France",\n   year  =  "2015",\n   month =  sep,\n   keywords  =  "statistics, hyperspectral",\n   acronym =  "GRETSI'15",\n   url_paper  =  "http://honeine.fr/paul/publi/15.gretsi.eigengap.pdf",\n   abstract = "This paper proposes a fully automatic approach for estimating the number of endmembers in hyperspectral images. The estimation is based on recent results of random matrix theory related to the so-called spiked population model. More precisely, we study the gap between successive eigenvalues of the sample covariance matrix constructed from high dimensional noisy samples. The resulting estimation strategy is unsupervised and robust to correlated noise. This strategy is validated on both synthetic and real images. The experimental results are very promising and show the accuracy of this algorithm with respect to state-of-the-art algorithms",\n}\n
\n
\n\n\n
\n This paper proposes a fully automatic approach for estimating the number of endmembers in hyperspectral images. The estimation is based on recent results of random matrix theory related to the so-called spiked population model. More precisely, we study the gap between successive eigenvalues of the sample covariance matrix constructed from high dimensional noisy samples. The resulting estimation strategy is unsupervised and robust to correlated noise. This strategy is validated on both synthetic and real images. The experimental results are very promising and show the accuracy of this algorithm with respect to state-of-the-art algorithms.\n
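A simplified stand-in for the eigengap idea (not the paper's spiked-population estimator): pick the largest gap between successive log-eigenvalues of the sample covariance matrix; the data dimensions, mixing coefficients, and noise level are illustrative:

```python
import numpy as np

def estimate_ndim(Y, kmax=10):
    # Number of signal components = position of the largest gap between
    # successive log-eigenvalues of the sample covariance matrix
    ev = np.sort(np.linalg.eigvalsh(np.cov(Y, rowvar=False)))[::-1]
    logev = np.log(ev[:kmax + 1])
    return int(np.argmax(logev[:-1] - logev[1:])) + 1

rng = np.random.default_rng(3)
E = rng.standard_normal((3, 50))                    # 3 'endmember' signatures, 50 bands
A = rng.uniform(0.0, 1.0, size=(2000, 3))           # mixing coefficients
Y = A @ E + 0.05 * rng.standard_normal((2000, 50))  # noisy mixed 'pixels'
print(estimate_ndim(Y))
```

With three signal components well above the noise floor, the dominant gap sits between the third and fourth eigenvalues, so the estimator returns 3.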
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modèle semi-paramétrique RSSI/distance pour le suivi d'une cible dans les réseaux de capteurs sans fil.\n \n \n \n \n\n\n \n Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; and Snoussi, H.\n\n\n \n\n\n\n In Actes du 25-ème Colloque GRETSI sur le Traitement du Signal et des Images, Lyon, France, September 2015. \n \n\n\n\n
\n\n\n\n \n \n \"Modèle paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{15.gretsi.wsn,\n   author =  "Sandy Mahfouz and Farah Mourad-Chehade and Paul Honeine and Joumana Farah and Hichem Snoussi",\n   title =  "Modèle semi-paramétrique RSSI/distance pour le suivi d'une cible dans les réseaux de capteurs sans fil",\n   booktitle =  "Actes du 25-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Lyon, France",\n   year  =  "2015",\n   month =  sep,\n   keywords  =  "machine learning, wireless sensor networks",\n   acronym =  "GRETSI'15",\n   url_paper  =  "http://honeine.fr/paul/publi/15.gretsi.wsn.pdf",\n   abstract = "This paper introduces two main contributions to the wireless sensor network domain. The first one consists of determining, using a semi-parametric model, the relationship between the distances separating sensors and the received signal strength indicators (RSSIs) of the signals exchanged by these sensors in a network. As for the second contribution, it consists in tracking a moving target in the network using the estimated RSSI/distance model. The target's position is estimated by combining acceleration information and the estimated distances separating the target from sensors having known positions, using either a Kalman or a particle filter.",\n}\n\n
\n
\n\n\n
\n This paper introduces two main contributions to the wireless sensor network domain. The first one consists of determining, using a semi-parametric model, the relationship between the distances separating sensors and the received signal strength indicators (RSSIs) of the signals exchanged by these sensors in a network. As for the second contribution, it consists in tracking a moving target in the network using the estimated RSSI/distance model. The target's position is estimated by combining acceleration information and the estimated distances separating the target from sensors having known positions, using either a Kalman or a particle filter.\n
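The Kalman option for the tracking step can be sketched with a constant-velocity filter on position measurements along one axis; the noise levels and the measurement model below are illustrative assumptions (the paper additionally fuses acceleration and offers a particle-filter variant):

```python
import numpy as np

def kalman_track(zs, dt=1.0, q=0.05, r=0.5):
    # Constant-velocity Kalman filter: state (position, velocity),
    # scalar position measurements z_k = p_k + noise
    F = np.array([[1.0, dt], [0.0, 1.0]])
    H = np.array([[1.0, 0.0]])
    Q = q * np.array([[dt ** 3 / 3, dt ** 2 / 2], [dt ** 2 / 2, dt]])
    R = np.array([[r ** 2]])
    x, P = np.zeros(2), np.eye(2)
    out = []
    for z in zs:
        x, P = F @ x, F @ P @ F.T + Q                  # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)                 # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)            # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(8)
true = 0.2 * np.arange(100)                            # target moving at 0.2 m/s
zs = true + 0.5 * rng.standard_normal(100)             # noisy position estimates
est = kalman_track(zs)
print(np.mean(np.abs(est - true)), np.mean(np.abs(zs - true)))
```

The filtered trajectory is smoother than the raw position estimates because the motion model averages out the per-measurement noise.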
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Online nonnegative matrix factorization based on kernel machines.\n \n \n \n \n\n\n \n Zhu, F.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 23rd European Conference on Signal Processing (EUSIPCO), pages 2381 - 2385, Nice, France, 31 August–4 September 2015. \n \n\n\n\n
\n\n\n\n \n \n \"Online link\n  \n \n \n \"Online paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{15.eusipco.online_knmf,\n   author =  "Fei Zhu and Paul Honeine",\n   title =  "Online nonnegative matrix factorization based on kernel machines",\n   booktitle =  "Proc. 23rd European Conference on Signal Processing (EUSIPCO)",\n   address =  "Nice, France",\n   year  =  "2015",\n   month =  "31~" # aug # "--" # "4~" # sep,\n   pages    =  {2381 - 2385},\n   doi={10.1109/EUSIPCO.2015.7362811},\n   acronym =  "EUSIPCO",\n   url_link= "https://ieeexplore.ieee.org/document/7362811",\n   url_paper  =  "http://honeine.fr/paul/publi/15.eusipco.online_knmf.pdf",\n   abstract={Nonnegative matrix factorization (NMF) has been increasingly investigated for data analysis and dimension-reduction. To tackle large-scale data, several online techniques for NMF have been introduced recently. So far, the online NMF has been limited to the linear model. This paper develops an online version of the nonlinear kernel-based NMF, where the decomposition is performed in the feature space. Taking the advantage of the stochastic gradient descent and the mini-batch scheme, the proposed method has a fixed, tractable complexity independent of the increasing samples number. We derive the multiplicative update rules of the general form, and describe in detail the case of the Gaussian kernel. 
The effectiveness of the proposed method is validated on unmixing hyperspectral images, compared with the state-of-the-art online NMF methods.}, \n   keywords={data analysis, Gaussian processes, geophysical image processing, gradient methods, hyperspectral imaging, matrix decomposition, stochastic processes, online nonnegative matrix factorization, kernel machine, data analysis, dimension-reduction, nonlinear kernel-based NMF, stochastic gradient descent scheme, minibatch scheme, Gaussian kernel, unmixing hyperspectral imaging, Kernel, Encoding, Linear programming, Europe, Signal processing, Stochastic processes, Computational complexity, Nonnegative matrix factorization, online learning, kernel machines, hyperspectral unmixing}, \n   ISSN={2076-1465}, \n}\n
\n
\n\n\n
\n Nonnegative matrix factorization (NMF) has been increasingly investigated for data analysis and dimension-reduction. To tackle large-scale data, several online techniques for NMF have been introduced recently. So far, the online NMF has been limited to the linear model. This paper develops an online version of the nonlinear kernel-based NMF, where the decomposition is performed in the feature space. Taking advantage of stochastic gradient descent and the mini-batch scheme, the proposed method has a fixed, tractable complexity independent of the growing number of samples. We derive the multiplicative update rules of the general form, and describe in detail the case of the Gaussian kernel. The effectiveness of the proposed method is validated on unmixing hyperspectral images, compared with the state-of-the-art online NMF methods.\n
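For reference, the linear batch NMF that this work kernelizes and makes online is commonly solved with the classical Lee-Seung multiplicative updates; a minimal sketch on random toy data (not hyperspectral, and not the paper's kernelized online algorithm):

```python
import numpy as np

def nmf_multiplicative(X, r, iters=300, seed=0):
    # Lee-Seung multiplicative updates for X ~ W @ H under the Frobenius loss;
    # the updates keep W and H nonnegative by construction
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.uniform(0.1, 1.0, (n, r))
    H = rng.uniform(0.1, 1.0, (r, m))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

rng = np.random.default_rng(4)
X = rng.uniform(0.0, 1.0, (30, 3)) @ rng.uniform(0.0, 1.0, (3, 40))  # rank-3, nonnegative
W, H = nmf_multiplicative(X, r=3)
rel = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(rel)
```

The paper replaces this batch, linear decomposition by one performed in a kernel-induced feature space, updated per mini-batch so the per-step cost stays constant as samples stream in.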
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Shrinkage methods for one-class classification.\n \n \n \n \n\n\n \n Nader, P.; Honeine, P.; and Beauseroy, P.\n\n\n \n\n\n\n In Proc. 23rd European Conference on Signal Processing (EUSIPCO), pages 135-139, Nice, France, 31 August–4 September 2015. \n \n\n\n\n
\n\n\n\n \n \n \"Shrinkage paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{15.eusipco.one_class,\n   author =  "Patric Nader and Paul Honeine and Pierre Beauseroy",\n   title =  "Shrinkage methods for one-class classification",\n   booktitle =  "Proc. 23rd European Conference on Signal Processing (EUSIPCO)",\n   address =  "Nice, France",\n   year  =  "2015",\n   month =  "31~" # aug # "--" # "4~" # sep,\n   pages    =  {135-139},\n   acronym =  "EUSIPCO",\n   url_paper  =  "http://honeine.fr/paul/publi/15.eusipco.oneclass.pdf",\n   abstract={Over the last decades, machine learning techniques have been an important asset for detecting nonlinear relations in data. In particular, one-class classification has been very popular in many fields, specifically in applications where the available data refer to a unique class only. In this paper, we propose a sparse approach for one-class classification problems. We define the one-class by the hypersphere enclosing the samples in the Reproducing Kernel Hilbert Space, where the center of this hypersphere depends only on a small fraction of the training dataset. The selection of the most relevant samples is achieved through shrinkage methods, namely Least Angle Regression, Least Absolute Shrinkage and Selection Operator, and Elastic Net. We modify these selection methods and adapt them for estimating the one-class center in the RKHS. 
We compare our algorithms to well-known one-class methods, and the experimental analysis is conducted on real datasets.}, \n   keywords={machine learning, one-class, cybersecurity, compressed sensing, Hilbert spaces, learning (artificial intelligence), regression analysis, shrinkage, signal classification, shrinkage methods, one-class classification, machine learning techniques, kernel Hilbert space, least angle regression, least absolute shrinkage, selection operator, elastic net, Kernel, Signal processing algorithms, Training, Support vector machines, Mathematical model, Correlation, Europe, One-class classification, kernel methods, shrinkage methods}, \n   doi={10.1109/EUSIPCO.2015.7362360}, \n   ISSN={2076-1465}, \n}\n
\n
\n\n\n
\n Over the last decades, machine learning techniques have been an important asset for detecting nonlinear relations in data. In particular, one-class classification has been very popular in many fields, specifically in applications where the available data refer to a unique class only. In this paper, we propose a sparse approach for one-class classification problems. We define the one-class by the hypersphere enclosing the samples in the Reproducing Kernel Hilbert Space, where the center of this hypersphere depends only on a small fraction of the training dataset. The selection of the most relevant samples is achieved through shrinkage methods, namely Least Angle Regression, Least Absolute Shrinkage and Selection Operator, and Elastic Net. We modify these selection methods and adapt them for estimating the one-class center in the RKHS. We compare our algorithms to well-known one-class methods, and the experimental analysis is conducted on real datasets.\n
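The squared RKHS distance to a hypersphere center expanded on training samples needs only kernel evaluations. A sketch with the dense (uniform-weight) empirical center, where the paper's shrinkage methods (LARS, LASSO, Elastic Net) would instead select a sparse weight vector beta; the data and kernel width are illustrative:

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

def rkhs_distance2(x, X, beta, sigma=1.0):
    # Squared RKHS distance from phi(x) to the center c = sum_j beta_j phi(x_j):
    # k(x, x) - 2 k_x^T beta + beta^T K beta
    K = np.array([[rbf(a, b, sigma) for b in X] for a in X])
    kx = np.array([rbf(x, xj, sigma) for xj in X])
    return rbf(x, x, sigma) - 2.0 * kx @ beta + beta @ K @ beta

rng = np.random.default_rng(5)
X = rng.standard_normal((50, 2)) * 0.3        # the single available class, near the origin
beta = np.full(50, 1.0 / 50)                  # dense uniform weights (empirical center)
inlier = rkhs_distance2(np.array([0.0, 0.0]), X, beta)
outlier = rkhs_distance2(np.array([4.0, 4.0]), X, beta)
print(inlier, outlier)
```

Points inside the class sit close to the center while anomalies sit far from it; thresholding this distance yields the one-class decision rule.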
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Gossip algorithms for principal component analysis in networks.\n \n \n \n \n\n\n \n Ghadban, N.; Honeine, P.; Mourad-Chehade, F.; Farah, J.; and Francis, C.\n\n\n \n\n\n\n In Proc. 23rd European Conference on Signal Processing (EUSIPCO), pages 2366-2370, Nice, France, 31 August–4 September 2015. \n \n\n\n\n
\n\n\n\n \n \n \"Gossip link\n  \n \n \n \"Gossip paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{15.eusipco.gossip_pca,\n   author =  "Nisrine Ghadban and Paul Honeine and Farah Mourad-Chehade and Joumana Farah and Clovis Francis",\n   title =  "Gossip algorithms for principal component analysis in networks",\n   booktitle =  "Proc. 23rd European Conference on Signal Processing (EUSIPCO)",\n   address =  "Nice, France",\n   year  =  "2015",\n   month =  "31~" # aug # "--" # "4~" # sep,\n   pages    =  "2366-2370",\n   acronym =  "EUSIPCO",\n   url_link= "https://ieeexplore.ieee.org/document/7362808",\n   url_paper  =  "http://honeine.fr/paul/publi/15.eusipco.gossip_pca.pdf",\n   abstract={This paper deals with the issues of the dimensionality reduction and the extraction of the structure of data using principal component analysis for the multivariable data in large-scale networks. In order to overcome the high computational complexity of this technique, we derive several in-network strategies to estimate the principal axes without the need for computing the sample covariance matrix. To this aim, we propose to combine Oja's iterative rule with average gossiping algorithms. Gossiping is used as a solution for communication between asynchronous nodes. The performance of the proposed approach is illustrated on time series acquisition in wireless sensor networks.}, \n   keywords={computational complexity, iterative methods, principal component analysis, time series, wireless sensor networks, gossip algorithms, principal component analysis, large-scale networks, computational complexity, Oja, iterative rule, asynchronous nodes, time series acquisition, wireless sensor networks, Principal component analysis, Signal processing algorithms, Algorithm design and analysis, Cost function, Routing, Signal processing, Data mining, Gossip averaging, principal component analysis, in-network processing, adaptive learning, distributed processing}, \n   doi={10.1109/EUSIPCO.2015.7362808}, \n   ISSN={2076-1465}, \n}\n
\n
\n\n\n
\n This paper deals with dimensionality reduction and the extraction of data structure using principal component analysis for multivariable data in large-scale networks. In order to overcome the high computational complexity of this technique, we derive several in-network strategies to estimate the principal axes without the need for computing the sample covariance matrix. To this end, we propose to combine Oja's iterative rule with average gossiping algorithms. Gossiping is used as a solution for communication between asynchronous nodes. The performance of the proposed approach is illustrated on time series acquisition in wireless sensor networks.\n
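Oja's iterative rule, the single-node building block that the paper combines with gossip averaging, can be sketched as follows; the toy two-dimensional data, step size, and epoch count are illustrative (a networked version would gossip-average w between neighbouring nodes after local updates):

```python
import numpy as np

def oja_principal_axis(X, eta=0.01, epochs=20, seed=0):
    # Oja's rule: w <- w + eta * y * (x - y * w) with y = w.x; the -y^2 w term
    # keeps ||w|| near 1 while w turns toward the leading covariance eigenvector,
    # so the sample covariance matrix is never formed
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x
            w += eta * y * (x - y * w)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(6)
u = np.array([1.0, 1.0]) / np.sqrt(2)                # true principal direction
t = rng.standard_normal(500)
X = 3.0 * np.outer(t, u) + 0.1 * rng.standard_normal((500, 2))
w = oja_principal_axis(X)
print(abs(w @ u))
```

The estimated axis aligns with the direction of largest variance up to sign, which is why |w . u| is the natural accuracy measure.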
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Unmixing Multitemporal Hyperspectral Images Accounting for Endmember Variability.\n \n \n \n \n\n\n \n Halimi, A.; Dobigeon, N.; Tourneret, J.; McLaughlin, S.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 23rd European Conference on Signal Processing (EUSIPCO), pages 1656-1660, Nice, France, 31 August–4 September 2015. \n \n\n\n\n
\n\n\n\n \n \n \"Unmixing link\n  \n \n \n \"Unmixing paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{15.eusipco.variability,\n   author =  "Abderrahim Halimi and Nicolas Dobigeon and Jean-Yves Tourneret and Steve McLaughlin and Paul Honeine",\n   title =  "Unmixing Multitemporal Hyperspectral Images Accounting for Endmember Variability",\n   booktitle =  "Proc. 23rd European Conference on Signal Processing (EUSIPCO)",\n   address =  "Nice, France",\n   year  =  "2015",\n   month =  "31~" # aug # "--" # "4~" # sep,\n   pages    =  {1656-1660},\n   acronym =  "EUSIPCO",\n   url_link= "https://ieeexplore.ieee.org/document/7362665",\n   url_paper  =  "http://honeine.fr/paul/publi/15.eusipco.variability.pdf",\n   abstract={This paper proposes an unsupervised Bayesian algorithm for unmixing successive hyperspectral images while accounting for temporal and spatial variability of the endmembers. Each image pixel is modeled as a linear combination of the end-members weighted by their corresponding abundances. Spatial endmember variability is introduced by considering the normal compositional model that assumes variable endmembers for each image pixel. A prior enforcing a smooth temporal variation of both endmembers and abundances is considered. The proposed algorithm estimates the mean vectors and covariance matrices of the endmembers and the abundances associated with each image. Since the estimators are difficult to express in closed form, we propose to sample according to the posterior distribution of interest and use the generated samples to build estimators. 
The performance of the proposed Bayesian model and the corresponding estimation algorithm is evaluated by comparison with other unmixing algorithms on synthetic images.}, \n   keywords={Bayesian inference, hyperspectral, covariance matrices, hyperspectral imaging, image processing, statistical distributions, unmixing multitemporal hyperspectral image, unsupervised Bayesian algorithm, image pixel, endmember temporal variability, endmember spatial variability, normal compositional model, smooth temporal variation, mean vector estimation, covariance matrices, posterior distribution, Signal processing algorithms, Bayes methods, Hyperspectral imaging, Europe, Signal processing, Covariance matrices, Indexes, Hyperspectral unmixing, spectral variability, temporal and spatial variability, Bayesian algorithm, Hamiltonian Monte-Carlo, MCMC methods}, \n   doi={10.1109/EUSIPCO.2015.7362665}, \n   ISSN={2076-1465}, \n}\n\n
\n
\n\n\n
\n This paper proposes an unsupervised Bayesian algorithm for unmixing successive hyperspectral images while accounting for temporal and spatial variability of the endmembers. Each image pixel is modeled as a linear combination of the endmembers weighted by their corresponding abundances. Spatial endmember variability is introduced by considering the normal compositional model that assumes variable endmembers for each image pixel. A prior enforcing a smooth temporal variation of both endmembers and abundances is considered. The proposed algorithm estimates the mean vectors and covariance matrices of the endmembers and the abundances associated with each image. Since the estimators are difficult to express in closed form, we propose to sample according to the posterior distribution of interest and use the generated samples to build estimators. The performance of the proposed Bayesian model and the corresponding estimation algorithm is evaluated by comparison with other unmixing algorithms on synthetic images.\n
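The linear mixing model these unmixing papers build on is easy to make concrete. Below is a minimal sketch, not the authors' code: the band count, endmember spectra, abundances, and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented endmember matrix: 50 spectral bands, 3 materials.
E = np.abs(rng.normal(1.0, 0.3, size=(50, 3)))

# Abundances live on the probability simplex: nonnegative, summing to one.
a = np.array([0.6, 0.3, 0.1])

# Each pixel is a linear combination of endmembers plus additive noise;
# the Bayesian algorithm above additionally lets the endmembers vary
# per pixel and per image, and places a smoothness prior across time.
pixel = E @ a + 0.01 * rng.normal(size=50)
```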
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hyperspectral unmixing accounting for spatial correlations and endmember variability.\n \n \n \n \n\n\n \n Halimi, A.; Dobigeon, N.; Tourneret, J.; and Honeine, P.\n\n\n \n\n\n\n In Proc. IEEE Workshop on Hyperspectral Image and Signal Processing : Evolution in Remote Sensing (WHISPERS), Tokyo, Japan, 2 - 5 June 2015. \n \n\n\n\n
\n \"Hyperspectral link\"  \"Hyperspectral paper\"  doi  link  bibtex  abstract\n
\n
@INPROCEEDINGS{15.whispers.variability,\n   author =  "Abderrahim Halimi and Nicolas Dobigeon and Jean-Yves Tourneret and Paul Honeine",\n   title =  "Hyperspectral unmixing accounting for spatial correlations and endmember variability",\n   booktitle =  "Proc. IEEE Workshop on Hyperspectral Image and Signal Processing : Evolution in Remote Sensing (WHISPERS)",\n   address =  "Tokyo, Japan",\n   year  =  "2015",\n   month =  "2 - 5~" # jun,\n   acronym =  "WHISPERS",\n   url_link= "https://ieeexplore.ieee.org/document/8075442",\n   url_paper  =  "http://honeine.fr/paul/publi/15.whispers.variability.pdf",\n   abstract={This paper presents an unsupervised Bayesian algorithm for hyper-spectral image unmixing accounting for endmember variability. This variability is obtained by assuming that each pixel is a linear combination of random endmembers weighted by their corresponding abundances. An additive noise is also considered in the proposed model generalizing the normal compositional model. The proposed model is unsupervised since it estimates the abundances and both the mean and the covariance matrix of each endmember. A classification map indicating the class of each pixel is also obtained based on the estimated abundances. 
Simulations conducted on a real dataset show the potential of the proposed model in terms of unmixing performance for the analysis of hyperspectral images.}, \n   keywords={Bayesian inference, hyperspectral, Bayes methods, covariance matrices, geophysical image processing, hyperspectral imaging, image classification, normal compositional model, spatial correlations, endmember variability, unsupervised Bayesian algorithm, random endmembers, additive noise, hyper-spectral image unmixing, mean matrix, covariance matrix, classification map, Bayes methods, Hyperspectral imaging, Correlation, Estimation, Markov processes, Standards, Covariance matrices, Hyperspectral imagery, endmember variability, image classification, Markov chain Monte-Carlo}, \n   doi={10.1109/WHISPERS.2015.8075442}, \n   ISSN={2158-6276}, \n}\n
\n
\n\n\n
\n This paper presents an unsupervised Bayesian algorithm for hyperspectral image unmixing accounting for endmember variability. This variability is obtained by assuming that each pixel is a linear combination of random endmembers weighted by their corresponding abundances. An additive noise is also considered in the proposed model generalizing the normal compositional model. The proposed model is unsupervised since it estimates the abundances and both the mean and the covariance matrix of each endmember. A classification map indicating the class of each pixel is also obtained based on the estimated abundances. Simulations conducted on a real dataset show the potential of the proposed model in terms of unmixing performance for the analysis of hyperspectral images.\n
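Under the normal compositional model invoked here, each pixel draws its own endmember realizations from per-endmember Gaussians before mixing them. A toy sketch of that generative step (means, covariance, and abundances are all made up, and the model's extra additive-noise term is noted but omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_end = 20, 3

# Invented endmember means and a shared isotropic covariance.
means = np.abs(rng.normal(1.0, 0.2, size=(n_end, n_bands)))
cov = 1e-3 * np.eye(n_bands)

a = np.array([0.5, 0.3, 0.2])  # abundances on the simplex

# Each pixel gets its own random endmember draws (spatial variability),
# then mixes them linearly; the generalized model adds noise on top.
endmembers = np.stack([rng.multivariate_normal(m, cov) for m in means])
pixel = a @ endmembers
```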
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Pareto front of bi-objective kernel-based nonnegative matrix factorization.\n \n \n \n \n\n\n \n Zhu, F.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), pages 585 - 590, Bruges, Belgium, 22 - 24 April 2015. \n \n\n\n\n
\n \"Pareto paper\"  link  bibtex  abstract\n
\n
@INPROCEEDINGS{15.esann.nmf,\n   author =  "Fei Zhu and Paul Honeine",\n   title =  "Pareto front of bi-objective kernel-based nonnegative matrix factorization",\n   booktitle =  "Proc. 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN)",\n   address =  "Bruges, Belgium",\n   year  =  "2015",\n   month =  "22 - 24~" # apr,\n   pages    =  {585 - 590},\n   isbn =  "978-287587014-8",\n   keywords  =  "machine learning, hyperspectral, nonnegative matrix factorization, dimensionality reduction, multi-objective optimization, bi-objective optimization problem, kernel methods, nonlinear mode, sum-weighted approach, Pareto front approximation, Pareto optimal, NMF, biobjective nonnegative matrix factorization, Pareto optimisation, Pareto optimization",\n   acronym =  "ESANN",\n   url_paper  =  "http://honeine.fr/paul/publi/15.esann.nmf.pdf",\n   abstract = "The nonnegative matrix factorization (NMF) is a powerful data analysis and dimensionality reduction technique. So far, the NMF has been limited to a single-objective problem in either its linear or nonlinear kernel-based formulation. This paper presents a novel bi-objective NMF model based on kernel machines, where the decomposition is performed simultaneously in both input and feature spaces. The problem is solved employing the sum-weighted approach. Without loss of generality, we study the case of the Gaussian kernel, where the multiplicative update rules are derived and the Pareto front is approximated. The performance of the proposed method is demonstrated for unmixing hyperspectral images.",\n}\n
\n
\n\n\n
\n The nonnegative matrix factorization (NMF) is a powerful data analysis and dimensionality reduction technique. So far, the NMF has been limited to a single-objective problem in either its linear or nonlinear kernel-based formulation. This paper presents a novel bi-objective NMF model based on kernel machines, where the decomposition is performed simultaneously in both input and feature spaces. The problem is solved employing the sum-weighted approach. Without loss of generality, we study the case of the Gaussian kernel, where the multiplicative update rules are derived and the Pareto front is approximated. The performance of the proposed method is demonstrated for unmixing hyperspectral images.\n
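The sum-weighted scalarization used to approximate the Pareto front can be illustrated on a generic bi-objective toy problem; the two quadratics below are stand-ins, not the paper's input-space and feature-space NMF costs.

```python
import numpy as np

# Two competing objectives of a single variable x (toy stand-ins).
f1 = lambda x: (x - 1.0) ** 2
f2 = lambda x: (x + 1.0) ** 2

xs = np.linspace(-2.0, 2.0, 2001)
front = []
# Sweep the weight, minimize the scalarized objective, record both costs:
# each minimizer is Pareto optimal for its weight, and the collected
# (f1, f2) pairs trace an approximation of the Pareto front.
for w in np.linspace(0.0, 1.0, 11):
    scalarized = w * f1(xs) + (1.0 - w) * f2(xs)
    x_star = xs[np.argmin(scalarized)]
    front.append((f1(x_star), f2(x_star)))
```

The same sweep applies when the two objectives are the NMF reconstruction errors in input and feature space, with the grid search replaced by the paper's multiplicative updates.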
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Online One-class Classification for Intrusion Detection Based on the Mahalanobis Distance.\n \n \n \n \n\n\n \n Nader, P.; Honeine, P.; and Beauseroy, P.\n\n\n \n\n\n\n In Proc. 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), pages 567 - 572, Bruges, Belgium, 22 - 24 April 2015. \n \n\n\n\n
\n \"Online paper\"  link  bibtex  abstract\n
\n
@INPROCEEDINGS{15.esann.oneclass,\n   author =  "Patric Nader and Paul Honeine and Pierre Beauseroy",\n   title =  "Online One-class Classification for Intrusion Detection Based on the Mahalanobis Distance",\n   booktitle =  "Proc. 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN)",\n   address =  "Bruges, Belgium",\n   year  =  "2015",\n   month =  "22 - 24~" # apr,\n   pages    =  {567 - 572},\n   isbn = "978-287587014-8",\n   acronym =  "ESANN",\n   url_paper  =  "http://honeine.fr/paul/publi/15.esann.oneclass.pdf",\n   keywords  =  "machine learning, one-class, cybersecurity, computer crime, critical infrastructures, firewalls, learning (artificial intelligence), pattern classification, principal component analysis, radial basis function networks, SCADA systems, support vector machines, lp-norms, SCADA systems, information and communication technologies, supervisory control and data acquisition systems, cyberattacks heterogeneity, critical infrastructures, SCADA networks, systems vulnerabilities, intrusion detection systems, IDS, cyberattacks modeling, malicious intrusions detection, firewalls, machine learning, one-class classification algorithms, support vector data description, SVDD, kernel principle component analysis, radial basis function kernels, RBF kernels, bandwidth parameter, Kernel, Machine learning, SCADA systems, Intrusion detection, Optimization, Intrusion detection, kernel methods, ${\\mbi {l_p}}$ -norms, one-class classification, supervisory control and data acquisition (SCADA) systems, Mahalanobis distance",\n   abstract = "Machine learning techniques have been very popular in the past decade for their ability to detect hidden patterns in large volumes of data. Researchers have been developing online intrusion detection algorithms based on these techniques. 
In this paper, we propose an online one-class classification approach based on the Mahalanobis distance which takes into account the covariance in each feature direction and the different scaling of the coordinate axes. We define the one-class problem by two concentric hyperspheres enclosing the support vectors of the description. We update the classifier at each time step. The tests are conducted on real data.",\n}\n
\n
\n\n\n
\n Machine learning techniques have been very popular in the past decade for their ability to detect hidden patterns in large volumes of data. Researchers have been developing online intrusion detection algorithms based on these techniques. In this paper, we propose an online one-class classification approach based on the Mahalanobis distance which takes into account the covariance in each feature direction and the different scaling of the coordinate axes. We define the one-class problem by two concentric hyperspheres enclosing the support vectors of the description. We update the classifier at each time step. The tests are conducted on real data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A new Bayesian unmixing algorithm for hyperspectral images mitigating endmember variability.\n \n \n \n \n\n\n \n Halimi, A.; Dobigeon, N.; Tourneret, J.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2469 - 2473, Brisbane, Australia, 19 - 24 April 2015. \n \n\n\n\n
\n \"A link\"  \"A paper\"  doi  link  bibtex  abstract\n
\n
@INPROCEEDINGS{15.icassp.hype,\n   author =  "Abderrahim Halimi and Nicolas Dobigeon and Jean-Yves Tourneret and Paul Honeine",\n   title =  "A new {Bayesian} unmixing algorithm for hyperspectral images mitigating endmember variability",\n   booktitle =  "Proc. 40th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",\n   address =  "Brisbane, Australia",\n   month =  "19 - 24~" # apr,\n   year =  "2015",\n   pages = "2469 - 2473",\n   doi  = "10.1109/ICASSP.2015.7178415",\n   acronym  =  "ICASSP",\n   url_link= "https://ieeexplore.ieee.org/document/7178415",\n   url_paper   =  "http://honeine.fr/paul/publi/15.icassp.hype.pdf",\n   abstract={This paper presents an unsupervised Bayesian algorithm for hyperspectral image unmixing accounting for endmember variability. Each image pixel is modeled by a linear combination of random endmembers to take into account endmember variability in the image. The coefficients of this linear combination (referred to as abundances) allow the proportions of each material (endmembers) to be quantified in the image pixel. An additive noise is also considered in the proposed model generalizing the normal compositional model. The proposed Bayesian algorithm exploits spatial correlations between adjacent pixels of the image and provides spectral information by achieving a spectral unmixing. It estimates both the mean and the covariance matrix of each endmember in the image. A spatial classification is also obtained based on the estimated abundances. 
Simulations conducted with synthetic and real data show the potential of the proposed model and the unmixing performance for the analysis of hyperspectral images.}, \n   keywords={covariance matrices, hyperspectral imaging, image classification, image processing, Bayesian unmixing algorithm, hyperspectral images, endmember variability, Bayesian algorithm, hyperspectral image unmixing, image pixel, linear combination, additive noise, spatial correlations, spectral information, spectral unmixing, covariance matrix, spatial classification, Bayes methods, Hyperspectral imaging, Noise, Correlation, Monte Carlo methods, Hyperspectral imagery, endmember variability, image classification, Hamiltonian Monte-Carlo}, \n   ISSN={1520-6149},\n}\n
\n
\n\n\n
\n This paper presents an unsupervised Bayesian algorithm for hyperspectral image unmixing accounting for endmember variability. Each image pixel is modeled by a linear combination of random endmembers to take into account endmember variability in the image. The coefficients of this linear combination (referred to as abundances) allow the proportions of each material (endmembers) to be quantified in the image pixel. An additive noise is also considered in the proposed model generalizing the normal compositional model. The proposed Bayesian algorithm exploits spatial correlations between adjacent pixels of the image and provides spectral information by achieving a spectral unmixing. It estimates both the mean and the covariance matrix of each endmember in the image. A spatial classification is also obtained based on the estimated abundances. Simulations conducted with synthetic and real data show the potential of the proposed model and the unmixing performance for the analysis of hyperspectral images.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2014\n \n \n (17)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Detection of contamination in water distribution network.\n \n \n \n \n\n\n \n Noumir, Z.; Guépié, B. K.; Fillatre, L.; Honeine, P.; Nikiforov, I.; Snoussi, H.; Richard, C.; Jarrige, P.; and Campan, F.\n\n\n \n\n\n\n In Gourbesville, P.; Cunge, J.; and Caignaert, G., editor(s), Advances in Hydroinformatics, of Springer Hydrogeology, 12, pages 141 - 151. Springer Singapore, 2014.\n \n\n\n\n
\n \"Detection link\"  doi  link  bibtex  abstract\n
\n
@incollection{14.simhydro.chapter,\n   author =  "Zineb Noumir and Blaise Kévin Guépié and Lionel Fillatre and Paul Honeine and Igor Nikiforov and Hichem Snoussi and Cédric Richard and Pierre-Antoine Jarrige and Francis Campan",\n   title =  "Detection of contamination in water distribution network",\n   booktitle =  "Advances in Hydroinformatics",\n   editor={Gourbesville, Philippe and Cunge, Jean and Caignaert, Guy},\n   Publisher = {Springer Singapore},\n   series = "Springer Hydrogeology",\n   chapter = "12",\n   isbn =  "978-981-4451-41-3",\n   Pages = {141 - 151},\n   year  =  "2014",\n   url_link = {http://dx.doi.org/10.1007/978-981-4451-42-0_12},\n   doi={10.1007/978-981-4451-42-0_12},\n   keywords =  "machine learning, sparsity, adaptive filtering, one-class, cybersecurity, drinking water, water pollution, parametric detection, one-class classification, field experiment, sensor",\n   acronym =  "Hydro",\n   abstract = "Monitoring drinking water is an important public health problem because safe drinking water is essential for human life. Many procedures have been developed for monitoring water quality in water treatment plants for years. Monitoring of water distribution systems has received less attention. The goal of this communication is to study the problem of drinking water safety by ensuring the monitoring of the distribution network from water plant to customers. The system is based on the observation of residual chlorine concentrations which are provided by the sensor network. The complexity of the detection problem is due to the water distribution network complexity and dynamic profiles of water consumption. The onset time and geographic location of water contamination are unknown. Its duration is also unknown but finite. Moreover, the residual chlorine concentrations, which are modified by the contamination, are also time dependent since they are functions of water consumption. Two approaches for detection are presented. 
The first one, namely the parametric approach, exploits the hydraulic model to compute the nominal residual chlorine concentrations. The second one, namely the nonparametric approach, is a statistical methodology exploiting historical data. Finally, the probable area of introduction of the pollutant and the propagation of the pollution are computed and displayed to operational users.",\n}%url_paper  =  "http://honeine.fr/paul/publi/13.simhydro.chapter.pdf",\n
\n
\n\n\n
\n Monitoring drinking water is an important public health problem because safe drinking water is essential for human life. Many procedures have been developed for monitoring water quality in water treatment plants for years. Monitoring of water distribution systems has received less attention. The goal of this communication is to study the problem of drinking water safety by ensuring the monitoring of the distribution network from water plant to customers. The system is based on the observation of residual chlorine concentrations which are provided by the sensor network. The complexity of the detection problem is due to the water distribution network complexity and dynamic profiles of water consumption. The onset time and geographic location of water contamination are unknown. Its duration is also unknown but finite. Moreover, the residual chlorine concentrations, which are modified by the contamination, are also time dependent since they are functions of water consumption. Two approaches for detection are presented. The first one, namely the parametric approach, exploits the hydraulic model to compute the nominal residual chlorine concentrations. The second one, namely the nonparametric approach, is a statistical methodology exploiting historical data. Finally, the probable area of introduction of the pollutant and the propagation of the pollution are computed and displayed to operational users.\n
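The nonparametric idea, learning normal behavior from historical data rather than from a hydraulic model, can be caricatured with per-hour control limits on residual chlorine. All numbers below are fabricated; real consumption-driven profiles are far more complex than a single sinusoid.

```python
import numpy as np

rng = np.random.default_rng(3)

# Fabricated history: residual chlorine (mg/L), 60 days x 24 hourly
# readings, with a daily consumption-driven cycle plus sensor noise.
daily_cycle = 0.5 + 0.1 * np.sin(np.linspace(0.0, 2.0 * np.pi, 24))
history = daily_cycle[None, :] + 0.02 * rng.normal(size=(60, 24))

# Nonparametric per-hour control limits from empirical quantiles:
# the "normal" band is time dependent, mirroring consumption profiles.
lo = np.quantile(history, 0.01, axis=0)
hi = np.quantile(history, 0.99, axis=0)

def alarm(hour, value):
    """Raise an alarm when a reading leaves its hour-specific normal band."""
    return bool(value < lo[hour] or value > hi[hour])
```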
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Target tracking using machine learning and Kalman filter in wireless sensor networks.\n \n \n \n \n\n\n \n Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; and Snoussi, H.\n\n\n \n\n\n\n IEEE Sensors Journal, 14(10): 3715 - 3725. October 2014.\n \n\n\n\n
\n \"Target paper\"  doi  link  bibtex  abstract\n
\n
@ARTICLE{14.wsn_kalman,\n   author =  "Sandy Mahfouz and Farah Mourad-Chehade and Paul Honeine and Joumana Farah and Hichem Snoussi",\n   title =  "Target tracking using machine learning and {Kalman} filter in wireless sensor networks",\n   journal =  "IEEE Sensors Journal",\n   year  =  "2014",\n   volume =  "14",\n   number =  "10",\n   pages =  "3715 - 3725",\n   month =  oct,\n   doi= "10.1109/JSEN.2014.2332098", \n   url_paper  =  "http://www.honeine.fr/paul/publi/14.wsn_kalman.pdf",\n   keywords  =  "machine learning, wireless sensor networks, Kalman filters, learning (artificial intelligence), target tracking, wireless sensor networks, vector-output regularized least squares, kernel-based ridge regression, machine learning algorithms, received signal strength indicators, radio-fingerprints, wireless sensor networks, Kalman filter, target tracking, Acceleration, Vectors, Target tracking, Sensors, Covariance matrices, Mathematical model, State-space methods, Radio-fingerprinting, Kalman filter, machine learning, RSSI, target tracking, wireless sensor networks",\n   abstract={This paper describes an original method for target tracking in wireless sensor networks. The proposed method combines machine learning with a Kalman filter to estimate instantaneous positions of a moving target. The target's accelerations, along with information from the network, are used to obtain an accurate estimation of its position. To this end, radio-fingerprints of received signal strength indicators (RSSIs) are first collected over the surveillance area. The obtained database is then used with machine learning algorithms to compute a model that estimates the position of the target using only RSSI information. This model leads to a first position estimate of the target under investigation. The kernel-based ridge regression and the vector-output regularized least squares are used in the learning process. 
The Kalman filter is used afterward to combine predictions of the target's positions based on acceleration information with the first estimates, leading to more accurate ones. The performance of the method is studied for different scenarios and a thorough comparison with well-known algorithms is also provided.}, \n}\n
\n
\n\n\n
\n This paper describes an original method for target tracking in wireless sensor networks. The proposed method combines machine learning with a Kalman filter to estimate instantaneous positions of a moving target. The target's accelerations, along with information from the network, are used to obtain an accurate estimation of its position. To this end, radio-fingerprints of received signal strength indicators (RSSIs) are first collected over the surveillance area. The obtained database is then used with machine learning algorithms to compute a model that estimates the position of the target using only RSSI information. This model leads to a first position estimate of the target under investigation. The kernel-based ridge regression and the vector-output regularized least squares are used in the learning process. The Kalman filter is used afterward to combine predictions of the target's positions based on acceleration information with the first estimates, leading to more accurate ones. The performance of the method is studied for different scenarios and a thorough comparison with well-known algorithms is also provided.\n
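The first stage, regressing position from RSSI fingerprints with kernel ridge regression, can be sketched as below. The anchor layout, path-loss model, and hyperparameters are all invented, and the Kalman fusion stage with acceleration-driven predictions is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)

# Invented survey: 200 fingerprint positions in a 10 m x 10 m area,
# RSSI measured from 4 anchors via a toy log-distance path-loss model.
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
positions = rng.uniform(0.0, 10.0, size=(200, 2))
dists = np.linalg.norm(positions[:, None, :] - anchors[None, :, :], axis=2)
rssi = -40.0 - 20.0 * np.log10(dists + 1.0)

def gaussian_kernel(A, B, sigma=5.0):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / (2.0 * sigma ** 2))

# Vector-output kernel ridge regression: one set of dual coefficients
# predicts both coordinates at once from a new RSSI vector.
K = gaussian_kernel(rssi, rssi)
alpha = np.linalg.solve(K + 1e-2 * np.eye(len(K)), positions)

def estimate_position(rssi_new):
    return gaussian_kernel(rssi_new[None, :], rssi) @ alpha  # shape (1, 2)
```

In the paper this first estimate is then refined by a Kalman filter that fuses it with predictions driven by the target's measured accelerations.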
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Variants of non-negative least-mean-square algorithm and convergence analysis.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; Bermudez, J. C. M.; and Honeine, P.\n\n\n \n\n\n\n IEEE Transactions on Signal Processing, 62(15): 3990 - 4005. August 2014.\n \n\n\n\n
\n \"Variants paper\"  \"Variants code\"  doi  link  bibtex  abstract  4 downloads\n
\n
@ARTICLE{14.nnlms,\n   author =  "Jie Chen and Cédric Richard and José C. M. Bermudez and Paul Honeine",\n   title =  "Variants of non-negative least-mean-square algorithm and convergence analysis",\n   journal =  "IEEE Transactions on Signal Processing",\n   year  =  "2014",\n   volume =  "62",\n   number =  "15",\n   pages =  "3990 - 4005",\n   month =  aug,\n   doi="10.1109/TSP.2014.2332440", \n   url_paper  =  "http://www.honeine.fr/paul/publi/14.nnlms.pdf",\n   url_code  =  "http://www.honeine.fr/paul/publi/14.nnlms.zip",\n   keywords  =  "non-negativity, adaptive filtering, convergence, least mean squares methods, stochastic processes, Wiener filters, nonnegative least-mean-square algorithm, convergence analysis, parameter estimation, NNLMS, Wiener filtering problem, computational cost, stochastic behavior, adaptive weights, nonstationary environments, Gaussian inputs, Signal processing algorithms, Convergence, Algorithm design and analysis, Prediction algorithms, Vectors, Equations, Estimation, Adaptive signal processing, convergence analysis, exponential algorithm, least-mean-square algorithms, non-negativity constraints, normalized algorithm, sign-sign algorithm",\n   abstract={Due to the inherent physical characteristics of systems under investigation, non-negativity is one of the most interesting constraints that can usually be imposed on the parameters to estimate. The Non-Negative Least-Mean-Square algorithm (NNLMS) was proposed to adaptively find solutions of a typical Wiener filtering problem but with the side constraint that the resulting weights need to be non-negative. It has been shown to have good convergence properties. Nevertheless, certain practical applications may benefit from the use of modified versions of this algorithm. In this paper, we derive three variants of NNLMS. 
Each variant aims at improving the NNLMS performance regarding one of the following aspects: sensitivity of input power, unbalance of convergence rates for different weights and computational cost. We study the stochastic behavior of the adaptive weights for these three new algorithms for non-stationary environments. This study leads to analytical models to predict the first and second order moment behaviors of the weights for Gaussian inputs. Simulation results are presented to illustrate the performance of the new algorithms and the accuracy of the derived models.}, \n}\n
\n
\n\n\n
\n Due to the inherent physical characteristics of systems under investigation, non-negativity is one of the most interesting constraints that can usually be imposed on the parameters to estimate. The Non-Negative Least-Mean-Square algorithm (NNLMS) was proposed to adaptively find solutions of a typical Wiener filtering problem but with the side constraint that the resulting weights need to be non-negative. It has been shown to have good convergence properties. Nevertheless, certain practical applications may benefit from the use of modified versions of this algorithm. In this paper, we derive three variants of NNLMS. Each variant aims at improving the NNLMS performance regarding one of the following aspects: sensitivity of input power, unbalance of convergence rates for different weights and computational cost. We study the stochastic behavior of the adaptive weights for these three new algorithms for non-stationary environments. This study leads to analytical models to predict the first and second order moment behaviors of the weights for Gaussian inputs. Simulation results are presented to illustrate the performance of the new algorithms and the accuracy of the derived models.\n
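The baseline NNLMS update that the variants build on scales the usual LMS correction componentwise by the current weight, so trajectories cannot cross zero. A toy system-identification sketch (step size, dimensions, and data are invented; the three variants in the paper modify this update):

```python
import numpy as np

rng = np.random.default_rng(5)

w_true = np.array([0.7, 0.0, 0.3, 0.5])   # nonnegative system to identify
w = np.full(4, 0.25)                      # nonnegative initialization
mu = 0.01                                 # step size (small enough here)

for _ in range(5000):
    x = rng.normal(size=4)
    d = float(w_true @ x) + 0.01 * rng.normal()
    e = d - float(w @ x)
    # NNLMS: LMS correction scaled componentwise by w itself, which
    # keeps each weight nonnegative (zero acts as an absorbing point);
    # it also makes convergence rates depend on the weight magnitudes,
    # the unbalance the paper's variants address.
    w = w + mu * e * x * w
```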
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n $\\ell_p$-norms in One-Class Classification for Intrusion Detection in SCADA Systems.\n \n \n \n \n\n\n \n Nader, P.; Honeine, P.; and Beauseroy, P.\n\n\n \n\n\n\n IEEE Transactions on Industrial Informatics, 10(4): 2308 - 2317. November 2014.\n \n\n\n\n
\n \"$\\ell_p$-norms paper\"  doi  link  bibtex  abstract\n
\n
@ARTICLE{14.scada_oneclass,\n   author =  "Patric Nader and Paul Honeine and Pierre Beauseroy",\n   title =  "$\\ell_p$-norms in One-Class Classification for Intrusion Detection in SCADA Systems",\n   journal =  "IEEE Transactions on Industrial Informatics",\n   year  =  "2014",\n   month= nov, \n   volume={10}, \n   number={4}, \n   pages="2308 - 2317", \n   doi="10.1109/TII.2014.2330796", \n   url_paper  =  "http://www.honeine.fr/paul/publi/14.one_class.pdf",\n   keywords  =  "machine learning, one-class, cybersecurity, computer crime, critical infrastructures, firewalls, learning (artificial intelligence), pattern classification, principal component analysis, radial basis function networks, SCADA systems, support vector machines, lp-norms, SCADA systems, information and communication technologies, supervisory control and data acquisition systems, cyberattacks heterogeneity, critical infrastructures, SCADA networks, systems vulnerabilities, intrusion detection systems, IDS, cyberattacks modeling, malicious intrusions detection, firewalls, machine learning, one-class classification algorithms, support vector data description, SVDD, kernel principle component analysis, radial basis function kernels, RBF kernels, bandwidth parameter, Kernel, Machine learning, SCADA systems, Intrusion detection, Optimization, Intrusion detection, kernel methods, ${\\mbi {l_p}}$ -norms, one-class classification, supervisory control and data acquisition (SCADA) systems",\n   abstract={The massive use of information and communication technologies in supervisory control and data acquisition (SCADA) systems opens new ways for carrying out cyberattacks against critical infrastructures relying on SCADA networks. The various vulnerabilities in these systems and the heterogeneity of cyberattacks make the task extremely difficult for traditional intrusion detection systems (IDS). Modeling cyberattacks has become nearly impossible and their potential consequences may be very severe. 
The primary objective of this work is to detect malicious intrusions once they have already bypassed traditional IDS and firewalls. This paper investigates the use of machine learning for intrusion detection in SCADA systems using one-class classification algorithms. Two approaches of one-class classification are investigated: 1) the support vector data description (SVDD); and 2) the kernel principal component analysis. The impact of the considered metric is examined in detail with the study of lp-norms in radial basis function (RBF) kernels. A heuristic is proposed to find an optimal choice of the bandwidth parameter in these kernels. Tests are conducted on real data with several types of cyberattacks.},\n}\n
\n
\n\n\n
\n The massive use of information and communication technologies in supervisory control and data acquisition (SCADA) systems opens new ways for carrying out cyberattacks against critical infrastructures relying on SCADA networks. The various vulnerabilities in these systems and the heterogeneity of cyberattacks make the task extremely difficult for traditional intrusion detection systems (IDS). Modeling cyberattacks has become nearly impossible and their potential consequences may be very severe. The primary objective of this work is to detect malicious intrusions once they have already bypassed traditional IDS and firewalls. This paper investigates the use of machine learning for intrusion detection in SCADA systems using one-class classification algorithms. Two approaches of one-class classification are investigated: 1) the support vector data description (SVDD); and 2) the kernel principal component analysis. The impact of the considered metric is examined in detail with the study of lp-norms in radial basis function (RBF) kernels. A heuristic is proposed to find an optimal choice of the bandwidth parameter in these kernels. Tests are conducted on real data with several types of cyberattacks.\n
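As a reading aid, the abstract's core idea, anomaly scoring with an lp-norm generalization of the RBF kernel, can be sketched in a few lines. This is not the paper's SVDD or KPCA formulation; it is a hypothetical minimal detector that scores points by their RKHS distance to the training center of mass, with all parameter values (p, sigma, the synthetic data) chosen arbitrarily:

```python
import numpy as np

def lp_rbf_kernel(X, Y, p=1.5, sigma=1.0):
    """Generalized RBF kernel k(x, y) = exp(-||x - y||_p^p / sigma^p)."""
    diffs = np.abs(X[:, None, :] - Y[None, :, :]) ** p
    return np.exp(-diffs.sum(axis=2) / sigma ** p)

def center_distance_scores(X_train, X_test, p=1.5, sigma=1.0):
    """Squared RKHS distance of each test point to the training center of mass:
    ||phi(x) - m||^2 = k(x, x) - 2 mean_i k(x, x_i) + mean_ij k(x_i, x_j),
    where k(x, x) = 1 for this kernel family."""
    K_tr = lp_rbf_kernel(X_train, X_train, p, sigma)
    K_te = lp_rbf_kernel(X_test, X_train, p, sigma)
    return 1.0 - 2.0 * K_te.mean(axis=1) + K_tr.mean()

rng = np.random.default_rng(0)
nominal = rng.normal(0.0, 0.3, size=(200, 2))     # "normal operation" samples
queries = np.array([[0.0, 0.1],                   # close to nominal behaviour
                    [3.0, 3.0]])                  # far away: should score high
scores = center_distance_scores(nominal, queries)
print(scores)
```

With p = 2 this reduces to the usual Gaussian RBF kernel; varying p changes the metric inside the kernel, which is exactly the design axis the paper studies.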
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Nonlinear estimation of material abundances of hyperspectral images with $\\ell_1$-norm spatial regularization.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; and Honeine, P.\n\n\n \n\n\n\n IEEE Transactions on Geoscience and Remote Sensing, 52(5): 2654 - 2665. May 2014.\n \n\n\n\n
\n
@ARTICLE{14.tgrs.nonlinear,\n   author =  "Jie Chen and Cédric Richard and Paul Honeine",\n   title =  "Nonlinear estimation of material abundances of hyperspectral images with $\\ell_1$-norm spatial regularization",\n   journal =  "IEEE Transactions on Geoscience and Remote Sensing",\n   year  =  "2014",\n   volume =  "52",\n   number =  "5",\n   pages =  "2654 - 2665",\n   month =  may,\n   doi="10.1109/TGRS.2013.2264392", \n   url_link  =  "http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=06531654",\n   url_paper  =  "http://www.honeine.fr/paul/publi/14.tgrs.nonlinear.pdf",\n   url_code  =  "http://www.honeine.fr/paul/publi/14.tgrs.nonlinear.zip",\n   keywords  =  "hyperspectral, sparsity, correlation methods, geophysical image processing, hyperspectral imaging, material abundances nonlinear estimation, hyperspectral images, l1-norm spatial regularization, hyperspectral unmixing, fractional abundances estimation, spatial-spectral duality, linear mixing model, spatial correlation, Hyperspectral imaging, $\\ell_{1}$-norm regularization, nonlinear spectral unmixing, spatial regularization, Hyperspectral imaging, $\\ell_{1}$-norm regularization, nonlinear spectral unmixing, spatial regularization",\n   abstract={Integrating spatial information into hyperspectral unmixing procedures has been shown to have a positive effect on the estimation of fractional abundances due to the inherent spatial-spectral duality in hyperspectral scenes. However, current research works that take spatial information into account are mainly focused on the linear mixing model. In this paper, we investigate how to incorporate spatial correlation into a nonlinear abundance estimation process. A nonlinear unmixing algorithm operating in reproducing kernel Hilbert spaces, coupled with a l1-type spatial regularization, is derived. Experiment results, with both synthetic and real hyperspectral images, illustrate the effectiveness of the proposed scheme.}, \n}\n
\n
\n\n\n
\n Integrating spatial information into hyperspectral unmixing procedures has been shown to have a positive effect on the estimation of fractional abundances due to the inherent spatial-spectral duality in hyperspectral scenes. However, current research works that take spatial information into account are mainly focused on the linear mixing model. In this paper, we investigate how to incorporate spatial correlation into a nonlinear abundance estimation process. A nonlinear unmixing algorithm operating in reproducing kernel Hilbert spaces, coupled with an l1-type spatial regularization, is derived. Experimental results, with both synthetic and real hyperspectral images, illustrate the effectiveness of the proposed scheme.\n
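The l1-type regularizer in the abstract is typically handled with a proximal (soft-thresholding) step. The snippet below is only a generic illustration of that machinery on a toy least-squares problem, not the paper's kernel-based unmixing algorithm; the "endmember" dictionary E and the penalty weight are made up:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1, the workhorse of l1 regularization."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(E, y, lam=0.05, n_iter=200):
    """Minimize 0.5 ||y - E a||^2 + lam ||a||_1 by proximal gradient (ISTA)."""
    step = 1.0 / np.linalg.norm(E, 2) ** 2          # 1 / Lipschitz constant
    a = np.zeros(E.shape[1])
    for _ in range(n_iter):
        a = soft_threshold(a + step * E.T @ (y - E @ a), lam * step)
    return a

# Toy dictionary (identity) and a "pixel" mixing only two atoms
E = np.eye(4)
y = np.array([0.7, 0.0, 0.3, 0.0])
a = ista(E, y, lam=0.05)
print(a)   # sparse result: the two active atoms survive, shrunk by lam
```

In the paper the penalty couples the abundances of neighbouring pixels rather than a single vector, but the soft-thresholding step is the same building block.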
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Combining a physical model with a nonlinear fluctuation for signal propagation modeling in WSNs.\n \n \n \n \n\n\n \n Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; and Snoussi, H.\n\n\n \n\n\n\n In Proc. 11th IEEE/ACS International Conference on Computer Systems and Applications, pages 413-419, Doha, Qatar, 10-13 November 2014. \n \n\n\n\n
\n
@INPROCEEDINGS{14.aiccsa.wsn,\n   author =  "Sandy Mahfouz and Farah Mourad-Chehade and Paul Honeine and Joumana Farah and Hichem Snoussi",\n   title =  "Combining a physical model with a nonlinear fluctuation for signal propagation modeling in {WSNs}",\n   booktitle =  "Proc. 11th IEEE/ACS International Conference on Computer Systems and Applications",\n   address =  "Doha, Qatar",\n   year  =  "2014",\n   month =  "10-13~" # nov,\n   acronym =  "AICCSA",\n   pages={413-419}, \n   url_link= "https://ieeexplore.ieee.org/document/7073228",\n   url_paper  =  "http://honeine.fr/paul/publi/14.wsn.combine.pdf",\n   abstract={In this paper, we propose a semiparametric regression model that relates the received signal strength indicators (RSSIs) to the distances separating stationary sensors and moving sensors in a wireless sensor network. This model combines the well-known log-distance theoretical propagation model with a nonlinear fluctuation term, estimated within the framework of kernel-based machines. This leads to a more robust propagation model. A fully comprehensive study of the choices of parameters is provided, and a comparison to state-of-the-art models using real and simulated data is given as well.}, \n   keywords={distance measurement, radiowave propagation, regression analysis, RSSI, wireless sensor networks, semiparametric regression model, received signal strength indicators, RSSI, stationary sensors, moving sensors, wireless sensor network, log-distance theoretical propagation model, nonlinear fluctuation term, kernel-based machines, WSN, Sensors, Mathematical model, Data models, Training, Kernel, Polynomials, Computational modeling, Distance estimation, kernel functions, multikernel learning, RSSI, semiparametric regression}, \n   doi={10.1109/AICCSA.2014.7073228}, \n   ISSN={2161-5330}, \n}\n
\n
\n\n\n
\n In this paper, we propose a semiparametric regression model that relates the received signal strength indicators (RSSIs) to the distances separating stationary sensors and moving sensors in a wireless sensor network. This model combines the well-known log-distance theoretical propagation model with a nonlinear fluctuation term, estimated within the framework of kernel-based machines. This leads to a more robust propagation model. A fully comprehensive study of the choices of parameters is provided, and a comparison to state-of-the-art models using real and simulated data is given as well.\n
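For reference, the physical part of the model, the log-distance propagation law RSSI(d) = P0 - 10 n log10(d/d0), can be fitted by ordinary least squares. The paper's contribution is the additional nonlinear fluctuation term learned with kernel machines, which this sketch deliberately omits; the synthetic data and parameter values below are assumptions:

```python
import numpy as np

def fit_log_distance(d, rssi, d0=1.0):
    """Least-squares fit of RSSI(d) = P0 - 10 n log10(d / d0)."""
    X = np.column_stack([np.ones_like(d), -10.0 * np.log10(d / d0)])
    (P0, n), *_ = np.linalg.lstsq(X, rssi, rcond=None)
    return P0, n

# Synthetic measurements: true P0 = -40 dBm, path-loss exponent n = 2.5
rng = np.random.default_rng(1)
d = rng.uniform(1.0, 50.0, 300)
rssi = -40.0 - 10.0 * 2.5 * np.log10(d) + rng.normal(0.0, 0.5, d.size)
P0, n = fit_log_distance(d, rssi)
print(P0, n)
```

The residuals of this fit are what the paper's nonparametric fluctuation term is trained to absorb.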
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The Role of One-Class Classification in Detecting Cyberattacks in Critical Infrastructures.\n \n \n \n \n\n\n \n Nader, P.; Honeine, P.; and Beauseroy, P.\n\n\n \n\n\n\n In Panayiotou, C. G.; Ellinas, G.; Kyriakides, E.; and Polycarpou, M. M., editor(s), Proc. 9th International Conference on Critical Information Infrastructures Security, Limassol, Cyprus, 13 - 15 October 2014. Springer International Publishing\n \n\n\n\n
\n
@INPROCEEDINGS{14.critis.oneclass,\n   author =  "Patric Nader and Paul Honeine and Pierre Beauseroy",\n   title =  "The Role of One-Class Classification in Detecting Cyberattacks in Critical Infrastructures",\n   booktitle =  "Proc. 9th International Conference on Critical Information Infrastructures Security",\n   editor="Panayiotou, Christos G. and Ellinas, Georgios and Kyriakides, Elias and Polycarpou, Marios M.",\n   address =  "Limassol, Cyprus",\n   publisher="Springer International Publishing",\n   year  =  "2014",\n   month =  "13 - 15~" # oct,\n   keywords  =  "machine learning, one-class, cybersecurity",\n   acronym =  "CRITIS",\n   url_paper  =  "http://honeine.fr/paul/publi/14.critis.oneclass.pdf",\n   abstract="The security of critical infrastructures has gained a lot of attention in the past few years with the growth of cyberthreats and the diversity of cyberattacks. Although traditional IDS update frequently their databases of known attacks, new complex attacks are generated everyday to circumvent security systems and to make their detection nearly impossible. This paper outlines the importance of one-class classification algorithms in detecting malicious cyberattacks in critical infrastructures. The role of machine learning algorithms is complementary to IDS and firewalls, and the objective of this work is to detect intentional intrusions once they have already bypassed these security systems. Two approaches are investigated, Support Vector Data Description and Kernel Principal Component Analysis. The impact of the metric in kernels is investigated, and a heuristic for choosing the bandwidth parameter is proposed. Tests are conducted on real data with several types of cyberattacks.",\n}\n
\n
\n\n\n
\n The security of critical infrastructures has gained a lot of attention in the past few years with the growth of cyberthreats and the diversity of cyberattacks. Although traditional IDS frequently update their databases of known attacks, new complex attacks are generated every day to circumvent security systems and to make their detection nearly impossible. This paper outlines the importance of one-class classification algorithms in detecting malicious cyberattacks in critical infrastructures. The role of machine learning algorithms is complementary to IDS and firewalls, and the objective of this work is to detect intentional intrusions once they have already bypassed these security systems. Two approaches are investigated, Support Vector Data Description and Kernel Principal Component Analysis. The impact of the metric in kernels is investigated, and a heuristic for choosing the bandwidth parameter is proposed. Tests are conducted on real data with several types of cyberattacks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Kernel non-negative matrix factorization without the pre-image problem.\n \n \n \n \n\n\n \n Zhu, F.; Honeine, P.; and Kallas, M.\n\n\n \n\n\n\n In Proc. 24th IEEE workshop on Machine Learning for Signal Processing (MLSP), pages 1 - 6, Reims, France, 21 - 24 September 2014. \n \n\n\n\n
\n
@INPROCEEDINGS{14.mlsp.nmf,\n   author =  "Fei Zhu and Paul Honeine and Maya Kallas",\n   title =  "Kernel non-negative matrix factorization without the pre-image problem",\n   booktitle =  "Proc. 24th IEEE workshop on Machine Learning for Signal Processing (MLSP)",\n   address =  "Reims, France",\n   year  =  "2014",\n   month =  "21 - 24~" # sep,\n   pages = "1 - 6",\n   doi={10.1109/MLSP.2014.6958910}, \n   ISSN={1551-2541}, \n   acronym =  "MLSP",\n   url_link= "https://ieeexplore.ieee.org/document/6958910",\n   url_paper   =  "http://honeine.fr/paul/publi/14.mlsp.nmf.pdf",\n   url_code  =  "http://www.honeine.fr/paul/publi/16.kernelNMF.rar",\n   abstract={The nonnegative matrix factorization (NMF) is widely used in signal and image processing, including bio-informatics, blind source separation and hyperspectral image analysis in remote sensing. A great challenge arises when dealing with nonlinear NMF. In this paper, we propose an efficient nonlinear NMF, which is based on kernel machines. As opposed to previous work, the proposed method does not suffer from the pre-image problem. We propose two iterative algorithms: an additive and a multiplicative update rule. Several extensions of the kernel-NMF are developed in order to take into account auxiliary structural constraints, such as smoothness, sparseness and spatial regularization. 
The relevance of the presented techniques is demonstrated in unmixing a synthetic hyperspectral image.}, \n   keywords={hyperspectral imaging, image processing, iterative methods, matrix decomposition, kernel nonnegative matrix factorization, kernel-NMF, signal processing, image processing, bioinformatics, blind source separation, hyperspectral image analysis, remote sensing, nonlinear NMF, kernel machines, iterative algorithms, additive update rule, multiplicative update rule, auxiliary structural constraints, synthetic hyperspectral image unmixing, Kernel, Additives, Hyperspectral imaging, Vectors, Polynomials, Estimation, Kernel machines, nonnegative matrix factorization, reproducing kernel Hilbert space, pre-image problem, unmixing problem, hyperspectral data}, \n   ISSN={1551-2541}, \n}\n
\n
\n\n\n
\n The nonnegative matrix factorization (NMF) is widely used in signal and image processing, including bio-informatics, blind source separation and hyperspectral image analysis in remote sensing. A great challenge arises when dealing with nonlinear NMF. In this paper, we propose an efficient nonlinear NMF, which is based on kernel machines. As opposed to previous work, the proposed method does not suffer from the pre-image problem. We propose two iterative algorithms: an additive and a multiplicative update rule. Several extensions of the kernel-NMF are developed in order to take into account auxiliary structural constraints, such as smoothness, sparseness and spatial regularization. The relevance of the presented techniques is demonstrated in unmixing a synthetic hyperspectral image.\n
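For context, the linear NMF that the paper's kernel variant extends is commonly solved with Lee-Seung multiplicative updates, which preserve nonnegativity by construction. The sketch below implements that classical baseline, not the paper's kernel-NMF; the matrix sizes, seeds, and iteration count are arbitrary:

```python
import numpy as np

def nmf(V, r, n_iter=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ~ W H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 0.1
    H = rng.random((r, V.shape[1])) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # ratio update keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # ratio update keeps W >= 0
    return W, H

# Exactly factorizable nonnegative data of rank 5
rng = np.random.default_rng(2)
V = rng.random((20, 5)) @ rng.random((5, 30))
W, H = nmf(V, r=5)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(rel_err)
```

The paper's additive and multiplicative rules play the same role, but operate on factors living in a reproducing kernel Hilbert space, which is why no pre-image computation is needed there.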
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Mahalanobis-Based One-Class Classification.\n \n \n \n \n\n\n \n Nader, P.; Honeine, P.; and Beauseroy, P.\n\n\n \n\n\n\n In Proc. 24th IEEE workshop on Machine Learning for Signal Processing (MLSP), pages 1 - 6, Reims, France, 21 - 24 September 2014. \n \n\n\n\n
\n
@INPROCEEDINGS{14.mlsp.oneclass,\n   author =  "Patric Nader and Paul Honeine and Pierre Beauseroy",\n   title =  "Mahalanobis-Based One-Class Classification",\n   booktitle =  "Proc. 24th IEEE workshop on Machine Learning for Signal Processing (MLSP)",\n   address =  "Reims, France",\n   year  =  "2014",\n   month =  "21 - 24~" # sep,\n   pages = "1 - 6",\n   acronym =  "MLSP",\n   url_link= "https://ieeexplore.ieee.org/document/6958934",\n   url_paper   =  "http://honeine.fr/paul/publi/14.mlsp.oneclass.pdf",\n   abstract={Machine learning techniques have become very popular in the past decade for detecting nonlinear relations in large volumes of data. In particular, one-class classification algorithms have gained the interest of the researchers when the available samples in the training set refer to a unique/single class. In this paper, we propose a simple one-class classification approach based on the Mahalanobis distance. We make use of the advantages of kernel whitening and KPCA in order to compute the Mahalanobis distance in the feature space, by projecting the data into the subspace spanned by the most relevant eigenvectors of the covariance matrix. We also propose a sparse formulation of this approach. The tests are conducted on simulated data as well as on real data.}, \n   keywords={machine learning, one-class, cybersecurity, covariance matrices, eigenvalues and eigenfunctions, learning (artificial intelligence), pattern classification, Mahalanobis-based one-class classification, machine learning techniques, nonlinear relation detection, Mahalanobis distance, kernel whitening, KPCA, feature space, covariance matrix eigenvectors, Kernel, Training, Covariance matrices, Support vector machines, Pipelines, Matrix decomposition, Eigenvalues and eigenfunctions, Kernel methods, one-class classification, Mahalanobis distance}, \n   doi={10.1109/MLSP.2014.6958934}, \n   ISSN={1551-2541}, \n}\n
\n
\n\n\n
\n Machine learning techniques have become very popular in the past decade for detecting nonlinear relations in large volumes of data. In particular, one-class classification algorithms have gained the interest of researchers when the available samples in the training set refer to a unique/single class. In this paper, we propose a simple one-class classification approach based on the Mahalanobis distance. We make use of the advantages of kernel whitening and KPCA in order to compute the Mahalanobis distance in the feature space, by projecting the data into the subspace spanned by the most relevant eigenvectors of the covariance matrix. We also propose a sparse formulation of this approach. The tests are conducted on simulated data as well as on real data.\n
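The input-space version of the idea is easy to state: score each test point by its Mahalanobis distance to the training distribution. The paper computes this distance in the kernel feature space via kernel whitening and KPCA; the sketch below shows only the plain input-space computation on invented synthetic data:

```python
import numpy as np

def mahalanobis_scores(X_train, X_test):
    """Squared Mahalanobis distance of test points to the training distribution."""
    mu = X_train.mean(axis=0)
    P = np.linalg.inv(np.cov(X_train, rowvar=False))  # inverse sample covariance
    diff = X_test - mu
    return np.einsum('ij,jk,ik->i', diff, P, diff)

# Anisotropic training cloud: large variance along x, tiny along y
rng = np.random.default_rng(3)
X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.0], [0.5, 0.2]])
scores = mahalanobis_scores(X, np.array([[0.0, 0.0],    # typical point
                                         [0.0, 5.0]]))  # lies along the rare direction
print(scores)
```

The point at (0, 5) is close to the training cloud in Euclidean distance but far in Mahalanobis distance, which is precisely why this metric is attractive for one-class classification.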
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Diffusion Strategies For In-Network Principal Component Analysis.\n \n \n \n \n\n\n \n Ghadban, N.; Honeine, P.; Mourad-Chehade, F.; Francis, C.; and Farah, J.\n\n\n \n\n\n\n In Proc. 24th IEEE workshop on Machine Learning for Signal Processing (MLSP), pages 1 - 6, Reims, France, 21 - 24 September 2014. \n \n\n\n\n
\n
@INPROCEEDINGS{14.mlsp.pca,\n   author =  "Nisrine Ghadban and Paul Honeine and Farah Mourad-Chehade and Clovis Francis and Joumana Farah",\n   title =  "Diffusion Strategies For In-Network Principal Component Analysis",\n   booktitle =  "Proc. 24th IEEE workshop on Machine Learning for Signal Processing (MLSP)",\n   address =  "Reims, France",\n   year  =  "2014",\n   month =  "21 - 24~" # sep,\n   pages = "1 - 6",\n   acronym =  "MLSP",\n   url_link= "https://ieeexplore.ieee.org/document/6958849",\n   url_paper   =  "http://honeine.fr/paul/publi/14.mlsp.pca.pdf",\nabstract={This paper deals with the principal component analysis in networks, where it is improper to compute the sample covariance matrix. To this end, we derive several in-network strategies to estimate the principal axes, including noncooperative and cooperative (diffusion-based) strategies. The performance of the proposed strategies is illustrated on diverse applications, including image processing and dimensionality reduction of time series in wireless sensor networks.}, \n   keywords={covariance matrices, principal component analysis, unsupervised learning, in-network principal component analysis, covariance matrix, cooperative diffusion-based strategy, Principal component analysis, Covariance matrices, Cost function, Wireless sensor networks, Time series analysis, Eigenvalues and eigenfunctions, Convergence, Principal component analysis, network, adaptive learning, distributed processing}, \n   doi={10.1109/MLSP.2014.6958849}, \n   ISSN={1551-2541}, \n}\n
\n
\n\n\n
\n This paper deals with principal component analysis in networks, where it is improper to compute the sample covariance matrix. To this end, we derive several in-network strategies to estimate the principal axes, including noncooperative and cooperative (diffusion-based) strategies. The performance of the proposed strategies is illustrated on diverse applications, including image processing and dimensionality reduction of time series in wireless sensor networks.\n
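The noncooperative building block of such strategies is an online estimate of the principal axis that never forms the sample covariance matrix, e.g. Oja's rule. The following is a generic illustration of that rule, not the paper's diffusion algorithm; the step size, epoch count, and data are assumptions:

```python
import numpy as np

def oja_principal_axis(X, eta=0.01, n_epochs=20, seed=0):
    """Estimate the first principal axis with Oja's online rule,
    without ever forming the sample covariance matrix."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            y = w @ x                    # projection onto current estimate
            w += eta * y * (x - y * w)   # Hebbian term + self-normalization
    return w / np.linalg.norm(w)

rng = np.random.default_rng(4)
X = rng.normal(size=(400, 2)) * np.array([3.0, 0.3])  # dominant axis: x-axis
w = oja_principal_axis(X)
print(w)
```

The diffusion strategies of the paper add a cooperation step in which each node combines its local update with its neighbours' estimates.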
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Strategies for principal component analysis in wireless sensor networks.\n \n \n \n \n\n\n \n Ghadban, N.; Honeine, P.; Francis, C.; Mourad-Chehade, F.; and Farah, J.\n\n\n \n\n\n\n In Proc. eighth IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), pages 233-236, A Coruna, Spain, 22 - 25 June 2014. \n \n\n\n\n
\n
@INPROCEEDINGS{14.sam.pca,\n   author =  "Nisrine Ghadban and Paul Honeine and Clovis Francis and Farah Mourad-Chehade and Joumana Farah",\n   title =  "Strategies for principal component analysis in wireless sensor networks",\n   booktitle =  "Proc. eighth IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM)",\n   address =  "A Coruna, Spain",\n   year  =  "2014",\n   month =  "22 - 25~" # jun,\n   keywords  =  "machine learning, wireless sensor networks",\n   acronym =  "SAM",\n   pages={233-236}, \n   url_link= "https://ieeexplore.ieee.org/document/6882383",\n   url_paper   =  "http://honeine.fr/paul/publi/14.sam.pca.pdf",\n   abstract={This paper deals with the issue of monitoring physical phenomena using wireless sensor networks. It provides principal component analysis for the time series of sensors' measurements. Without the need to compute the sample covariance matrix, we derive several in-network strategies to estimate the principal axis, including noncooperative and diffusion strategies. The performance of the proposed strategies is illustrated in the issue of monitoring gas diffusion.}, \n   keywords={covariance matrices, principal component analysis, time series, wireless sensor networks, principal component analysis, wireless sensor networks, time series, covariance matrix, diffusion strategy, noncooperative strategy, gas diffusion monitoring, Wireless sensor networks, Principal component analysis, Covariance matrices, Temperature measurement, Convergence, Time series analysis, Pollution measurement, Principal component analysis, wireless sensor network, adaptive learning, distributed processing}, \n   doi={10.1109/SAM.2014.6882383}, \n   ISSN={2151-870X}, \n}\n
\n
\n\n\n
\n This paper deals with the issue of monitoring physical phenomena using wireless sensor networks. It provides principal component analysis for the time series of sensors' measurements. Without the need to compute the sample covariance matrix, we derive several in-network strategies to estimate the principal axis, including noncooperative and diffusion strategies. The performance of the proposed strategies is illustrated on the problem of monitoring gas diffusion.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Ridge regression and Kalman filtering for target tracking in wireless sensor networks.\n \n \n \n \n\n\n \n Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; and Snoussi, H.\n\n\n \n\n\n\n In Proc. eighth IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), pages 237-240, A Coruna, Spain, 22 - 25 June 2014. \n \n\n\n\n
\n
@INPROCEEDINGS{14.sam.kalman,\n   author =  "Sandy Mahfouz and Farah Mourad-Chehade and Paul Honeine and Joumana Farah and Hichem Snoussi",\n   title =  "Ridge regression and {Kalman} filtering for target tracking in wireless sensor networks",\n   booktitle =  "Proc. eighth IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM)",\n   address =  "A Coruna, Spain",\n   year  =  "2014",\n   month =  "22 - 25~" # jun,\n   acronym =  "SAM",\n   pages = {237-240}, \n   url_link= "https://ieeexplore.ieee.org/document/6882384",\n   url_paper   =  "http://honeine.fr/paul/publi/14.sam.kalman.pdf",\n   abstract={This paper introduces an original method for target tracking in wireless sensor networks that combines machine learning and Kalman filtering. A database of radio-fingerprints is used, along with the ridge regression learning method, to compute a model that takes as input RSSI information, and yields, as output, the positions where the RSSIs are measured. This model leads to a position estimate for each target. The Kalman filter is used afterwards to combine the model's estimates with predictions of the target's positions based on acceleration information, leading to more accurate ones.}, \n   keywords={filtering theory, Kalman filters, learning (artificial intelligence), regression analysis, target tracking, telecommunication computing, wireless sensor networks, Kalman filtering, target tracking, wireless sensor networks, machine learning, radio-fingerprints database, ridge regression learning method, input RSSI information, acceleration information, position estimation, Kalman filters, Target tracking, Wireless sensor networks, Acceleration, Vectors, Noise, Computational modeling, radio-fingerprinting, Kalman filter, ridge regression, RSSI, tracking, WSN}, \n   doi={10.1109/SAM.2014.6882384}, \n   ISSN={2151-870X}, \n}\n
\n
\n\n\n
\n This paper introduces an original method for target tracking in wireless sensor networks that combines machine learning and Kalman filtering. A database of radio-fingerprints is used, along with the ridge regression learning method, to compute a model that takes as input RSSI information and yields, as output, the positions where the RSSIs are measured. This model leads to a position estimate for each target. The Kalman filter is used afterwards to combine the model's estimates with predictions of the target's positions based on acceleration information, leading to more accurate position estimates.\n
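The regression half of the method can be illustrated with kernel ridge regression from RSSI fingerprints to positions. The paper's exact learning setup and the Kalman fusion step are omitted here; the anchors, the toy propagation model, and all parameters below are invented for the example:

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """Pairwise Gaussian kernel between RSSI fingerprint vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Hypothetical fingerprint database: noiseless RSSI from 3 anchors
rng = np.random.default_rng(5)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
pos = rng.uniform(0.5, 9.5, size=(200, 2))                  # known positions
dist = np.linalg.norm(pos[:, None, :] - anchors[None, :, :], axis=2)
rssi = -40.0 - 20.0 * np.log10(dist)                        # toy propagation law

# Kernel ridge regression: alpha = (K + lam I)^-1 * positions
K = gaussian_kernel(rssi, rssi, sigma=3.0)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(K)), pos)
pred = K @ alpha                    # re-estimate the fingerprint positions
err = np.linalg.norm(pred - pos, axis=1).mean()
print(err)
```

In the full method, such regression estimates become the observations of a Kalman filter whose state prediction uses the target's acceleration.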
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectral partitioning and fusion techniques for hyperspectral data classification and unmixing.\n \n \n \n \n\n\n \n Ammanouil, R.; Melhem, J. A.; Farah, J.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 6th International Symposium on Communications, Control, and Signal Processing (ISCCSP), pages 550-553, Athens, Greece, 21 - 23 May 2014. \n \n\n\n\n
\n
@INPROCEEDINGS{14.isccsp.hype,\n   author =  "Rita Ammanouil and Jean Abou Melhem and Joumana Farah and Paul Honeine",\n   title =  "Spectral partitioning and fusion techniques for hyperspectral data classification and unmixing",\n   booktitle =  "Proc. 6th International Symposium on Communications, Control, and Signal Processing (ISCCSP)",\n   address =  "Athens, Greece",\n   year  =  "2014",\n   month =  "21 - 23~" # may,\n   pages={550-553}, \n   acronym =  "ISCCSP",\n   url_link= "https://ieeexplore.ieee.org/document/6877934",\n   url_paper   =  "http://honeine.fr/paul/publi/14.isccsp.hype.pdf",\n   abstract={Hyperspectral images are characterized by their large contiguous set of wavelengths. Therefore, it is possible to benefit from this `hyper' spectral information in order to reduce the classification and unmixing errors. For this reason, we propose new classification and unmixing techniques that take into account the correlation between successive spectral bands, by dividing the spectrum into non-overlapping subsets of correlated bands. Afterwards, classification and unmixing are performed on each subset separately, such as to yield several labels per pixel in the classification case, or abundances in the unmixing case. Then, several fusion techniques are proposed to obtain the final decision. 
Results show that spectral partitioning and appropriate fusion allow a significant gain in performance compared to previous classification and unmixing techniques.}, \n   keywords={correlation methods, geophysical image processing, hyperspectral imaging, image classification, image fusion, spectral partitioning technique, spectral fusion technique, hyperspectral data classification technique, hyperspectral data unmixing technique, hyperspectral imaging, hyperspectral information, successive spectral band correlation, nonoverlapping subset spectrum, Correlation, Hyperspectral imaging, Classification algorithms, Educational institutions, Measurement, Hyperspectral imaging, spectral preprocessing, classification, unmixing, fusion}, \n   doi={10.1109/ISCCSP.2014.6877934}, \n}\n
\n
\n\n\n
\n Hyperspectral images are characterized by their large contiguous set of wavelengths. Therefore, it is possible to benefit from this `hyper' spectral information in order to reduce the classification and unmixing errors. For this reason, we propose new classification and unmixing techniques that take into account the correlation between successive spectral bands, by dividing the spectrum into non-overlapping subsets of correlated bands. Afterwards, classification and unmixing are performed on each subset separately, so as to yield several labels per pixel in the classification case, or abundances in the unmixing case. Then, several fusion techniques are proposed to obtain the final decision. Results show that spectral partitioning and appropriate fusion allow a significant gain in performance compared to previous classification and unmixing techniques.\n
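A minimal way to realize the partitioning step is to cut the spectrum wherever the correlation between adjacent bands drops below a threshold. This is an assumed heuristic for illustration, not necessarily the paper's partitioning rule; the synthetic band groups are made up:

```python
import numpy as np

def partition_bands(X, thresh=0.9):
    """Split band indices into contiguous groups, cutting wherever the
    correlation between adjacent bands falls below `thresh`."""
    corr = [np.corrcoef(X[:, b], X[:, b + 1])[0, 1] for b in range(X.shape[1] - 1)]
    cuts = [b + 1 for b, c in enumerate(corr) if c < thresh]
    return np.split(np.arange(X.shape[1]), cuts)

# Synthetic pixels with two internally correlated groups of 4 bands each
rng = np.random.default_rng(6)
g1 = rng.normal(size=(100, 1)) + 0.05 * rng.normal(size=(100, 4))
g2 = rng.normal(size=(100, 1)) + 0.05 * rng.normal(size=(100, 4))
X = np.hstack([g1, g2])
groups = partition_bands(X)
print([g.tolist() for g in groups])
```

Each recovered group would then be classified or unmixed separately, and the per-group decisions merged by one of the fusion rules the paper compares.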
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Mobility using first and second derivatives for kernel-based regression in wireless sensor networks.\n \n \n \n \n\n\n \n Ghadban, N.; Honeine, P.; Mourad-Chehade, F.; Francis, C.; and Farah, J.\n\n\n \n\n\n\n In Proc. 21st International Conference on Systems, Signals and Image Processing, pages 203-206, Dubrovnik, Croatia, 12 - 15 May 2014. \n \n\n\n\n
\n
@INPROCEEDINGS{14.iwssip.wsn,\n   author =  "Nisrine Ghadban and Paul Honeine and Farah Mourad-Chehade and Clovis Francis and  Joumana Farah",\n   title =  "Mobility using first and second derivatives for kernel-based regression in wireless sensor networks",\n   booktitle =  "Proc. 21st International Conference on Systems, Signals and Image Processing",\n   address =  "Dubrovnik, Croatia",\n   year  =  "2014",\n   month =  "12 - 15~" # may,\n   acronym =  "IWSSIP",\n   pages={203-206}, \n   url_link= "https://ieeexplore.ieee.org/document/6837666",\n   url_paper   =  "http://honeine.fr/paul/publi/14.iwssip.wsn.pdf",\n   abstract={This paper deals with the problem of tracking and monitoring physical phenomena using wireless sensor networks. It proposes an original mobility scheme that aims at improving the tracking process. To this end, a model is defined using kernel-based methods and a learning process. The sensors are given the ability to move in a manner that minimizes the approximation error, and thus improves the efficiency of the model. First and second derivatives of the approximation error are used to define the new positions of the nodes. The performance of the proposed method is illustrated in the context of monitoring gas diffusion with wireless sensor networks.}, \n   keywords={regression analysis, wireless sensor networks, kernel-based regression, wireless sensor networks, original mobility scheme, learning process, approximation error, gas diffusion monitoring, second derivatives, first derivatives, Robot sensing systems, Mathematical model, Monitoring, Power measurement, Vectors}, \n   ISSN={2157-8672}, \n}\n
\n
\n\n\n
\n This paper deals with the problem of tracking and monitoring physical phenomena using wireless sensor networks. It proposes an original mobility scheme that aims at improving the tracking process. To this end, a model is defined using kernel-based methods and a learning process. The sensors are given the ability to move in a manner that minimizes the approximation error, and thus improves the efficiency of the model. First and second derivatives of the approximation error are used to define the new positions of the nodes. The performance of the proposed method is illustrated in the context of monitoring gas diffusion with wireless sensor networks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Démélange non-linéaire d'images hyperspectrales : mythe ou réalité ?.\n \n \n \n\n\n \n Chen, J.; Dobigeon, N.; Halimi, A.; Honeine, P.; Richard, C.; and Tourneret, J.\n\n\n \n\n\n\n In 3-ème colloque scientifique de la SFPT-GH, Porquerolles, France, 15 - 16 May 2014. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{14.hype.colloque,\n   author =  "Jie Chen and Nicolas Dobigeon and Abderrahim Halimi and Paul Honeine and Cédric Richard and Jean-Yves Tourneret",\n   title =  "Démélange non-linéaire d'images hyperspectrales : mythe ou réalité ?",\n   booktitle =  "3-ème colloque scientifique de la SFPT-GH",\n   address =  "Porquerolles, France",\n   year =  "2014",\n   month =  "15 - 16~" # may,\n   acronym =  "Colloque",\n   keywords =  "hyperspectral, machine learning, Bayesian inference",\n}\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Kernel nonnegative matrix factorization without the curse of the pre-image.\n \n \n \n \n\n\n \n Zhu, F.; Honeine, P.; and Kallas, M.\n\n\n \n\n\n\n Technical Report ArXiv, July 2014.\n \n\n\n\n
\n\n\n\n \n \n \"Kernel paper\n  \n \n \n \"Kernel code\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@TECHREPORT{14.knmf,\n   author =  "Fei Zhu and Paul Honeine and Maya Kallas",\n   title =  "Kernel nonnegative matrix factorization without the curse of the pre-image",\n   institution =  "ArXiv",\n   year  =  "2014",\n   volume =  "1",\n   pages =  "1-14",\n   month =  jul,\n   url_paper   =  "http://honeine.fr/paul/publi/14.kernel_NMF.pdf",\n   url_code  =  "http://www.honeine.fr/paul/publi/16.kernelNMF.rar",\n   abstract = "The nonnegative matrix factorization (NMF) is widely used in signal and image processing, including bio-informatics, blind source separation and hyperspectral image analysis in remote sensing. A great challenge arises when dealing with a nonlinear formulation of the NMF. Within the framework of kernel machines, the models suggested in the literature do not allow the representation of the factorization matrices, which is a fallout of the curse of the pre-image. In this paper, we propose a novel kernel-based model for the NMF that does not suffer from the pre-image problem, by investigating the estimation of the factorization matrices directly in the input space. For different kernel functions, we describe two schemes for iterative algorithms: an additive update rule based on a gradient descent scheme and a multiplicative update rule in the same spirit as in the Lee and Seung algorithm. Within the proposed framework, we develop several extensions to incorporate constraints, including sparseness, smoothness, and spatial regularization with a total-variation-like penalty. The effectiveness of the proposed method is demonstrated with the problem of unmixing hyperspectral images, using well-known real images and comparing results with state-of-the-art techniques.",\n   keywords =  "machine learning, hyperspectral, kernel machines, nonnegative matrix factorization, reproducing kernel Hilbert space, pre-image problem, hyperspectral image, unmixing problem",\n}\n\n
\n
\n\n\n
\n The nonnegative matrix factorization (NMF) is widely used in signal and image processing, including bio-informatics, blind source separation and hyperspectral image analysis in remote sensing. A great challenge arises when dealing with a nonlinear formulation of the NMF. Within the framework of kernel machines, the models suggested in the literature do not allow the representation of the factorization matrices, which is a fallout of the curse of the pre-image. In this paper, we propose a novel kernel-based model for the NMF that does not suffer from the pre-image problem, by investigating the estimation of the factorization matrices directly in the input space. For different kernel functions, we describe two schemes for iterative algorithms: an additive update rule based on a gradient descent scheme and a multiplicative update rule in the same spirit as in the Lee and Seung algorithm. Within the proposed framework, we develop several extensions to incorporate constraints, including sparseness, smoothness, and spatial regularization with a total-variation-like penalty. The effectiveness of the proposed method is demonstrated with the problem of unmixing hyperspectral images, using well-known real images and comparing results with state-of-the-art techniques.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Entropy of Overcomplete Kernel Dictionaries.\n \n \n \n \n\n\n \n Honeine, P.\n\n\n \n\n\n\n Technical Report arXiv:1411.0161, ArXiv, November 2014.\n \n\n\n\n
\n\n\n\n \n \n \"Entropy paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@TECHREPORT{15.sparse.entropy,\n   author =  "Paul Honeine",\n   title =  "Entropy of Overcomplete Kernel Dictionaries",\n   institution =  "ArXiv",\n   year  =  "2014",\n   volume =  "",\n   number =  "arXiv:1411.0161",\n   pages =  "1 - 10",\n   month =  nov,\n   url_paper   =  "http://arxiv.org/abs/1411.0161",\n   ABSTRACT = "In signal analysis and synthesis, linear approximation theory considers a linear decomposition of any given signal in a set of atoms, collected into a so-called dictionary. Relevant sparse representations are obtained by relaxing the orthogonality condition of the atoms, yielding overcomplete dictionaries with an extended number of atoms. More generally than the linear decomposition, overcomplete kernel dictionaries provide an elegant nonlinear extension by defining the atoms through a mapping kernel function (e.g., the Gaussian kernel). Models based on such kernel dictionaries are used in neural networks, Gaussian processes and online learning with kernels. \nThe quality of an overcomplete dictionary is evaluated with a diversity measure: the distance, the approximation, the coherence and the Babel measures. In this paper, we develop a framework to examine overcomplete kernel dictionaries with the entropy from information theory. Indeed, a higher value of the entropy is associated with a more uniform spread of the atoms over the space. For each of the aforementioned diversity measures, we derive lower bounds on the entropy. Several definitions of the entropy are examined, with an extensive analysis in both the input space and the mapped feature space.",\n   keywords  =  "machine learning, sparsity, adaptive filtering",\n}%"http://honeine.fr/paul/publi/15.sparse.entropy.pdf"\n\n
\n
\n\n\n
\n In signal analysis and synthesis, linear approximation theory considers a linear decomposition of any given signal in a set of atoms, collected into a so-called dictionary. Relevant sparse representations are obtained by relaxing the orthogonality condition of the atoms, yielding overcomplete dictionaries with an extended number of atoms. More generally than the linear decomposition, overcomplete kernel dictionaries provide an elegant nonlinear extension by defining the atoms through a mapping kernel function (e.g., the Gaussian kernel). Models based on such kernel dictionaries are used in neural networks, Gaussian processes and online learning with kernels. The quality of an overcomplete dictionary is evaluated with a diversity measure: the distance, the approximation, the coherence and the Babel measures. In this paper, we develop a framework to examine overcomplete kernel dictionaries with the entropy from information theory. Indeed, a higher value of the entropy is associated with a more uniform spread of the atoms over the space. For each of the aforementioned diversity measures, we derive lower bounds on the entropy. Several definitions of the entropy are examined, with an extensive analysis in both the input space and the mapped feature space.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2013\n \n \n (23)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Approche variationnelle à noyau pour le suivi de cibles dans un réseau de capteurs sans fil.\n \n \n \n \n\n\n \n Snoussi, H.; Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Giovannelli, J.; and Idier, J., editor(s), Méthodes d'inversion appliquées au traitement du signal et de l'image, pages 273 - 288. Hermes, December 2013.\n \n\n\n\n
\n\n\n\n \n \n \"Approche paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@incollection{13.Hermes,\n  author                   = {Hichem Snoussi and Paul Honeine and Cédric Richard},\n  title                    = {Approche variationnelle à noyau pour le suivi de cibles dans un réseau de capteurs sans fil},\n  booktitle                = {Méthodes d'inversion appliquées au traitement du signal et de l'image},\n  pages                    = {273 - 288},\n  month                    = dec,\n  year                     = {2013},\n  editor                   = {Jean-François Giovannelli and Jérôme Idier},\n  publisher                = {Hermes},\n  url_paper   =  "http://honeine.fr/paul/publi/13.Hermes.pdf",\n  keywords =  "machine learning, Bayesian inference, wireless sensor networks",\n  abstract = "Ce chapitre traite le problème de localisation décentralisée de cibles mobiles dans un réseau de capteurs sans fil, s'accommodant des contraintes d'énergie et de puissance limitées des capteurs embarqués. Nous décrivons une technique de localisation bénéficiant à la fois de la consistance de l'approche bayésienne et de la robustesse des méthodes à noyau. Cette technique repose sur un filtrage variationnel en ligne intégrant une phase d'apprentissage de la fonction de vraisemblance. Cette phase d'apprentissage rend cette méthode de localisation particulièrement robuste et flexible dans un environnement inconnu et non stationnaire.",\n}\n
\n
\n\n\n
\n This chapter addresses the problem of decentralized localization of mobile targets in a wireless sensor network, while accommodating the limited energy and power constraints of the embedded sensors. We describe a localization technique that benefits both from the consistency of the Bayesian approach and from the robustness of kernel methods. This technique relies on an online variational filter that incorporates a learning stage for the likelihood function. This learning stage makes the localization method particularly robust and flexible in an unknown and non-stationary environment.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Supervised nonlinear unmixing of hyperspectral images using a pre-image method.\n \n \n \n \n\n\n \n Nguyen, N. H.; Chen, J.; Richard, C.; Honeine, P.; and Theys, C.\n\n\n \n\n\n\n In New Concepts in Imaging: Optical and Statistical Models, In Eds. D. Mary, C. Theys, and C. Aime, volume 59, of EAS Publications Series, pages 417 - 437. EDP Sciences, 2013.\n \n\n\n\n
\n\n\n\n \n \n \"Supervised link\n  \n \n \n \"Supervised paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@incollection{13.hype.nguyen,\n Author = {Nguyen, Nguyen Hoang and Chen, Jie and Richard, Cédric and Honeine, Paul and Theys, Céline},\n Title = {Supervised nonlinear unmixing of hyperspectral images using a pre-image method},\n booktitle = "New Concepts in Imaging: Optical and Statistical Models, In Eds. D. Mary, C. Theys, and C. Aime",\n series = {EAS Publications Series},\n Pages = {417 - 437},\n Publisher = {EDP Sciences},\n Volume = {59},\n Year = {2013},\n Keywords = {hyperspectral, machine learning, pre-image problem, hyperspectral imaging, hyperspectral images, supervised spectral unmixing, fully constrained unmixing, remote sensing, unmixing problem},\n url_link= "http://dx.doi.org/10.1051/eas/1359019",\n url_paper   =  "http://honeine.fr/paul/publi/13.hype.nguyen",\n doi = "10.1051/eas/1359019",\n Abstract = {Spectral unmixing is an important issue in the analysis of remotely sensed hyperspectral data. This involves the decomposition of each mixed pixel into its pure endmember spectra, and the estimation of the abundance value for each endmember. Although linear mixture models are often considered because of their simplicity, there are many situations in which they can be advantageously replaced by nonlinear mixture models. In this chapter, we derive a supervised kernel-based unmixing method that relies on a pre-image problem-solving technique. The kernel selection problem is also briefly considered. We show that partially-linear kernels can serve as an appropriate solution, and the nonlinear part of the kernel can be advantageously designed with manifold-learning-based techniques. Finally, we incorporate spatial information into our method in order to improve unmixing performance.},\n}
\n
\n\n\n
\n Spectral unmixing is an important issue in the analysis of remotely sensed hyperspectral data. This involves the decomposition of each mixed pixel into its pure endmember spectra, and the estimation of the abundance value for each endmember. Although linear mixture models are often considered because of their simplicity, there are many situations in which they can be advantageously replaced by nonlinear mixture models. In this chapter, we derive a supervised kernel-based unmixing method that relies on a pre-image problem-solving technique. The kernel selection problem is also briefly considered. We show that partially-linear kernels can serve as an appropriate solution, and the nonlinear part of the kernel can be advantageously designed with manifold-learning-based techniques. Finally, we incorporate spatial information into our method in order to improve unmixing performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Online kernel adaptive algorithms with dictionary adaptation for MIMO models.\n \n \n \n \n\n\n \n Saidé, C.; Lengellé, R.; Honeine, P.; and Achkar, R.\n\n\n \n\n\n\n IEEE Signal Processing Letters, 20(5): 535 - 538. May 2013.\n \n\n\n\n
\n\n\n\n \n \n \"Online paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{13.spl.dictionary,\n   author =  "Chafic Saidé and Régis Lengellé and Paul Honeine and Roger Achkar",\n   title =  "Online kernel adaptive algorithms with dictionary adaptation for MIMO models",\n   journal =  "IEEE Signal Processing Letters",\n   year  =  "2013",\n   volume =  "20",\n   number =  "5",\n   pages =  "535 - 538",\n   month =  may,\n   doi="10.1109/LSP.2013.2254711", \n   url_paper  =  "http://www.honeine.fr/paul/publi/13.spl.dictionary_adaptation.pdf",\n   keywords  =  "machine learning, sparsity, adaptive filtering, MIMO communication, nonlinear systems, operating system kernels, online kernel adaptive algorithms, dictionary adaptation, MIMO models, nonlinear system identification, online identification, coherence criterion, stochastic gradient method, single output models, multiple inputs multiple outputs model, Dictionaries, Kernel, Adaptation models, MIMO, Coherence, Signal processing algorithms, Kernel methods, machine learning, nonlinear adaptive filters, nonlinear systems",\n   abstract={Nonlinear system identification has always been a challenging problem. The use of kernel methods to solve such problems has become increasingly prevalent. However, the complexity of these methods increases with time, which makes them unsuitable for online identification. This drawback can be solved with the introduction of the coherence criterion. Furthermore, dictionary adaptation using a stochastic gradient method proved its efficiency. Most existing approaches identify single-output models, which form only a particular case of real problems. In this letter, we investigate online kernel adaptive algorithms to identify multiple-input multiple-output (MIMO) models, as well as the possibility of dictionary adaptation for such models.},\n}\n
\n
\n\n\n
\n Nonlinear system identification has always been a challenging problem. The use of kernel methods to solve such problems has become increasingly prevalent. However, the complexity of these methods increases with time, which makes them unsuitable for online identification. This drawback can be solved with the introduction of the coherence criterion. Furthermore, dictionary adaptation using a stochastic gradient method proved its efficiency. Most existing approaches identify single-output models, which form only a particular case of real problems. In this letter, we investigate online kernel adaptive algorithms to identify multiple-input multiple-output (MIMO) models, as well as the possibility of dictionary adaptation for such models.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Polar interval-based localization in mobile sensor networks.\n \n \n \n \n\n\n \n Mourad-Chehade, F.; Honeine, P.; and Snoussi, H.\n\n\n \n\n\n\n IEEE Transactions on Aerospace and Electronic Systems, 49(4): 2310 - 2322. October 2013.\n \n\n\n\n
\n\n\n\n \n \n \"Polar link\n  \n \n \n \"Polar paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{13.taes.wsn,\n   author =  "Farah Mourad-Chehade and Paul Honeine and Hichem Snoussi",\n   title =  "Polar interval-based localization in mobile sensor networks",\n   journal =  "IEEE Transactions on Aerospace and Electronic Systems",\n   year  =  "2013",\n   volume =  "49",\n   number =  "4",\n   pages =  "2310 - 2322",\n   month =  oct,\n   url_link= "https://ieeexplore.ieee.org/document/6621818",\n   doi = "10.1109/TAES.2013.6621818", \n   url_paper  =  "http://www.honeine.fr/paul/publi/13.taes.wsn.pdf",\n   keywords  =  "wireless sensor networks, mobile radio, radiotelemetry, wireless sensor networks, polar-interval-based localization, mobility sensor network, MSN, connectivity measurement, polar coordinate system, PCS, position estimation, Robot sensing systems, Surveillance, Mobile communication, Mobile computing, Simulation, Global Positioning System",\n   abstract = "The problem of localization in uncontrolled mobility sensor networks (MSN) is considered. Based on connectivity measurements, the problem is solved using polar intervals. Computation is performed, in several polar coordinate systems (PCSs), using both polar coordinates and interval analysis. Position estimates are thus partial rings enclosing the exact solution of the problem. Simulation results corroborate the efficiency of the proposed method compared with existing methods, especially with those handling single coordinate systems.",\n}
\n
\n\n\n
\n The problem of localization in uncontrolled mobility sensor networks (MSN) is considered. Based on connectivity measurements, the problem is solved using polar intervals. Computation is performed, in several polar coordinate systems (PCSs), using both polar coordinates and interval analysis. Position estimates are thus partial rings enclosing the exact solution of the problem. Simulation results corroborate the efficiency of the proposed method compared with existing methods, especially with those handling single coordinate systems.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Kernel autoregressive models using Yule-Walker equations.\n \n \n \n \n\n\n \n Kallas, M.; Honeine, P.; Francis, C.; and Amoud, H.\n\n\n \n\n\n\n Signal Processing, 93(11): 3053 - 3061. November 2013.\n \n\n\n\n
\n\n\n\n \n \n \"Kernel link\n  \n \n \n \"Kernel paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{13.sp.kar_yw,\n   author =  "Maya Kallas and Paul Honeine and Clovis Francis and Hassan Amoud",\n   title =  "Kernel autoregressive models using Yule-Walker equations",\n   journal =  "Signal Processing",\n   year  =  "2013",\n   volume =  "93",\n   number =  "11",\n   pages =  "3053 - 3061",\n   month =  nov,\n   doi = "10.1016/j.sigpro.2013.03.032",\n   url_link= "https://www.sciencedirect.com/science/article/pii/S0165168413001242",\n   url_paper  =  "http://www.honeine.fr/paul/publi/13.sp.kar_yw.pdf",\n   keywords  =  "machine learning, pre-image problem, adaptive filtering, kernel machines, autoregressive model, time series prediction, Yule–Walker equations",\n   abstract = "This paper proposes nonlinear autoregressive (AR) models for time series, within the framework of kernel machines. Two models are investigated. In the first proposed model, the AR model is defined on the mapped samples in the feature space. In order to predict a future sample, this formulation requires solving a pre-image problem to get back to the input space. We derive an iterative technique to provide a fine-tuned solution to this problem. The second model bypasses the pre-image problem, by defining the AR model with a hybrid model, offering a tradeoff between computational time and precision compared to the iterative, fine-tuned model. By considering the stationarity assumption, we derive the corresponding Yule–Walker equations for each model, and show the ease of solving these problems. The relevance of the proposed models is studied on several time series, and compared with other well-known models in terms of accuracy and computational complexity."\n}\n
\n
\n\n\n
\n This paper proposes nonlinear autoregressive (AR) models for time series, within the framework of kernel machines. Two models are investigated. In the first proposed model, the AR model is defined on the mapped samples in the feature space. In order to predict a future sample, this formulation requires solving a pre-image problem to get back to the input space. We derive an iterative technique to provide a fine-tuned solution to this problem. The second model bypasses the pre-image problem, by defining the AR model with a hybrid model, offering a tradeoff between computational time and precision compared to the iterative, fine-tuned model. By considering the stationarity assumption, we derive the corresponding Yule–Walker equations for each model, and show the ease of solving these problems. The relevance of the proposed models is studied on several time series, and compared with other well-known models in terms of accuracy and computational complexity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Non-negativity constraints on the pre-image for pattern recognition with kernel machines.\n \n \n \n \n\n\n \n Kallas, M.; Honeine, P.; Richard, C.; Francis, C.; and Amoud, H.\n\n\n \n\n\n\n Pattern Recognition, 46(11): 3066 - 3080. November 2013.\n \n\n\n\n
\n\n\n\n \n \n \"Non-negativity link\n  \n \n \n \"Non-negativity paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{13.pr.nn_preimage,\n   author =  "Maya Kallas and Paul Honeine and Cédric Richard and Clovis Francis and Hassan Amoud",\n   title =  "Non-negativity constraints on the pre-image for pattern recognition with kernel machines",\n   journal =  "Pattern Recognition",\n   year  =  "2013",\n   volume =  "46",\n   number =  "11",\n   pages =  "3066 - 3080",\n   month =  nov,\n   url_link= "https://www.sciencedirect.com/science/article/abs/pii/S0031320313001507",\n   doi ="10.1016/j.patcog.2013.03.021",\n   url_paper  =  "http://www.honeine.fr/paul/publi/13.pr.nn_preimage",\n   keywords  =  "machine learning, pre-image problem, adaptive filtering, kernel machines, machine learning, SVM, kernel PCA, pre-image problem, non-negativity constraints, nonlinear denoising, pattern recognition",\n   abstract = "Rules of physics in many real-life problems force some constraints to be satisfied. This paper deals with nonlinear pattern recognition under non-negativity constraints. While kernel principal component analysis can be applied for feature extraction or data denoising, in a feature space associated with the considered kernel function, a pre-image technique is required to go back to the input space, e.g., representing a feature in the space of input signals. The main purpose of this paper is to study a constrained pre-image problem with non-negativity constraints. We provide new theoretical results on the pre-image problem, including the weighted combination form of the pre-image, and demonstrate sufficient conditions for the convexity of the problem. The constrained problem is considered with non-negativity imposed either on the pre-image itself or on the weights. We propose a simple iterative scheme to incorporate both constraints. A fortuitous side-effect of our method is the sparsity in the representation, a property investigated in this paper. Experimental results are conducted on artificial and real datasets, where many properties are investigated including the sparsity property, and compared to other methods from the literature. The relevance of the proposed method is demonstrated with experiments on artificial data and on two types of real datasets in signal and image processing.",\n}\n
\n
\n\n\n
\n Rules of physics in many real-life problems force some constraints to be satisfied. This paper deals with nonlinear pattern recognition under non-negativity constraints. While kernel principal component analysis can be applied for feature extraction or data denoising, in a feature space associated with the considered kernel function, a pre-image technique is required to go back to the input space, e.g., representing a feature in the space of input signals. The main purpose of this paper is to study a constrained pre-image problem with non-negativity constraints. We provide new theoretical results on the pre-image problem, including the weighted combination form of the pre-image, and demonstrate sufficient conditions for the convexity of the problem. The constrained problem is considered with non-negativity imposed either on the pre-image itself or on the weights. We propose a simple iterative scheme to incorporate both constraints. A fortuitous side-effect of our method is the sparsity in the representation, a property investigated in this paper. Experimental results are conducted on artificial and real datasets, where many properties are investigated including the sparsity property, and compared to other methods from the literature. The relevance of the proposed method is demonstrated with experiments on artificial data and on two types of real datasets in signal and image processing.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Nonlinear unmixing of hyperspectral data based on a linear-mixture/nonlinear-fluctuation model.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; and Honeine, P.\n\n\n \n\n\n\n IEEE Transactions on Signal Processing, 61(2): 480 - 492. January 2013.\n \n\n\n\n
\n\n\n\n \n \n \"Nonlinear link\n  \n \n \n \"Nonlinear paper\n  \n \n \n \"Nonlinear code\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 10 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{13.tsp.unmix,\n   author =  "Jie Chen and Cédric Richard and Paul Honeine",\n   title =  "Nonlinear unmixing of hyperspectral data based on a linear-mixture/nonlinear-fluctuation model",\n   journal =  "IEEE Transactions on Signal Processing",\n   year  =  "2013",\n   volume =  "61",\n   number =  "2",\n   pages =  "480 - 492",\n   month =  jan,\n   url_link= "https://ieeexplore.ieee.org/document/6320670",\n   doi = "10.1109/TSP.2012.2222390",\n   url_paper  =  "http://www.honeine.fr/paul/publi/13.tsp.unmix.pdf",\n   url_code  =  "http://www.honeine.fr/paul/publi/13.tsp.unmix.zip",\n   keywords  =  "non-negativity, hyperspectral, hyperspectral imaging, multi-kernel learning, nonlinear spectral unmixing, support vector regression, geophysical image processing, nonlinear unmixing, hyperspectral data, linear-mixture-nonlinear-fluctuation model, kernel-based paradigm, mixing mechanism, hyperspectral imaging, Hyperspectral imaging, Kernel, Materials, Estimation, Vectors, Hyperspectral imaging, multi-kernel learning, nonlinear spectral unmixing, support vector regression",\n   abstract = "Spectral unmixing is an important issue in the analysis of remotely sensed hyperspectral data. Although the linear mixture model has obvious practical advantages, there are many situations in which it may not be appropriate and could be advantageously replaced by a nonlinear one. In this paper, we formulate a new kernel-based paradigm that relies on the assumption that the mixing mechanism can be described by a linear mixture of endmember spectra, with additive nonlinear fluctuations defined in a reproducing kernel Hilbert space. This family of models has a clear interpretation and allows complex interactions between endmembers to be taken into account. Extensive experimental results, with both synthetic and real images, illustrate the generality and effectiveness of this scheme compared with state-of-the-art methods.", \n}\n
\n
\n\n\n
\n Spectral unmixing is an important issue in the analysis of remotely sensed hyperspectral data. Although the linear mixture model has obvious practical advantages, there are many situations in which it may not be appropriate and could be advantageously replaced by a nonlinear one. In this paper, we formulate a new kernel-based paradigm that relies on the assumption that the mixing mechanism can be described by a linear mixture of endmember spectra, with additive nonlinear fluctuations defined in a reproducing kernel Hilbert space. This family of models has a clear interpretation and allows complex interactions between endmembers to be taken into account. Extensive experimental results, with both synthetic and real images, illustrate the generality and effectiveness of this scheme compared with state-of-the-art methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multiclass classification machines with the complexity of a single binary classifier.\n \n \n \n \n\n\n \n Honeine, P.; Noumir, Z.; and Richard, C.\n\n\n \n\n\n\n Signal Processing, 93(5): 1013 - 1026. May 2013.\n \n\n\n\n
\n\n\n\n \n \n \"Multiclass link\n  \n \n \n \"Multiclass paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{13.sp.multiclass,\n   author =  "Paul Honeine and Zineb Noumir and Cédric Richard",\n   title =  " Multiclass classification machines with the complexity of a single binary classifier",\n   journal =  "Signal Processing",\n   year  =  "2013",\n   volume =  "93",\n   number =  "5",\n   pages =  "1013 - 1026",\n   month =  may,\n   url_link =  "http://dx.doi.org/10.1016/j.sigpro.2012.11.009",\n   doi  =  "10.1016/j.sigpro.2012.11.009",\n  url_paper  =  "http://www.honeine.fr/paul/publi/13.sp.multiclass.pdf",\n   keywords  =  "machine learning, multiclass, multiclass classification, machine learning, SVM, one-versus-all, least-squares classification",\n  abstract = "In this paper, we study the multiclass classification problem. We derive a framework to solve this problem by providing algorithms with the complexity of a single binary classifier. The resulting multiclass machines can be decomposed into two categories. The first category corresponds to vector-output machines, where we develop several algorithms. In the second category, we show that the least-squares classifier can be easily cast into a multiclass one-versus-all scheme, without the need to train multiple binary classifiers. The proposed framework shows that, while keeping the classification accuracy essentially unchanged, the computational complexity is orders of magnitude lower than those previously reported in the literature. This makes our approach extremely powerful and conceptually simple. Moreover, we study the coding of the multiclass labels, and demonstrate that several celebrated approaches are equivalent. These arguments are illustrated with experimentations on well-known benchmarks."\n}\n
\n
\n\n\n
\n In this paper, we study the multiclass classification problem. We derive a framework to solve this problem by providing algorithms with the complexity of a single binary classifier. The resulting multiclass machines can be decomposed into two categories. The first category corresponds to vector-output machines, where we develop several algorithms. In the second category, we show that the least-squares classifier can be easily cast into a multiclass one-versus-all scheme, without the need to train multiple binary classifiers. The proposed framework shows that, while keeping the classification accuracy essentially unchanged, the computational complexity is orders of magnitude lower than that of methods previously reported in the literature. This makes our approach extremely powerful and conceptually simple. Moreover, we study the coding of the multiclass labels, and demonstrate that several celebrated approaches are equivalent. These arguments are illustrated with experiments on well-known benchmarks.\n
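The one-versus-all idea with single-classifier complexity can be illustrated with a short sketch: one regularized least-squares solve, shared by all classes, replaces the training of several binary classifiers. This is an illustrative reading of the abstract, not the authors' exact algorithm; the function names and the ridge parameter are ours.

```python
import numpy as np

def fit_ls_multiclass(X, y, lam=1e-3):
    """One-versus-all least-squares classifier: the targets of all classes
    are stacked into one matrix, so a single regularized solve (the cost of
    one binary classifier) yields every one-versus-all discriminant."""
    n, d = X.shape
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float) * 2 - 1  # +/-1 targets
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)     # shared solve
    return W, classes

def predict(W, classes, X):
    # Winning class = largest one-versus-all score.
    return classes[np.argmax(X @ W, axis=1)]
```

A usage example on three well-separated Gaussian blobs (with a bias column appended) classifies essentially all points correctly, while the expensive matrix factorization is performed only once.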
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Approches géométriques pour l'estimation des fractions d'abondance en traitement de données hyperspectrales. Extensions aux modèles de mélange non linéaires.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; and Nguyen, N. H.\n\n\n \n\n\n\n Traitement du signal, 30(1-2): 61 - 86. 2013.\n \n\n\n\n
\n\n\n\n \n \n \"Approches paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{13.ts,\n   author =  "Paul Honeine and Cédric Richard and Nguyen Hoang Nguyen",\n   title =  "Approches géométriques pour l'estimation des fractions d'abondance en traitement de données hyperspectales. Extensions aux modèles de mélange non linéaires",\n   journal =  "Traitement du signal",\n   pages =  "61 - 86",\n   year  =  "2013",\n   number  =  "1-2",\n   volume  =   "30",\n   doi = "10.3166/ts.30.61-86",\n   url_paper  =  "http://www.honeine.fr/paul/publi/13.ts.demelange.pdf",\n   keywords =  "hyperspectral, traitement de données hyperspectrales, démélange non linéaire, approches géométriques, réduction de dimension, distance géodésique, hyperspectral data processing, nonlinear unmixing, geometric unmixing methods, abundance estimation, dimensionality reduction techniques, geodesic distance",\n   abstract = "In hyperspectral image unmixing, a collection of pure spectra, the so-called endmembers, is identified and their abundance fractions are estimated at each pixel. While endmembers are often extracted using a geometric approach, the abundances are usually estimated by solving an inverse problem. In this paper, we bypass the problem of abundance estimation by using a geometric point of view. The proposed framework shows that a large number of endmember extraction techniques can be adapted to jointly estimate the abundance fractions, with no additional computational complexity. This is illustrated in this paper with the N-Findr, SGA, VCA, OSP, and ICE endmember extraction techniques. A nonlinear extension is also proposed, using non linear dime nsion reduction methods such as MDS, LLE and ISOMAP. These strategies maintain the geometric unmixing algorithms unchanged, for endmember extraction as well as abundance fraction estimation. The relevance of the proposed approach is illustrated through experiments on synthesized data and real hyperspectral image.",\n}\n
\n
\n\n\n
\n In hyperspectral image unmixing, a collection of pure spectra, the so-called endmembers, is identified and their abundance fractions are estimated at each pixel. While endmembers are often extracted using a geometric approach, the abundances are usually estimated by solving an inverse problem. In this paper, we bypass the problem of abundance estimation by using a geometric point of view. The proposed framework shows that a large number of endmember extraction techniques can be adapted to jointly estimate the abundance fractions, with no additional computational complexity. This is illustrated in this paper with the N-Findr, SGA, VCA, OSP, and ICE endmember extraction techniques. A nonlinear extension is also proposed, using nonlinear dimension reduction methods such as MDS, LLE and ISOMAP. These strategies maintain the geometric unmixing algorithms unchanged, for endmember extraction as well as abundance fraction estimation. The relevance of the proposed approach is illustrated through experiments on synthesized data and a real hyperspectral image.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Constrained Kaczmarz's Cyclic Projections for Unmixing Hyperspectral Data.\n \n \n \n \n\n\n \n Honeine, P.; Lantéri, H.; and Richard, C.\n\n\n \n\n\n\n In Proc. 21th European Conference on Signal Processing (EUSIPCO), pages 1-5, Marrakech, Morocco, 9 - 13 September 2013. \n \n\n\n\n
\n\n\n\n \n \n \"Constrained link\n  \n \n \n \"Constrained paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{13.eusipco.Kaczmarz,\n   author =  "Paul Honeine and Henri Lantéri and Cédric Richard",\n   title =  "Constrained Kaczmarz's Cyclic Projections for Unmixing Hyperspectral Data",\n   booktitle =  "Proc. 21th European Conference on Signal Processing (EUSIPCO)",\n   address =  "Marrakech, Morocco",\n   year  =  "2013",\n   month =  "9 - 13~" # sep,\n   acronym =  "EUSIPCO",\n   pages={1-5},\n   url_link= "https://ieeexplore.ieee.org/document/6811608",\n   url_paper   =  "http://honeine.fr/paul/publi/13.eusipco.Kaczmarz.pdf",\n   abstract={The estimation of fractional abundances under physical constraints is a fundamental problem in hyperspectral data processing. In this paper, we propose to adapt Kaczmarz's cyclic projections to solve this problem. The main contribution of this work is two-fold: On the one hand, we show that the non-negativity and the sum-to-one constraints can be easily imposed in Kaczmarz's cyclic projections, and on the second hand, we illustrate that these constraints are advantageous in the convergence behavior of the algorithm. To this end, we derive theoretical results on the convergence performance, both in the noiseless case and in the case of noisy data. Experimental results show the relevance of the proposed method.}, \n   keywords={adaptive filters, hyperspectral imaging, optimisation, constrained Kaczmarz cyclic projections, hyperspectral data unmixing, hyperspectral data processing, constrained optimization, Optimized production technology, Hyperspectral imaging, Convergence, Noise, Noise measurement, Vectors, Constrained optimization, Kaczmarz's cyclic projections, hyperspectral data, unmixing problem}, \n   ISSN={2076-1465}, \n}\n
\n
\n\n\n
\n The estimation of fractional abundances under physical constraints is a fundamental problem in hyperspectral data processing. In this paper, we propose to adapt Kaczmarz's cyclic projections to solve this problem. The main contribution of this work is twofold: on the one hand, we show that the non-negativity and the sum-to-one constraints can be easily imposed in Kaczmarz's cyclic projections; on the other hand, we illustrate that these constraints improve the convergence behavior of the algorithm. To this end, we derive theoretical results on the convergence performance, both in the noiseless case and in the case of noisy data. Experimental results show the relevance of the proposed method.\n
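The cyclic-projection idea behind this abstract can be sketched as follows: each spectral band defines a hyperplane, the iterate is projected onto the hyperplanes in turn, and the physical constraints are imposed by a simple clip-and-renormalize step. This is a minimal illustrative variant, not necessarily the exact constrained scheme analyzed in the paper; the function name and the sweep count are ours.

```python
import numpy as np

def constrained_kaczmarz(M, x, n_sweeps=200):
    """Kaczmarz cyclic projections for M a = x, with non-negativity and
    sum-to-one enforced after each sweep (illustrative variant)."""
    n_bands, n_end = M.shape
    a = np.full(n_end, 1.0 / n_end)                # start on the simplex
    for _ in range(n_sweeps):
        for i in range(n_bands):                   # one hyperplane per band
            m = M[i]
            a = a + (x[i] - m @ a) / (m @ m) * m   # project onto {a : m.a = x_i}
        a = np.clip(a, 0.0, None)                  # non-negativity
        a /= a.sum()                               # sum-to-one
    return a
```

On a consistent synthetic mixture whose true abundances satisfy both constraints, the iterate converges to the true abundance vector, since the feasible sets all contain it.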
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Intrusion Detection in SCADA Systems Using One-Class Classification.\n \n \n \n \n\n\n \n Nader, P.; Honeine, P.; and Beauseroy, P.\n\n\n \n\n\n\n In Proc. 21st European Conference on Signal Processing (EUSIPCO), pages 1-5, Marrakech, Morocco, 9 - 13 September 2013. \n \n\n\n\n
\n\n\n\n \n \n \"Intrusion link\n  \n \n \n \"Intrusion paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{13.eusipco.oneclass,\n   author =  "Patric Nader and Paul Honeine and Pierre Beauseroy",\n   title =  "Intrusion Detection in SCADA Systems Using One-Class Classification",\n   booktitle =  "Proc. 21th European Conference on Signal Processing (EUSIPCO)",\n   address =  "Marrakech, Morocco",\n   year  =  "2013",\n   month =  "9 - 13~" # sep,\n   acronym =  "EUSIPCO",\n   pages={1-5}, \n   url_link= "https://ieeexplore.ieee.org/document/6811620",\n   url_paper   =  "http://honeine.fr/paul/publi/13.eusipco.monoclass.pdf",\n   abstract={Supervisory Control and Data Acquisition (SCADA) systems allow remote monitoring and control of critical infrastructures such as electrical power grids, gas pipelines, nuclear power plants, etc. Cyberattacks threatening these infrastructures may cause serious economic losses and may impact the health and safety of the employees and the citizens living in the area. The diversity of cyberattacks and the complexity of the studied systems make modeling cyberattacks very difficult or even impossible. This paper outlines the importance of one-class classification in detecting intrusions in SCADA systems. Two approaches are investigated, the SupportVector Data Description and the Kernel Principal Component Analysis. 
A case study on a gas pipeline testbed is provided with real data containing many types of cyberattacks.}, \n   keywords={machine learning, one-class, cybersecurity, critical infrastructures, pattern classification, pipelines, principal component analysis, SCADA systems, security of data, support vector machines, intrusion detection, SCADA systems, one-class classification, supervisory control and data acquisition systems, remote monitoring, critical infrastructures, electrical power grids, gas pipelines, nuclear power plants, cyberattacks, economic losses, employee health, employee safety, cyberattack modeling, support vector data description, kernel principal component analysis, gas pipeline testbed, Kernel, Computer crime, Pipelines, SCADA systems, Training, Support vector machines, Intrusion detection, One-class classification, intrusion detection, kernel methods, novelty detection, SCADA systems}, \n   ISSN={2076-1465}, \n}\n
\n
\n\n\n
\n Supervisory Control and Data Acquisition (SCADA) systems allow remote monitoring and control of critical infrastructures such as electrical power grids, gas pipelines, and nuclear power plants. Cyberattacks threatening these infrastructures may cause serious economic losses and may impact the health and safety of the employees and the citizens living in the area. The diversity of cyberattacks and the complexity of the studied systems make modeling cyberattacks very difficult or even impossible. This paper outlines the importance of one-class classification in detecting intrusions in SCADA systems. Two approaches are investigated: the Support Vector Data Description and Kernel Principal Component Analysis. A case study on a gas pipeline testbed is provided with real data containing many types of cyberattacks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Decentralized Localization Using Fingerprinting and Kernel Methods in Wireless Sensor Networks.\n \n \n \n \n\n\n \n Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; and Snoussi, H.\n\n\n \n\n\n\n In Proc. 21st European Conference on Signal Processing (EUSIPCO), pages 1-5, Marrakech, Morocco, 9 - 13 September 2013. \n \n\n\n\n
\n\n\n\n \n \n \"Decentralized link\n  \n \n \n \"Decentralized paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{13.eusipco.wsn,\n   author =  "Sandy Mahfouz and Farah Mourad-Chehade and Paul Honeine and Joumana Farah and Hichem Snoussi",\n   title =  "Decentralized Localization Using Fingerprinting and Kernel Methods in Wireless Sensor Networks",\n   booktitle =  "Proc. 21th European Conference on Signal Processing (EUSIPCO)",\n   address =  "Marrakech, Morocco",\n   year  =  "2013",\n   month =  "9 - 13~" # sep,\n   pages={1-5}, \n   acronym =  "EUSIPCO",\n   url_link= "https://ieeexplore.ieee.org/document/6811616",\n   url_paper   =  "https://hal.archives-ouvertes.fr/hal-01966008/document",\n   abstract={Sensors localization has become an essential issue in wireless sensor networks. This paper presents a decentralized localization algorithm that makes use of radio-location fingerprinting and kernel methods. The proposed algorithm consists of dividing the network into several zones, each of which having a calculator capable of emitting, receiving and processing data. By using radio-information of its zone, each calculator constructs, by means of kernel methods, a model estimating the nodes positions. Calculators estimates are then combined together, leading to final position estimates. Compared to centralized methods, this technique is more robust, less energy-consuming, with a lower processing complexity.}, \n   keywords={radio direction-finding, sensor placement, wireless sensor networks, decentralized localization, kernel methods, sensors localization, wireless sensor networks, radio-location fingerprinting, Calculators, Wireless sensor networks, Sensors, Mathematical model, Kernel, Trajectory, Robustness, decentralized processing, fingerprinting, localization, machine learning, wireless sensor networks}, \n   ISSN={2076-1465}, \n}\n
\n
\n\n\n
\n Sensor localization has become an essential issue in wireless sensor networks. This paper presents a decentralized localization algorithm that makes use of radio-location fingerprinting and kernel methods. The proposed algorithm consists of dividing the network into several zones, each of which has a calculator capable of emitting, receiving, and processing data. Using the radio information of its zone, each calculator constructs, by means of kernel methods, a model estimating the nodes' positions. The calculators' estimates are then combined, leading to final position estimates. Compared to centralized methods, this technique is more robust, less energy-consuming, and has lower processing complexity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Non-stationary Analysis of the Convergence of the Non-negative Least-mean-square Algorithm.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; Bermudez, J. C. M.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 21st European Conference on Signal Processing (EUSIPCO), pages 1-5, Marrakech, Morocco, 9 - 13 September 2013. \n \n\n\n\n
\n\n\n\n \n \n \"Non-stationary link\n  \n \n \n \"Non-stationary paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{13.eusipco.nnlms,\n   author =  "Jie Chen and Cédric Richard and José C. M. Bermudez and Paul Honeine",\n   title =  "Non-stationary Analysis of the Convergence of the Non-negative Least-mean-square Algorithm",\n   booktitle =  "Proc. 21th European Conference on Signal Processing (EUSIPCO)",\n   address =  "Marrakech, Morocco",\n   year  =  "2013",\n   month =  "9 - 13~" # sep,\n   acronym =  "EUSIPCO",\n   pages={1-5}, \n   url_link= "https://ieeexplore.ieee.org/document/6811793",\n   url_paper   =  "http://honeine.fr/paul/publi/13.eusipco.nnlms.pdf",\n   abstract={Non-negativity is a widely used constraint in parameter estimation procedures due to physical characteristics of systems under investigation. In this paper, we consider an LMS-type algorithm for system identification subject to non-negativity constraints, called Non-Negative Least-Mean-Square algorithm, and its normalized variant. An important contribution of this paper is that we study the stochastic behavior of these algorithms in a non-stationary environment, where the unconstrained solution is characterized by a time-variant mean and is affected by random perturbations. Convergence analysis of these algorithms in a stationary environment can be viewed as a particular case of the convergence model derived in this paper. Simulation results are presented to illustrate the performance of the algorithm and the accuracy of the derived models.}, \n   keywords={machine learning, non-stationarity, adaptive filtering, convergence of numerical methods, least mean squares methods, parameter estimation, stochastic processes, time-varying systems, nonstationary analysis, nonnegative least mean square algorithm, parameter estimation procedures, system identification, stochastic behavior, unconstrained solution, time variant mean, convergence analysis, Abstracts, Steady-state, Non-negativity constraint, adaptive filtering, non-stationary signal, convergence analysis}, \n   ISSN={2076-1465}, \n}\n
\n
\n\n\n
\n Non-negativity is a widely used constraint in parameter estimation procedures due to physical characteristics of systems under investigation. In this paper, we consider an LMS-type algorithm for system identification subject to non-negativity constraints, called Non-Negative Least-Mean-Square algorithm, and its normalized variant. An important contribution of this paper is that we study the stochastic behavior of these algorithms in a non-stationary environment, where the unconstrained solution is characterized by a time-variant mean and is affected by random perturbations. Convergence analysis of these algorithms in a stationary environment can be viewed as a particular case of the convergence model derived in this paper. Simulation results are presented to illustrate the performance of the algorithm and the accuracy of the derived models.\n
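The Non-Negative LMS update described above scales the usual LMS correction componentwise by the current weights, which keeps a non-negatively initialized filter in the non-negative orthant. A minimal sketch follows; the function name, step size, and data setup are ours, and this shows only the basic (non-normalized) update, not the paper's non-stationary analysis.

```python
import numpy as np

def nnlms(x, d, n_taps, eta=0.05):
    """Non-Negative LMS: w <- w + eta * e * diag(w) * u. The multiplicative
    factor w_i keeps each coefficient non-negative for small step sizes."""
    w = np.full(n_taps, 1.0 / n_taps)          # non-negative initialization
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]      # current input regressor
        e = d[n] - w @ u                       # a-priori estimation error
        w = w + eta * e * w * u                # NNLMS update, scaled by w_i
    return w
```

Identifying a short FIR system with non-negative taps from noiseless data, the iterate converges to the true response while never leaving the non-negative orthant.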
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Mobilité d'un réseau de capteurs sans fil basée sur les méthodes à noyau.\n \n \n \n \n\n\n \n Ghadban, N.; Honeine, P.; Francis, C.; Mourad-Chehade, F.; Farah, J.; and Kallas, M.\n\n\n \n\n\n\n In Actes du 24-ème Colloque GRETSI sur le Traitement du Signal et des Images, Brest, France, September 2013. \n \n\n\n\n
\n\n\n\n \n \n \"Mobilité paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{13.gretsi.wsn,\n   author =  "Nisrine Ghadban and Paul Honeine and Clovis Francis and Farah Mourad-Chehade and Joumana Farah and Maya Kallas",\n   title =  "Mobilité d'un réseau de capteurs sans fil basée sur les méthodes à noyau",\n   booktitle =  "Actes du 24-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Brest, France",\n   year  =  "2013",\n   month =  sep,\n   keywords =  "machine learning, wireless sensor networks, mobility, tracking, online adaptation",\n   acronym =  "GRETSI'13",\n   url_paper   =  "http://honeine.fr/paul/publi/13.gretsi.wsn.pdf",\n   abstract = "Wireless sensor networks have received considerable attention during the last decade for efficient monitoring, due to their low cost, their easy deployment and their capacity to locally process information. This paper derives original mobility schemes that allow improving the tracking of a physical phenomenon. To this end, we use kernel-based methods to construct a local model for each sensor, using the learning process where the input is the position of the sensor and the output is the estimation of the physical phenomenon. We show that kernel-based methods provide an elegant way to optimize the model. This allows us to derive mobility schemes for sensors in such a way to improve the efficiency of the models, by minimizing the estimation error. Sensors are moved according to several optimization techniques, by considering the first and second derivatives of the approximation error. Experimentations aim at estimating a gas diffusion in the space at any location devoid of sensor.",\n   x-abstract_fr = "Les réseaux de capteurs sans fil ont reçu une attention considérable au cours de la dernière décennie en raison de leur faible coût, de la facilité de leur déploiement et de leur aptitude à être employés pour des techniques de surveillance efficaces. 
Le présent article propose des schémas de mobilité qui permettent d'améliorer le suivi d'un phénomène physique. A cette fin, les méthodes à noyaux permettent de construire un modèle local pour chaque capteur, dans lequel l'entrée du modèle est la position géographique du capteur et la sortie est l'estimation de la quantité physique à mesurer. Nous montrons que les méthodes à noyaux fournissent un cadre élégant à ce problème d'estimation. Cela nous permet de définir des schémas de mobilité des capteurs pour améliorer l'efficacité des modèles, de façon à minimiser l'erreur d'approximation. Ainsi, plusieurs techniques d'optimisation sont-elles proposées en tenant compte de la première et de la seconde dérivée de l'erreur d'approximation. Les expérimentations réalisées dans ce cadre, visant à estimer la diffusion d'un gaz, montrent la pertinence de la méthode proposée.",\n}\n
\n
\n\n\n
\n Wireless sensor networks have received considerable attention during the last decade for efficient monitoring, due to their low cost, their easy deployment and their capacity to locally process information. This paper derives original mobility schemes that improve the tracking of a physical phenomenon. To this end, we use kernel-based methods to construct a local model for each sensor, with the sensor's position as input and the estimate of the physical phenomenon as output. We show that kernel-based methods provide an elegant way to optimize the model. This allows us to derive mobility schemes for sensors that improve the efficiency of the models by minimizing the estimation error. Sensors are moved according to several optimization techniques, by considering the first and second derivatives of the approximation error. Experiments aim at estimating gas diffusion in space at any location without a sensor.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptation en ligne d'un dictionnaire pour les méthodes à noyau.\n \n \n \n \n\n\n \n Saidé, C.; Honeine, P.; Lengellé, R.; Richard, C.; and Achkar, R.\n\n\n \n\n\n\n In Actes du 24-ème Colloque GRETSI sur le Traitement du Signal et des Images, Brest, France, September 2013. \n \n\n\n\n
\n\n\n\n \n \n \"Adaptation paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{13.gretsi.dictionary,\n   author =  "Chafic Saidé and Paul Honeine and Régis Lengellé and Cédric Richard and Roger Achkar",\n   title =  "Adaptation en ligne d'un dictionnaire pour les méthodes à noyau",\n   booktitle =  "Actes du 24-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Brest, France",\n   year  =  "2013",\n   month =  sep,\n   keywords =  "machine learning, sparsity, dictionary learning, adaptive learning, online system adaptation, kernel methods",\n   acronym =  "GRETSI'13",\n   url_paper   =  "http://honeine.fr/paul/publi/13.gretsi.dictionary.pdf",\n   abstract = "This article tackles the online identification problem for nonlinear and nonstationary systems using kernel methods. The order of the model is controlled by the coherence criterion considered as a sparsification technique which leads to select the most relevant kernel functions to form a dictionary. We explore the dictionary adaptation using a stochastic gradient descent method along with an online kernel identification algorithm. For the latter, without limitation, it may be the kernel recursive least squares algorithm or the kernel affine projection algorithm. The proposed method leads to a reduction of the instantaneous quadratic error and to a decrease in the model's order.",\n   x-abstract_fr = "Cet article traite du problème de l'identification en-ligne des systèmes non linéaires et non stationnaires par les méthodes à noyau. L'ordre des modèles est contrôlé par le critère de cohérence utilisé comme critère de parcimonie, qui mène à sélectionner les fonctions noyau les plus pertinentes au sens de ce critère, formant ainsi un dictionnaire. On exploite l'adaptation du dictionnaire en proposant une méthode de descente de gradient stochastique qui s'applique conjointement à l'estimation en ligne des coefficients du modèle à noyau. un algorithme d'identification en ligne à noyau. 
Pour ce dernier, sans limitation, il peut s'agir de l'algorithme de moindres carrés récursif à noyau ou de projection affine à noyau. La méthode proposée permet une diminution de l'erreur quadratique instantanée et une réduction de la complexité du modèle.",\n}\n
\n
\n\n\n
\n This article tackles the online identification problem for nonlinear and nonstationary systems using kernel methods. The order of the model is controlled by the coherence criterion, a sparsification technique that selects the most relevant kernel functions to form a dictionary. We explore dictionary adaptation using a stochastic gradient descent method along with an online kernel identification algorithm; the latter may be, for instance, the kernel recursive least-squares algorithm or the kernel affine projection algorithm. The proposed method reduces the instantaneous quadratic error and decreases the model's order.\n
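The coherence criterion mentioned in this abstract admits a very short sketch: a new sample joins the dictionary only if its kernel value (the coherence, for a unit-norm kernel) with every stored atom stays below a threshold mu0. This is an illustration of the sparsification rule only, not of the dictionary-adaptation algorithm; the function name, the Gaussian kernel choice, and the parameters are ours.

```python
import numpy as np

def coherence_dictionary(samples, mu0=0.5, sigma=1.0):
    """Coherence-based sparsification: keep a sample as a dictionary atom
    only if its Gaussian-kernel coherence with every atom is at most mu0."""
    def kernel(a, b):
        # Unit-norm Gaussian kernel, so coherence = kernel value.
        return np.exp(-np.linalg.norm(a - b) ** 2 / (2 * sigma ** 2))
    dictionary = []
    for s in samples:
        if all(kernel(s, atom) <= mu0 for atom in dictionary):
            dictionary.append(s)
    return dictionary
```

For instance, among the 1-D samples 0, 0.01, 5, 5.02, 10 with sigma = 1 and mu0 = 0.5, only 0, 5, and 10 are retained: the near-duplicates have coherence close to 1 with an existing atom and are discarded.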
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Identification en ligne avec régularisation L1. Algorithme et analyse de convergence en environnement non-stationnaire.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; Bermudez, J. C. M.; and Honeine, P.\n\n\n \n\n\n\n In Actes du 24-ème Colloque GRETSI sur le Traitement du Signal et des Images, Brest, France, September 2013. \n \n\n\n\n
\n\n\n\n \n \n \"Identification paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{13.gretsi.identification,\n   author =  "Jie Chen and Cédric Richard and José C. M. Bermudez and Paul Honeine",\n   title =  "Identification en ligne avec régularisation L1. Algorithme et analyse de convergence en environnement non-stationnaire",\n   booktitle =  "Actes du 24-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Brest, France",\n   year  =  "2013",\n   month =  sep,\n   keywords  =  "machine learning, sparsity, adaptive filtering, non-stationarity",\n   acronym =  "GRETSI'13",\n   url_paper   =  "http://honeine.fr/paul/publi/13.gretsi.sparseonline.pdf",\n   abstract = "This paper presents an online system identification method with L1-norm regularization. Convergence analysis is performed for non-stationary environments. This work is a significant extension of the non-negative LMS in both aspects of algorithm derivation and convergence analysis. According to its performance and computational cost, the proposed algorithm performs similarly as the LMS algorithm but incorporates L1-regularization. Experiments validate the proposed algorithm and its convergence analysis",\n   x-abstract_fr = "Cet article présente une méthode d'identification en ligne de système linéaire avec régularisation L1. L'analyse de convergence est effectuée pour des environnements non-stationnaires. Ce travail est une extension significative des algorithmes de la famille non-negative LMS, à la fois dans la forme de l'algorithme et de l'analyse de convergence. Par ses performances et son coût calculatoire réduit, l'algorithme présente des caractéristiques comparables à l'algorithme LMS tout en prenant en compte la régularisation L1. Des simulations valident la méthode proposée et le modèle de convergence.",\n}\n
\n
\n\n\n
\n This paper presents an online system identification method with L1-norm regularization. Convergence analysis is performed for non-stationary environments. This work significantly extends the non-negative LMS family in both algorithm derivation and convergence analysis. In performance and computational cost, the proposed algorithm is comparable to the LMS algorithm while incorporating L1 regularization. Experiments validate the proposed algorithm and its convergence analysis.\n
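The flavor of L1-regularized online identification can be conveyed by a generic zero-attracting LMS sketch: the standard LMS gradient step plus a small shrinkage term that attracts inactive coefficients toward zero. This is a well-known generic form, not necessarily the authors' exact update; the function name and parameter values are ours.

```python
import numpy as np

def za_lms(x, d, n_taps, mu=0.05, rho=1e-4):
    """Zero-attracting LMS: LMS gradient step plus an l1 shrinkage term,
    one possible form of L1-regularized online identification."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]       # current input regressor
        e = d[n] - w @ u                        # a-priori estimation error
        w = w + mu * e * u - rho * np.sign(w)   # gradient step + zero attractor
    return w
```

On a sparse FIR system, the zero attractor keeps the inactive taps near zero while the active taps converge with only a small rho-dependent bias.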
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Constrained reflect-then-combine methods for unmixing hyperspectral data.\n \n \n \n \n\n\n \n Honeine, P.; and Lantéri, H.\n\n\n \n\n\n\n In Proc. IEEE Workshop on Hyperspectral Image and Signal Processing : Evolution in Remote Sensing (WHISPERS), Gainesville, Florida, USA, 25 - 28 June 2013. \n \n\n\n\n
\n\n\n\n \n \n \"Constrained link\n  \n \n \n \"Constrained paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{13.whispers.constrained,
  author = "Paul Honeine and Henri Lantéri",
  title = "Constrained reflect-then-combine methods for unmixing hyperspectral data",
  booktitle = "Proc. IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)",
  address = "Gainesville, Florida, USA",
  year = "2013",
  month = "25 - 28~" # jun,
  acronym = "WHISPERS",
  url_link = "https://ieeexplore.ieee.org/document/8080643",
  url_paper = "http://honeine.fr/paul/publi/13.whispers.cimmino.pdf",
  abstract = {This paper deals with the linear unmixing problem in hyperspectral data processing, and in particular the estimation of the fractional abundances under sum-to-one and non-negativity constraints. For this purpose, we propose to adapt the reflect-then-combine iterative technique, initially derived by Cimmino. Several strategies are studied in order to handle the constraints, and experimental results are analyzed.},
  keywords = {hyperspectral imaging, image processing, iterative methods, remote sensing, unmixing hyperspectral data, linear unmixing problem, hyperspectral data processing, reflect-then-combine iterative technique, Convergence, Optimization, Mathematical model, Data processing, Estimation, Additives, Constrained optimization, hyperspectral data, unmixing problem, parallel projection, Cimmino's method},
  doi = {10.1109/WHISPERS.2013.8080643},
  ISSN = {2158-6276},
}
This paper deals with the linear unmixing problem in hyperspectral data processing, and in particular the estimation of the fractional abundances under sum-to-one and non-negativity constraints. For this purpose, we propose to adapt the reflect-then-combine iterative technique, initially derived by Cimmino. Several strategies are studied in order to handle the constraints, and experimental results are analyzed.
Estimating abundance fractions of materials in hyperspectral images by fitting a post-nonlinear mixing model.
Chen, J.; Richard, C.; and Honeine, P.
In Proc. IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS), Gainesville, Florida, USA, 25 - 28 June 2013.
@INPROCEEDINGS{13.whispers.postnonlinear,
  author = "Jie Chen and Cédric Richard and Paul Honeine",
  title = "Estimating abundance fractions of materials in hyperspectral images by fitting a post-nonlinear mixing model",
  booktitle = "Proc. IEEE Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS)",
  address = "Gainesville, Florida, USA",
  year = "2013",
  month = "25 - 28~" # jun,
  acronym = "WHISPERS",
  url_link = "https://ieeexplore.ieee.org/document/8080639",
  url_paper = "http://honeine.fr/paul/publi/13.whispers.postnonlinear.pdf",
  abstract = {Within the area of hyperspectral data processing, nonlinear unmixing techniques have emerged as promising alternatives for overcoming the limitations of linear methods. In this paper, we consider the class of post-nonlinear mixing models of the partially linear form. More precisely, these composite models consist of a linear mixing part and a nonlinear fluctuation term defined in a reproducing kernel Hilbert space, both terms being parameterized by the endmember spectral signatures and their respective abundances. These models consider that the reproducing kernel may also depend advantageously on the fractional abundances. An iterative algorithm is then derived to jointly estimate the fractional abundances and to infer the nonlinear functional term.},
  keywords = {hyperspectral, machine learning, geophysical image processing, Hilbert spaces, hyperspectral imaging, iterative methods, nonlinear functions, fractional abundances, nonlinear functional term, hyperspectral images, post-nonlinear mixing model, hyperspectral data processing, nonlinear unmixing techniques, linear methods, partially linear form, composite models, linear mixing part, nonlinear fluctuation term, reproducing kernel Hilbert space, respective abundances, endmember spectral signatures, iterative algorithm, Kernel, Mixture models, Hilbert space, Data processing, Nonlinear unmixing, kernel methods},
  doi = {10.1109/WHISPERS.2013.8080639},
  ISSN = {2158-6276},
}
Within the area of hyperspectral data processing, nonlinear unmixing techniques have emerged as promising alternatives for overcoming the limitations of linear methods. In this paper, we consider the class of post-nonlinear mixing models of the partially linear form. More precisely, these composite models consist of a linear mixing part and a nonlinear fluctuation term defined in a reproducing kernel Hilbert space, both terms being parameterized by the endmember spectral signatures and their respective abundances. These models consider that the reproducing kernel may also depend advantageously on the fractional abundances. An iterative algorithm is then derived to jointly estimate the fractional abundances and to infer the nonlinear functional term.
Kernel-based localization using fingerprinting in wireless sensor networks.
Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; and Snoussi, H.
In Proc. 14th IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC), pages 744 - 748, Darmstadt, Germany, 16 - 19 June 2013.
@INPROCEEDINGS{13.spawc.wsn,
  author = "Sandy Mahfouz and Farah Mourad-Chehade and Paul Honeine and Joumana Farah and Hichem Snoussi",
  title = "Kernel-based localization using fingerprinting in wireless sensor networks",
  booktitle = "Proc. 14th IEEE Workshop on Signal Processing Advances in Wireless Communications (SPAWC)",
  address = "Darmstadt, Germany",
  year = "2013",
  pages = {744 - 748},
  month = "16 - 19~" # jun,
  doi = {10.1109/SPAWC.2013.6612149},
  ISSN = {1948-3244},
  acronym = "SPAWC",
  url_link = "https://ieeexplore.ieee.org/document/6612149",
  url_paper = "http://honeine.fr/paul/publi/13.spawc.wsn.pdf",
  abstract = {Indoor localization is an important issue in wireless sensor networks for a very large number of applications. Recently, localization techniques based on the received signal strength indicator (RSSI) have been widely used due to their simple and low cost implementation. In this paper, we propose an algorithm for localization in wireless sensor networks based on radio-location fingerprinting and kernel methods. The proposed method is compared to another well-known localization algorithm in the case of real data collected in an indoor environment where RSSI measures are affected by noise and other interferences.},
  keywords = {indoor radio, radiofrequency interference, telecommunication signalling, wireless sensor networks, kernel-based localization, indoor localization, received signal strength indicator, RSSI, radio-location fingerprinting, kernel methods, indoor environment, interference, Wireless communication, Conferences, Signal processing, Kernel, Real-time systems, Signal processing algorithms, fingerprinting, localization},
}
Indoor localization is an important issue in wireless sensor networks for a very large number of applications. Recently, localization techniques based on the received signal strength indicator (RSSI) have been widely used due to their simple and low cost implementation. In this paper, we propose an algorithm for localization in wireless sensor networks based on radio-location fingerprinting and kernel methods. The proposed method is compared to another well-known localization algorithm in the case of real data collected in an indoor environment where RSSI measures are affected by noise and other interferences.
Nonlinear unmixing of hyperspectral data with partially linear least-squares support vector regression.
Chen, J.; Richard, C.; Ferrari, A.; and Honeine, P.
In Proc. 38th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2174 - 2178, Vancouver, Canada, May 2013.
@INPROCEEDINGS{13.icassp.hype,
  author = "Jie Chen and Cédric Richard and André Ferrari and Paul Honeine",
  title = "Nonlinear unmixing of hyperspectral data with partially linear least-squares support vector regression",
  booktitle = "Proc. 38th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",
  address = "Vancouver, Canada",
  month = may,
  year = "2013",
  pages = {2174 - 2178},
  doi = {10.1109/ICASSP.2013.6638039},
  ISSN = {1520-6149},
  acronym = "ICASSP",
  url_link = "https://ieeexplore.ieee.org/document/6638039",
  url_paper = "http://honeine.fr/paul/publi/13.icassp.hype.pdf",
  abstract = {In recent years, nonlinear unmixing of hyperspectral data has become an attractive topic in hyperspectral image analysis, because nonlinear models appear as more appropriate to represent photon interactions in real scenes. For this challenging problem, nonlinear methods operating in reproducing kernel Hilbert spaces have shown particular advantages. In this paper, we derive an efficient nonlinear unmixing algorithm based on a recently proposed linear mixture/nonlinear fluctuation model. A multi-kernel learning support vector regressor is established to determine material abundances and nonlinear fluctuations. Moreover, a low complexity locally-spatial regularizer is incorporated to enhance the unmixing performance. Experiments with synthetic and real data illustrate the effectiveness of the proposed method.},
  keywords = {machine learning, hyperspectral, Hilbert spaces, hyperspectral imaging, image processing, least squares approximations, regression analysis, support vector machines, multikernel learning support vector regressor, nonlinear fluctuation model, linear mixture model, kernel Hilbert spaces, photon interactions, hyperspectral image analysis, partially linear least-squares support vector regression, hyperspectral data, nonlinear unmixing, Kernel, Materials, Vectors, hyperspectral image, support vector regression, multi-kernel learning, spatial regularization},
}
In recent years, nonlinear unmixing of hyperspectral data has become an attractive topic in hyperspectral image analysis, because nonlinear models appear as more appropriate to represent photon interactions in real scenes. For this challenging problem, nonlinear methods operating in reproducing kernel Hilbert spaces have shown particular advantages. In this paper, we derive an efficient nonlinear unmixing algorithm based on a recently proposed linear mixture/nonlinear fluctuation model. A multi-kernel learning support vector regressor is established to determine material abundances and nonlinear fluctuations. Moreover, a low complexity locally-spatial regularizer is incorporated to enhance the unmixing performance. Experiments with synthetic and real data illustrate the effectiveness of the proposed method.
Estimation locale d'un champ de diffusion par modèles à noyaux.
Ghadban, N.; Honeine, P.; Francis, C.; Mourad-Chehade, F.; Farah, J.; and Kallas, M.
In Actes de la 14-ème conférence ROADEF de la Société Française de Recherche Opérationnelle et Aide à la Décision, Troyes, France, 13 - 15 February 2013.
@INPROCEEDINGS{13.roadef.nisrine,
  author = "Nisrine Ghadban and Paul Honeine and Clovis Francis and Farah Mourad-Chehade and Joumana Farah and Maya Kallas",
  title = "Estimation locale d'un champ de diffusion par modèles à noyaux",
  booktitle = "Actes de la 14-ème conférence ROADEF de la Société Française de Recherche Opérationnelle et Aide à la Décision",
  address = "Troyes, France",
  year = "2013",
  month = "13 - 15~" # feb,
  keywords = "machine learning, wireless sensor networks",
  acronym = "ROADEF",
  url_paper = "http://honeine.fr/paul/publi/13.roadef.abstract.nisrine.pdf",
}
Localisation par fingerprinting et méthodes à noyaux dans les réseaux de capteurs sans fil.
Mahfouz, S.; Mourad-Chehade, F.; Honeine, P.; Farah, J.; and Snoussi, H.
In Actes de la 14-ème conférence ROADEF de la Société Française de Recherche Opérationnelle et Aide à la Décision, Troyes, France, 13 - 15 February 2013.
@INPROCEEDINGS{13.roadef.sandy,
  author = "Sandy Mahfouz and Farah Mourad-Chehade and Paul Honeine and Joumana Farah and Hichem Snoussi",
  title = "Localisation par fingerprinting et méthodes à noyaux dans les réseaux de capteurs sans fil",
  booktitle = "Actes de la 14-ème conférence ROADEF de la Société Française de Recherche Opérationnelle et Aide à la Décision",
  address = "Troyes, France",
  year = "2013",
  month = "13 - 15~" # feb,
  keywords = "machine learning, wireless sensor networks, kernel methods, localisation, réseaux de capteurs, fingerprinting, méthodes à noyaux",
  acronym = "ROADEF",
  url_paper = "http://honeine.fr/paul/publi/13.roadef.abstract.sandy.pdf",
}
Contributions en traitement du signal par méthodes d'apprentissage à noyaux.
Honeine, P.
Habilitation thesis (Habilitation à Diriger des Recherches), Ecole Doctorale de l'Université de Technologie de Compiègne, France, December 2013. 164 pages.
@PHDTHESIS{13.HDR,
  author = "Paul Honeine",
  title = "Contributions en traitement du signal par méthodes d'apprentissage à noyaux",
  type = {{HDR}},
  school = "Habilitation à Diriger des Recherches, Ecole Doctorale de l'Université de Technologie de Compiègne",
  address = "France",
  year = "2013",
  month = dec,
  note = "164 pages",
  keywords = "non-negativity, pre-image problem, wireless sensor networks, hyperspectral, non-stationarity, machine learning, sparsity, adaptive filtering",
  url_paper = "http://honeine.fr/paul/publi/HDR_Paul_HONEINE.pdf",
  url_link = "http://dx.doi.org/10.13140/RG.2.2.34558.69448",
  doi = "10.13140/RG.2.2.34558.69448",
}
2012 (20)
Time-frequency learning machines for nonstationarity detection using surrogates.
Borgnat, P.; Flandrin, P.; Richard, C.; Ferrari, A.; Amoud, H.; and Honeine, P.
In M. Way, J. Scargle, K. Ali, and A. Srivastava, editors, Advances in Machine Learning and Data Mining for Astronomy, Data Mining and Knowledge Discovery series, chapter 22, pages 487 - 503. Chapman and Hall / CRC Press (Taylor and Francis), April 2012.
@incollection{12.tfm,
  author = "Pierre Borgnat and Patrick Flandrin and Cédric Richard and André Ferrari and Hassan Amoud and Paul Honeine",
  title = "Time-frequency learning machines for nonstationarity detection using surrogates",
  booktitle = "Advances in Machine Learning and Data Mining for Astronomy",
  editor = "M. Way and J. Scargle and K. Ali and A. Srivastava",
  series = "Data Mining and Knowledge Discovery series",
  publisher = "Chapman and Hall / CRC Press (Taylor and Francis)",
  chapter = "22",
  pages = "487 - 503",
  year = "2012",
  month = apr,
  isbn = "978-1-4398-4173-0",
  keywords = {non-stationarity, machine learning, time-frequency analysis, non-stationary detection, surrogates},
}
Online kernel principal component analysis: a reduced-order model.
Honeine, P.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 34(9): 1814 - 1826. September 2012.
@ARTICLE{12.tpami,
  author = "Paul Honeine",
  title = "Online kernel principal component analysis: a reduced-order model",
  journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence",
  year = "2012",
  volume = "34",
  number = "9",
  pages = "1814 - 1826",
  month = sep,
  url_link = "https://ieeexplore.ieee.org/document/6112772",
  doi = "10.1109/TPAMI.2011.270",
  url_paper = "http://www.honeine.fr/paul/publi/12.tpami.onlineKPCA.pdf",
  keywords = "non-stationarity, machine learning, pre-image problem, sparsity, adaptive filtering, data analysis, function approximation, principal component analysis, reduced order systems, online kernel principal component analysis, reduced-order model, dimensionality reduction techniques, online algorithm, Oja rule, linear principal axis extraction, kernel-based machines, principal function approximation, synthetic data set, handwritten digit image, classical kernel-PCA, iterative kernel-PCA, Kernel, Eigenvalues and eigenfunctions, Dictionaries, Algorithm design and analysis, Data models, Training data, reproducing kernel, Oja's rule, recursive algorithm",
  abstract = {Kernel principal component analysis (kernel-PCA) is an elegant nonlinear extension of one of the most used data analysis and dimensionality reduction techniques, the principal component analysis. In this paper, we propose an online algorithm for kernel-PCA. To this end, we examine a kernel-based version of Oja's rule, initially put forward to extract a linear principal axis. As with most kernel-based machines, the model order equals the number of available observations. To provide an online scheme, we propose to control the model order. We discuss theoretical results, such as an upper bound on the error of approximating the principal functions with the reduced-order model. We derive a recursive algorithm to discover the first principal axis, and extend it to multiple axes. Experimental results demonstrate the effectiveness of the proposed approach, both on a synthetic data set and on images of handwritten digits, with comparison to classical kernel-PCA and iterative kernel-PCA.},
}
Kernel principal component analysis (kernel-PCA) is an elegant nonlinear extension of one of the most used data analysis and dimensionality reduction techniques, the principal component analysis. In this paper, we propose an online algorithm for kernel-PCA. To this end, we examine a kernel-based version of Oja's rule, initially put forward to extract a linear principal axis. As with most kernel-based machines, the model order equals the number of available observations. To provide an online scheme, we propose to control the model order. We discuss theoretical results, such as an upper bound on the error of approximating the principal functions with the reduced-order model. We derive a recursive algorithm to discover the first principal axis, and extend it to multiple axes. Experimental results demonstrate the effectiveness of the proposed approach, both on a synthetic data set and on images of handwritten digits, with comparison to classical kernel-PCA and iterative kernel-PCA.
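The kernelized Oja rule with a controlled model order, as described in this abstract, can be sketched roughly as follows. This is only an illustration, not the paper's algorithm: the hard cap `max_order`, the Gaussian kernel choice, the step size, and all function names are assumptions of this sketch (the paper derives a principled model-order control and an approximation-error bound).

```python
import numpy as np

def gaussian_kernel(X, y, sigma=1.0):
    # k(x_i, y) for each row x_i of X
    d = np.sum((X - y) ** 2, axis=1)
    return np.exp(-d / (2 * sigma ** 2))

def online_kernel_oja(stream, eta=0.05, sigma=1.0, max_order=20):
    """Track the first kernel principal function psi = sum_i alpha_i k(x_i, .)
    online, truncating the expansion at `max_order` terms (crude stand-in
    for a principled model-order control)."""
    D = []        # dictionary of stored samples
    alpha = []    # expansion coefficients
    for x in stream:
        if not D:
            D.append(x)
            alpha.append(eta)
            continue
        Dm = np.asarray(D)
        y = float(np.dot(alpha, gaussian_kernel(Dm, x, sigma)))  # psi(x)
        alpha = [a * (1 - eta * y * y) for a in alpha]           # Oja shrinkage
        if len(D) < max_order:
            D.append(x)
            alpha.append(eta * y)                                # grow the model
        # else: model order stays fixed
    return np.asarray(D), np.asarray(alpha)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) * np.array([3.0, 0.5])  # anisotropic cloud
D, alpha = online_kernel_oja(X)
print(D.shape, alpha.shape)
```

The point of the sketch is the mechanics: the functional update psi ← psi + eta·psi(x)·(phi(x) − psi(x)·psi) becomes a shrink of all stored coefficients plus one appended coefficient, so bounding the dictionary bounds the cost per sample.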
Geometric unmixing of large hyperspectral images: a barycentric coordinate approach.
Honeine, P.; and Richard, C.
IEEE Transactions on Geoscience and Remote Sensing, 50(6): 2185 - 2195. June 2012.
@ARTICLE{12.tgrs.barycenters,
  author = "Paul Honeine and Cédric Richard",
  title = "Geometric unmixing of large hyperspectral images: a barycentric coordinate approach",
  journal = "IEEE Transactions on Geoscience and Remote Sensing",
  year = "2012",
  volume = "50",
  number = "6",
  pages = "2185 - 2195",
  month = jun,
  doi = "10.1109/TGRS.2011.2170999",
  url_paper = "http://www.honeine.fr/paul/publi/11.tgrs.barycenters.pdf",
  keywords = "hyperspectral, computational complexity, feature extraction, geometry, geophysical image processing, least squares approximations, remote sensing, statistical analysis, geometric unmixing, barycentric coordinate approach, hyperspectral imaging, endmember extraction techniques, statistical formulation, geometrical formulation, least-squares solution, abundance estimation, N-Findr, vertex component analysis, iterated constrained endmembers, Linear systems, Estimation, Algorithm design and analysis, Ice, Vectors, Cramer's rule, endmember extraction, hyperspectral image, orthogonal subspace projection, simplex, simplex growing algorithm, unmixing spectral data",
  abstract = {In hyperspectral imaging, spectral unmixing is one of the most challenging and fundamental problems. It consists of breaking down the spectrum of a mixed pixel into a set of pure spectra, called endmembers, and their contributions, called abundances. Many endmember extraction techniques have been proposed in literature, based on either a statistical or a geometrical formulation. However, most, if not all, of these techniques for estimating abundances use a least-squares solution. In this paper, we show that abundances can be estimated using a geometric formulation. To this end, we express abundances with the barycentric coordinates in the simplex defined by endmembers. We propose to write them in terms of a ratio of volumes or a ratio of distances, which are quantities that are often computed to identify endmembers. This property allows us to easily incorporate abundance estimation within conventional endmember extraction techniques, without incurring additional computational complexity. We use this key property with various endmember extraction techniques, such as N-Findr, vertex component analysis, simplex growing algorithm, and iterated constrained endmembers. The relevance of the method is illustrated with experimental results on real hyperspectral images.},
}
In hyperspectral imaging, spectral unmixing is one of the most challenging and fundamental problems. It consists of breaking down the spectrum of a mixed pixel into a set of pure spectra, called endmembers, and their contributions, called abundances. Many endmember extraction techniques have been proposed in literature, based on either a statistical or a geometrical formulation. However, most, if not all, of these techniques for estimating abundances use a least-squares solution. In this paper, we show that abundances can be estimated using a geometric formulation. To this end, we express abundances with the barycentric coordinates in the simplex defined by endmembers. We propose to write them in terms of a ratio of volumes or a ratio of distances, which are quantities that are often computed to identify endmembers. This property allows us to easily incorporate abundance estimation within conventional endmember extraction techniques, without incurring additional computational complexity. We use this key property with various endmember extraction techniques, such as N-Findr, vertex component analysis, simplex growing algorithm, and iterated constrained endmembers. The relevance of the method is illustrated with experimental results on real hyperspectral images.
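The ratio-of-volumes identity behind the barycentric coordinates mentioned in this abstract can be shown on a toy case. This is a hedged sketch, not the paper's implementation: it assumes pixels were already projected onto the affine span of the R endmembers (so R = d + 1), all function names are mine, and the `abs()` makes it valid only for pixels inside the simplex.

```python
import numpy as np

def simplex_volume(V):
    # V: (R, d) vertices of a simplex with R = d + 1; the determinant of the
    # edge matrix is proportional to the volume (the common 1/d! cancels in ratios)
    edges = V[1:] - V[0]
    return abs(np.linalg.det(edges))

def barycentric_abundances(E, x):
    """Abundance of endmember j = volume of the simplex with vertex j
    replaced by the pixel x, divided by the volume of the full simplex.
    E: (R, d) endmember signatures with R = d + 1."""
    R = E.shape[0]
    vol = simplex_volume(E)
    a = np.empty(R)
    for j in range(R):
        Ej = E.copy()
        Ej[j] = x                      # replace vertex j by the pixel
        a[j] = simplex_volume(Ej) / vol
    return a

E = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # toy 2-D endmembers
x = 0.2 * E[0] + 0.5 * E[1] + 0.3 * E[2]            # pixel inside the simplex
print(barycentric_abundances(E, x))                 # ≈ [0.2 0.5 0.3], sums to one
```

The appeal sketched here is the one the abstract claims: algorithms such as N-Findr already compute these simplex volumes while searching for endmembers, so the abundances come essentially for free.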
Kernels for time series of exponential decay/growth processes.
Noumir, Z.; Honeine, P.; and Richard, C.
In Proc. 22nd IEEE workshop on Machine Learning for Signal Processing (MLSP), pages 1 - 6, Santander, Spain, 23 - 26 September 2012.
@INPROCEEDINGS{12.mlsp.kernels,
  author = "Zineb Noumir and Paul Honeine and Cédric Richard",
  title = "Kernels for time series of exponential decay/growth processes",
  booktitle = "Proc. 22nd IEEE workshop on Machine Learning for Signal Processing (MLSP)",
  address = "Santander, Spain",
  year = "2012",
  month = "23 - 26~" # sep,
  pages = {1 - 6},
  doi = {10.1109/MLSP.2012.6349753},
  ISSN = {1551-2541},
  acronym = "MLSP",
  url_link = "https://ieeexplore.ieee.org/document/6349753",
  url_paper = "http://honeine.fr/paul/publi/12.mlsp.kernels.pdf",
  abstract = {Many processes exhibit exponential behavior. When kernel-based machines are applied on this type of data, conventional kernels such as the Gaussian kernel are not appropriate. In this paper, we derive kernels adapted to time series of exponential decay or growth processes. We provide a theoretical study of these kernels, including the issue of universality. Experimental results are given on a case study: chlorine decay in water distribution systems.},
  keywords = {machine learning, one-class, cybersecurity, chemical reactions, chemistry computing, chlorine, exponential distribution, support vector machines, time series, exponential decay processes, growth processes, exponential behavior, kernel-based machines, conventional kernels, Gaussian kernel, chlorine decay, water distribution systems, Kernel, Time series analysis, Temperature measurement, Chemicals, Training, Kernel function, one-class classification, normalization, kernel methods},
}
Many processes exhibit exponential behavior. When kernel-based machines are applied on this type of data, conventional kernels such as the Gaussian kernel are not appropriate. In this paper, we derive kernels adapted to time series of exponential decay or growth processes. We provide a theoretical study of these kernels, including the issue of universality. Experimental results are given on a case study: chlorine decay in water distribution systems.
Online One-Class Machines Based on the Coherence Criterion.
Noumir, Z.; Honeine, P.; and Richard, C.
In Proc. 20th European Conference on Signal Processing (EUSIPCO), pages 664 - 668, Bucharest, Romania, 27 - 31 August 2012.
@INPROCEEDINGS{12.eusipco.oneclass,
  author = "Zineb Noumir and Paul Honeine and Cédric Richard",
  title = "Online One-Class Machines Based on the Coherence Criterion",
  booktitle = "Proc. 20th European Conference on Signal Processing (EUSIPCO)",
  address = "Bucharest, Romania",
  year = "2012",
  month = "27 - 31~" # aug,
  pages = {664 - 668},
  ISSN = {2219-5491},
  acronym = "EUSIPCO",
  url_link = "https://ieeexplore.ieee.org/document/6334204",
  url_paper = "http://honeine.fr/paul/publi/12.eusipco.1c_online.pdf",
  abstract = {In this paper, we investigate a novel online one-class classification method. We consider a least-squares optimization problem, where the model complexity is controlled by the coherence criterion as a sparsification rule. This criterion is coupled with a simple updating rule for online learning, which yields a low computational demanding algorithm. Experiments conducted on time series illustrate the relevance of our approach to existing methods.},
  keywords = {machine learning, sparsity, adaptive filtering, one-class, cybersecurity, learning (artificial intelligence), least squares approximations, optimisation, pattern classification, online one-class machines, online one-class classification method, least-squares optimization problem, online learning, low computational demanding algorithm, Support vector machines, Coherence, Dictionaries, Kernel, Time series analysis, Optimization, Signal processing algorithms, kernel methods, one-class classification, coherence parameter},
}
In this paper, we investigate a novel online one-class classification method. We consider a least-squares optimization problem, where the model complexity is controlled by the coherence criterion as a sparsification rule. This criterion is coupled with a simple updating rule for online learning, which yields a low computational demanding algorithm. Experiments conducted on time series illustrate the relevance of our approach to existing methods.
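The coherence-based sparsification rule that several entries above rely on admits a very small illustration: a new sample enters the dictionary only if its largest kernel value against the stored atoms stays below a threshold. This is a hedged sketch, not the authors' code; the Gaussian kernel, the threshold `mu0`, and the function names are choices of this example.

```python
import numpy as np

def gaussian_kernel(a, b, sigma=1.0):
    # unit-norm Gaussian kernel, so k(x, x) = 1 and coherence lies in (0, 1]
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def coherence_sparsify(stream, mu0=0.5, sigma=1.0):
    """Grow a dictionary online: keep a sample only if its coherence with
    the current dictionary (max kernel value) does not exceed mu0."""
    D = []
    for x in stream:
        coh = max((gaussian_kernel(x, d, sigma) for d in D), default=0.0)
        if coh <= mu0:
            D.append(x)
    return np.asarray(D)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(500, 2))
D = coherence_sparsify(X, mu0=0.5)
print(len(D), "dictionary atoms kept out of", len(X))
```

By construction every pair of kept atoms has kernel value at most `mu0`, which is what keeps the model order (and hence the per-sample cost of the online machine) small.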
Dictionary Adaptation for Online Prediction of Time Series Data with Kernels.
Saidé, C.; Lengellé, R.; Honeine, P.; Richard, C.; and Achkar, R.
In Proc. IEEE workshop on Statistical Signal Processing (SSP), pages 604 - 607, Ann Arbor, Michigan, USA, 5 - 8 August 2012.
@INPROCEEDINGS{12.ssp.dictionary,
  author = "Chafic Saidé and Régis Lengellé and Paul Honeine and Cédric Richard and Roger Achkar",
  title = "Dictionary Adaptation for Online Prediction of Time Series Data with Kernels",
  booktitle = "Proc. IEEE workshop on Statistical Signal Processing (SSP)",
  address = "Ann Arbor, Michigan, USA",
  year = "2012",
  month = "5 - 8~" # aug,
  pages = {604 - 607},
  doi = {10.1109/SSP.2012.6319772},
  acronym = "SSP",
  url_link = "https://ieeexplore.ieee.org/document/6319772",
  url_paper = "http://honeine.fr/paul/publi/12.ssp.dictionary.pdf",
  abstract = {During the last few years, kernel methods have been very useful to solve nonlinear identification problems. The main drawback of these methods resides in the fact that the number of elements of the kernel development, i.e., the size of the dictionary, increases with the number of input data, making the solution not suitable for online problems especially time series applications. Recently, Richard, Bermudez and Honeine investigated a method where the size of the dictionary is controlled by a coherence criterion. In this paper, we extend this method by adjusting the dictionary elements in order to reduce the residual error and/or the average size of the dictionary. The proposed method is implemented for time series prediction using the kernel-based affine projection algorithm.},
  keywords = {machine learning, sparsity, adaptive filtering, signal processing, time series, dictionary adaptation, online prediction, time series data, nonlinear identification problems, kernel development, coherence criterion, dictionary elements, residual error, time series prediction, kernel-based affine projection algorithm, Dictionaries, Kernel, Coherence, Time series analysis, Vectors, Projection algorithms, Approximation error, Nonlinear adaptive filters, nonlinear systems, kernel methods},
  ISSN = {2373-0803},
}
\n During the last few years, kernel methods have been very useful to solve nonlinear identification problems. The main drawback of these methods resides in the fact that the number of elements of the kernel development, i.e., the size of the dictionary, increases with the number of input data, making the solution not suitable for online problems, especially time series applications. Recently, Richard, Bermudez and Honeine investigated a method where the size of the dictionary is controlled by a coherence criterion. In this paper, we extend this method by adjusting the dictionary elements in order to reduce the residual error and/or the average size of the dictionary. The proposed method is implemented for time series prediction using the kernel-based affine projection algorithm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n One-class Machines Based on the Coherence Criterion.\n \n \n \n \n\n\n \n Noumir, Z.; Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Proc. IEEE workshop on Statistical Signal Processing (SSP), pages 600 - 603, Ann Arbor, Michigan, USA, 5 - 8 August 2012. \n \n\n\n\n
\n\n\n\n \n \n \"One-class link\n  \n \n \n \"One-class paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.ssp.one_class,\n   author =  "Zineb Noumir and Paul Honeine and Cédric Richard",\n   title =  "One-class Machines Based on the Coherence Criterion",\n   booktitle =  "Proc. IEEE workshop on Statistical Signal Processing (SSP)",\n   address =  "Ann Arbor, Michigan, USA",\n   year  =  "2012",\n   month =  "5 - 8~" # aug,\n   pages={600 - 603},\n   doi={10.1109/SSP.2012.6319771}, \n   acronym =  "SSP",\n   url_link= "https://ieeexplore.ieee.org/document/6319771",\n   url_paper   =  "http://honeine.fr/paul/publi/12.ssp.1c_coherence.pdf",\n   abstract={The one-class classification problem is often addressed by solving a constrained quadratic optimization problem, in the same spirit as support vector machines. In this paper, we derive a novel one-class classification approach, by investigating an original sparsification criterion. This criterion, known as the coherence criterion, is based on a fundamental quantity that describes the behavior of dictionaries in sparse approximation problems. The proposed framework allows us to derive new theoretical results. We associate the coherence criterion with a one-class classification algorithm by solving a least-squares optimization problem. We also provide an adaptive updating scheme. 
Experiments are conducted on real datasets and time series, illustrating the relevance of our approach to existing methods in both accuracy and computational efficiency.}, \n   keywords={machine learning, sparsity, adaptive filtering, one-class, cybersecurity, approximation theory, constraint handling, dictionaries, least squares approximations, pattern classification, quadratic programming, support vector machines, time series, one-class classification approach, constrained quadratic optimization problem, support vector machine, dictionary, sparse approximation problem, least-square optimization problem, time series, dataset, Support vector machines, Coherence, Kernel, Training, Time series analysis, Optimization, Vectors, support vector machines, machine learning, kernel methods, one-class classification}, \n   ISSN={2373-0803}, \n}\n
\n
\n\n\n
\n The one-class classification problem is often addressed by solving a constrained quadratic optimization problem, in the same spirit as support vector machines. In this paper, we derive a novel one-class classification approach, by investigating an original sparsification criterion. This criterion, known as the coherence criterion, is based on a fundamental quantity that describes the behavior of dictionaries in sparse approximation problems. The proposed framework allows us to derive new theoretical results. We associate the coherence criterion with a one-class classification algorithm by solving a least-squares optimization problem. We also provide an adaptive updating scheme. Experiments are conducted on real datasets and time series, illustrating the relevance of our approach to existing methods in both accuracy and computational efficiency.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On simple one-class classification methods.\n \n \n \n \n\n\n \n Noumir, Z.; Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Proc. IEEE International Symposium on Information Theory (ISIT), pages 2022 - 2026, MIT, Cambridge (MA), USA, 1 - 6 July 2012. \n \n\n\n\n
\n\n\n\n \n \n \"On link\n  \n \n \n \"On paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.isit,\n   author =  "Zineb Noumir and Paul Honeine and Cédric Richard",\n   title =  "On simple one-class classification methods",\n   booktitle =  "Proc. IEEE International Symposium on Information Theory (ISIT)",\n   address =  "MIT, Cambridge (MA), USA",\n   year  =  "2012",\n   month =  "1 - 6~" # jul,\n   pages={2022 - 2026},\n   doi={10.1109/ISIT.2012.6283685}, \n   ISSN={2157-8095},\n   acronym =  "ISIT",\n   url_link= "https://ieeexplore.ieee.org/document/6283685",\n   url_paper   =  "http://honeine.fr/paul/publi/12.isit.1c_distance.pdf",\n   abstract={The one-class classification has been successfully applied in many communication, signal processing, and machine learning tasks. This problem, as defined by the one-class SVM approach, consists in identifying a sphere enclosing all (or the most) of the data. The classical strategy to solve the problem considers a simultaneous estimation of both the center and the radius of the sphere. In this paper, we study the impact of separating the estimation problem. It turns out that simple one-class classification methods can be easily derived, by considering a least-squares formulation. The proposed framework allows us to derive some theoretical results, such as an upper bound on the probability of false detection. The relevance of this work is illustrated on well-known datasets.}, \n   keywords={machine learning, sparsity, adaptive filtering, one-class, cybersecurity, estimation theory, learning (artificial intelligence), least squares approximations, pattern classification, signal processing, support vector machines, one-class classification methods, communication, signal processing, machine learning tasks, SVM approach, estimation problem, least-squares formulation, Support vector machines, Kernel, Estimation, Training, Optimization, Machine learning, Mathematical model}, \n}\n
\n
\n\n\n
\n One-class classification has been successfully applied in many communication, signal processing, and machine learning tasks. This problem, as defined by the one-class SVM approach, consists in identifying a sphere enclosing all (or most of) the data. The classical strategy to solve the problem considers a simultaneous estimation of both the center and the radius of the sphere. In this paper, we study the impact of separating the estimation problem. It turns out that simple one-class classification methods can be easily derived, by considering a least-squares formulation. The proposed framework allows us to derive some theoretical results, such as an upper bound on the probability of false detection. The relevance of this work is illustrated on well-known datasets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Detection of contamination in water distribution network.\n \n \n \n \n\n\n \n Noumir, Z.; Guépié, B. K.; Fillatre, L.; Honeine, P.; Nikiforov, I.; Snoussi, H.; Richard, C.; Jarrige, P.; and Campan, F.\n\n\n \n\n\n\n In 2nd International Conference SimHydro: New trends in simulation hydroinformatics and 3D modeling, pages 1 - 8, Nice, France, 12-14 September 2012. \n \n\n\n\n
\n\n\n\n \n \n \"Detection link\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.simhydro,\n   author =  "Zineb Noumir and Blaise Kévin Guépié and Lionel Fillatre and Paul Honeine and Igor Nikiforov and Hichem Snoussi and Cédric Richard and Pierre-Antoine Jarrige and Francis Campan",\n   title =  "Detection of contamination in water distribution network",\n   booktitle =  "2nd International Conference SimHydro: New trends in simulation hydroinformatics and 3D modeling",\n   address =  "Nice, France",\n   month =  "12-14~" # sep,\n   year  =  "2012",\n   pages ={1 - 8},\n   acronym =  "Hydro",\n   isbn="978-981-4451-42-0",\n   doi="10.1007/978-981-4451-42-0_12",\n   url_link= "http://link.springer.com/chapter/10.1007\\%2F978-981-4451-42-0_12",\n   abstract = "Monitoring drinking water is an important public health problem because safe drinking water is essential for human life. Many procedures have been developed for monitoring water quality in water treatment plants for years. Monitoring of water distribution systems has received less attention. The goal of this communication is to study the problem of drinking water safety by ensuring the monitoring of the distribution network from water plant to customers. The system is based on the observation of residual chlorine concentrations which are provided by the sensor network. The complexity of the detection problem is due to the water distribution network complexity and dynamic profiles of water consumptions. The onset time and geographic location of water contamination are unknown. Its duration is also unknown but finite. Moreover, the residual chlorine concentrations, which are modified by the contamination, are also time dependent since they are functions of water consumptions. Two approaches for detection are presented. The first one, namely the parametric approach, exploits the hydraulic model to compute the nominal residual chlorine concentrations. The second one, namely the nonparametric approach, is a statistical methodology exploiting historical data. 
Finally, the probable area of introduction of the pollutant and the propagation of the pollution are computed and displayed to operational users.",\n   keywords =  "machine learning, sparsity, adaptive filtering, one-class, cybersecurity, Drinking water, Water pollution, Parametric detection, One-class classification, Field experiment, Sensor ",\n}%   url_paper  =  "http://honeine.fr/paul/publi/12.simhydro_quality_monitoring.pdf",\n   \n
\n
\n\n\n
\n Monitoring drinking water is an important public health problem because safe drinking water is essential for human life. Many procedures have been developed for monitoring water quality in water treatment plants for years. Monitoring of water distribution systems has received less attention. The goal of this communication is to study the problem of drinking water safety by ensuring the monitoring of the distribution network from water plant to customers. The system is based on the observation of residual chlorine concentrations which are provided by the sensor network. The complexity of the detection problem is due to the water distribution network complexity and dynamic profiles of water consumptions. The onset time and geographic location of water contamination are unknown. Its duration is also unknown but finite. Moreover, the residual chlorine concentrations, which are modified by the contamination, are also time dependent since they are functions of water consumptions. Two approaches for detection are presented. The first one, namely the parametric approach, exploits the hydraulic model to compute the nominal residual chlorine concentrations. The second one, namely the nonparametric approach, is a statistical methodology exploiting historical data. Finally, the probable area of introduction of the pollutant and the propagation of the pollution are computed and displayed to operational users.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Prediction of Rain Attenuation Series based on Discretized Spectral Model.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; Honeine, P.; and Tourneret, J.\n\n\n \n\n\n\n In Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 2407-2410, Munich, Germany, 22 - 27 July 2012. \n \n\n\n\n
\n\n\n\n \n \n \"Prediction link\n  \n \n \n \"Prediction paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.igarss.rain,\n   author =  "Jie Chen and Cédric Richard and Paul Honeine and Jean-Yves Tourneret",\n   title =  "Prediction of Rain Attenuation Series based on Discretized Spectral Model",\n   booktitle =  "Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS)",\n   address =  "Munich, Germany",\n   year  =  "2012",\n   month =  "22 - 27~" # jul,\n   pages={2407-2410}, \n   doi={10.1109/IGARSS.2012.6351006}, \n   ISSN={2153-6996},\n   acronym =  "IGARSS",\n   url_link= "https://ieeexplore.ieee.org/document/6351006",\n   url_paper   =  "http://honeine.fr/paul/publi/12.igarss.rain.pdf",\n   abstract={The spectral model is simple and efficient for modeling the rain attenuation which occurs in satellite communication channels. The prediction of this attenuation series is a vital step for adaptive coding or adaptive power control, which can improve the efficiency of a communication system. In simulation tasks, the discretized spectral model is usually used for generating the attenuation sequence. For this reason, in this paper we derive the conditional probability distribution of the predicted attenuation based on the discretized spectral model. This predictor can be used as a bound for other linear or nonlinear predictors of this model.}, \n   keywords={machine learning, adaptive filtering, atmospheric techniques, rain, rain attenuation series, discretized spectral model, rain attenuation model, satellite communication channels, adaptive coding, adaptive power control, communication system, attenuation sequence, conditional probability distribution, nonlinear predictor, linear predictor, Attenuation, Rain, Predictive models, Prediction algorithms, Adaptation models, Computational modeling, Least squares approximation},\n}\n
\n
\n\n\n
\n The spectral model is simple and efficient for modeling the rain attenuation which occurs in satellite communication channels. The prediction of this attenuation series is a vital step for adaptive coding or adaptive power control, which can improve the efficiency of a communication system. In simulation tasks, the discretized spectral model is usually used for generating the attenuation sequence. For this reason, in this paper we derive the conditional probability distribution of the predicted attenuation based on the discretized spectral model. This predictor can be used as a bound for other linear or nonlinear predictors of this model.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hyperspectral image unmixing using manifold learning: methods derivations and comparative tests.\n \n \n \n \n\n\n \n Nguyen, N. H.; Richard, C.; Honeine, P.; and Theys, C.\n\n\n \n\n\n\n In Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 3086 - 3089, Munich, Germany, 22 - 27 July 2012. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"Hyperspectral link\n  \n \n \n \"Hyperspectral paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.igarss.simplex,\n   author =  "Nguyen Hoang Nguyen and Cédric Richard and Paul Honeine and Céline Theys",\n   title =  "Hyperspectral image unmixing using manifold learning: methods derivations and comparative tests",\n   booktitle =  "Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS)",\n   address =  "Munich, Germany",\n   year  =  "2012",\n   month =  "22 - 27~" # jul,\n   organization = {IEEE},\n   pages = {3086 - 3089},\n   doi={10.1109/IGARSS.2012.6350773}, \n   ISSN={2153-6996},\n   acronym =  "IGARSS",\n   url_link= "https://ieeexplore.ieee.org/document/6350773",\n   url_paper   =  "http://honeine.fr/paul/publi/12.igarss.simplex.pdf",\n   abstract={In hyperspectral image analysis, pixels are mixtures of spectral components associated to pure materials. Although the linear mixture model is the mostly studied case, nonlinear techniques have been proposed to overcome its limitations. In this paper, a manifold learning approach is used as a dimensionality-reduction step to deal with non-linearities beforehand, or is integrated directly in the endmember extraction and abundance estimation steps using geodesic distances. Simulation results show that these methods improve the precision of estimation in severely nonlinear cases.}, \n   keywords={differential geometry, geophysical image processing, learning (artificial intelligence), nonlinear estimation, hyperspectral image unmixing analysis, manifold learning method, comparative testing, spectral component mixture, linear mixture model, nonlinear technique, dimensionality-reduction step, endmember extraction, abundance estimation step, geodesic distance, Estimation, Manifolds, Signal processing algorithms, Hyperspectral imaging, Equations, Mathematical model, Materials},    \n}\n
\n
\n\n\n
\n In hyperspectral image analysis, pixels are mixtures of spectral components associated with pure materials. Although the linear mixture model is the most studied case, nonlinear techniques have been proposed to overcome its limitations. In this paper, a manifold learning approach is used as a dimensionality-reduction step to deal with non-linearities beforehand, or is integrated directly in the endmember extraction and abundance estimation steps using geodesic distances. Simulation results show that these methods improve the precision of estimation in severely nonlinear cases.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n VigiRes'Eau.\n \n \n \n\n\n \n Deveughèle, S.; Yin, H.; Fillatre, L.; Honeine, P.; Nikiforov, I.; Richard, C.; Snoussi, H.; Azzaoui, N.; Guépié, B. K.; and Noumir, Z.\n\n\n \n\n\n\n In Proc. 10th International Conference on Hydroinformatics, Hamburg, Germany, 14-18 July 2012. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.hydro,\n   author =  "Stéphane Deveughèle and Huan Yin and Lionel Fillatre and Paul Honeine and Igor Nikiforov and Cédric Richard and Hichem Snoussi and Nourddine Azzaoui and Blaise Kévin Guépié and Zineb Noumir",\n   title =  "VigiRes'Eau",\n   booktitle =  "Proc. 10th  International Conference on Hydroinformatics",\n   address =  "Hamburg, Germany",\n   month =  "14-18~" # jul,\n   year  =  "2012",\n   keywords =  "machine learning, sparsity, adaptive filtering, one-class, cybersecurity",\n   acronym =  "Hydro",\n}   %url_paper =  "http://honeine.fr/paul/publi/12.hydro.pdf",\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Nonlinear unmixing of hyperspectral images based on multi-kernel learning.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; and Honeine, P.\n\n\n \n\n\n\n In Proc. IEEE Workshop on Hyperspectral Image and Signal Processing : Evolution in Remote Sensing (WHISPERS), pages 1-4, Shanghai, China, 4 - 7 June 2012. \n \n\n\n\n
\n\n\n\n \n \n \"Nonlinear link\n  \n \n \n \"Nonlinear paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.whispers.nonlinear,\n   author =  "Jie Chen and Cédric Richard and Paul Honeine",\n   title =  "Nonlinear unmixing of hyperspectral images based on multi-kernel learning",\n   booktitle =  "Proc. IEEE Workshop on Hyperspectral Image and Signal Processing : Evolution in Remote Sensing (WHISPERS)",\n   address =  "Shanghai, China",\n   year  =  "2012",\n   month =  "4 - 7~" # jun,\n   acronym =  "WHISPERS",\n   pages={1-4}, \n   url_link= "https://ieeexplore.ieee.org/document/6874231",\n   url_paper  =  "http://honeine.fr/paul/publi/12.whispers.nonlinear.pdf",\n   abstract={Nonlinear unmixing of hyperspectral images has generated considerable interest among researchers, as it may overcome some inherent limitations of the linear mixing model. In this paper, we formulate the problem of estimating abundances of a nonlinear mixture of hyperspectral data based on a new multi-kernel learning paradigm. Experiments are conducted using both synthetic and real images in order to illustrate the effectiveness of the proposed method.}, \n   keywords={geophysical image processing, learning (artificial intelligence), nonlinear unmixing, hyperspectral images, linear mixing model, hyperspectral data, multikernel learning paradigm, synthetic images, real images, Abstracts, Manganese, Vectors, Hyperspectral image, nonlinear unmixing, multi-kernel learning}, \n   doi={10.1109/WHISPERS.2012.6874231}, \n   ISSN={2158-6268}, \n}\n
\n
\n\n\n
\n Nonlinear unmixing of hyperspectral images has generated considerable interest among researchers, as it may overcome some inherent limitations of the linear mixing model. In this paper, we formulate the problem of estimating abundances of a nonlinear mixture of hyperspectral data based on a new multi-kernel learning paradigm. Experiments are conducted using both synthetic and real images in order to illustrate the effectiveness of the proposed method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Indoor localization using polar intervals in wireless sensor networks.\n \n \n \n \n\n\n \n Mourad-Chehade, F.; Honeine, P.; and Snoussi, H.\n\n\n \n\n\n\n In Proc. 19th International Conference on Telecommunications (ICT), pages 1 - 6, Jounieh, Lebanon, 23 - 25 April 2012. \n \n\n\n\n
\n\n\n\n \n \n \"Indoor link\n  \n \n \n \"Indoor paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.ict.wsn,\n   author =  "Farah Mourad-Chehade and Paul Honeine and Hichem Snoussi",\n   title =  "Indoor localization using polar intervals in wireless sensor networks",\n   booktitle =  "Proc. 19th International Conference on Telecommunications (ICT)",\n   address =  "Jounieh, Lebanon",\n   month =  "23 - 25~" # apr,\n   pages   =  "1 - 6",\n   year  =  "2012",\n   doi={10.1109/ICTEL.2012.6221222},\n   acronym =  "ICT",\n   url_link= "https://ieeexplore.ieee.org/document/6221222",\n   url_paper   =  "http://honeine.fr/paul/publi/12.ict.wsn.pdf",\n   abstract={Wireless sensor networks are networks composed of a large number of distributed sensors, connected via wireless links. This paper deals with the problem of localization in wireless sensor networks. Such a problem becomes challenging in indoor environments, where signals of Global Positioning Systems are no more reliable. In this paper, the localization problem is defined using connectivity measurements. The proposed technique consists thus of estimating unknown sensors positions using known position information of neighboring sensors. The estimation is then performed using polar intervals. The estimated positions are thus two-dimensional intervals defined in some polar coordinates system. Using intervals, the proposed approach performs an outer estimation of the solution, leading to estimates covering for sure the actual positions of the sensors.}, \n   keywords={Global Positioning System, indoor radio, radio links, wireless sensor networks, indoor localization, polar intervals, wireless sensor networks, distributed sensors, wireless links, Global Positioning Systems, two-dimensional intervals, polar coordinates system, Robot sensing systems, Wireless sensor networks, Estimation error, Accuracy, Mobile communication, connectivity measurements, localization problem, polar intervals, wireless sensor network}, \n}\n
\n
\n\n\n
\n Wireless sensor networks are networks composed of a large number of distributed sensors, connected via wireless links. This paper deals with the problem of localization in wireless sensor networks. Such a problem becomes challenging in indoor environments, where signals of Global Positioning Systems are no longer reliable. In this paper, the localization problem is defined using connectivity measurements. The proposed technique thus consists of estimating unknown sensor positions using known position information of neighboring sensors. The estimation is then performed using polar intervals. The estimated positions are thus two-dimensional intervals defined in some polar coordinate system. Using intervals, the proposed approach performs an outer estimation of the solution, leading to estimates guaranteed to cover the actual positions of the sensors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modeling electrocardiogram using Yule-Walker equations and kernel machines.\n \n \n \n \n\n\n \n Kallas, M.; Francis, C.; Honeine, P.; Amoud, H.; and Richard, C.\n\n\n \n\n\n\n In Proc. 19th International Conference on Telecommunications (ICT), pages 1-5, Jounieh, Lebanon, 23 - 25 April 2012. \n \n\n\n\n
\n\n\n\n \n \n \"Modeling link\n  \n \n \n \"Modeling paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.ict.yulewalker,\n   author =  "Maya Kallas and Clovis Francis and Paul Honeine and Hassan Amoud and Cédric Richard",\n   title =  "Modeling electrocardiogram using Yule-Walker equations and kernel machines",\n   booktitle =  "Proc. 19th International Conference on Telecommunications (ICT)",\n   address =  "Jounieh, Lebanon",\n   month =  "23 - 25~" # apr,\n   year  =  "2012",\n   doi={10.1109/ICTEL.2012.6221217},\n   acronym =  "ICT",\n   url_link= "https://ieeexplore.ieee.org/document/6221217",\n   url_paper  =  "http://honeine.fr/paul/publi/12.ict.ecg_yulewalker.pdf",\n   pages={1-5}, \n   abstract={One may monitor the heart normal activity by analyzing the electrocardiogram. We propose in this paper to combine the principle of kernel machines, that maps data into a high dimensional feature space, with the autoregressive (AR) technique defined using the Yule-Walker equations, which predicts future samples using a combination of some previous samples. A pre-image technique is applied in order to get back to the original space in order to interpret the predicted sample. The relevance of the proposed method is illustrated on real electrocardiogram from the MIT benchmark.}, \n   keywords={machine learning, pre-image problem, adaptive filtering, autoregressive processes, electrocardiography, medical signal processing, electrocardiogram modeling, Yule-Walker equations, kernel machines principle, high dimensional feature space, autoregressive technique, preimage technique, Kernel, Mathematical model, Equations, Electrocardiography, Heart, Time series analysis, Autoregressive processes, kernel machines, ECG signals, autoregressive model, nonlinear models, pre-image problem}, \n}\n\n
\n
\n\n\n
\n One may monitor the heart's normal activity by analyzing the electrocardiogram. We propose in this paper to combine the principle of kernel machines, which maps data into a high-dimensional feature space, with the autoregressive (AR) technique defined using the Yule-Walker equations, which predicts future samples using a combination of some previous samples. A pre-image technique is applied to get back to the original space in order to interpret the predicted sample. The relevance of the proposed method is illustrated on real electrocardiograms from the MIT benchmark.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-Class SVM Classification Combined with Kernel PCA Feature Extraction of ECG Signals.\n \n \n \n \n\n\n \n Kallas, M.; Francis, C.; Kanaan, L.; Merheb, D.; Honeine, P.; and Amoud, H.\n\n\n \n\n\n\n In Proc. 19th International Conference on Telecommunications (ICT), pages 1-5, Jounieh, Lebanon, 23 - 25 April 2012. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-Class link\n  \n \n \n \"Multi-Class paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.ict.classification,\n   author =  "Maya Kallas and Clovis Francis and Lara Kanaan and Dalia Merheb and Paul Honeine and Hassan Amoud",\n   title =  "Multi-Class SVM Classification Combined with Kernel PCA Feature Extraction of ECG Signals",\n   booktitle =  "Proc. 19th International Conference on Telecommunications (ICT)",\n   address =  "Jounieh, Lebanon",\n   month =  "23 - 25~" # apr,\n   year  =  "2012",\n   doi={10.1109/ICTEL.2012.6221261},\n   keywords  =  "machine learning, multiclass",\n   acronym =  "ICT",\n   pages={1-5}, \n   url_link= "https://ieeexplore.ieee.org/document/6221261",\n   url_paper  =  "http://honeine.fr/paul/publi/12.ict.classification.pdf",\n   abstract={The cardiovascular diseases are one of the main causes of death around the world. Automatic detection and classification of electrocardiogram (ECG) signals are important for diagnosis of cardiac irregularities. This paper proposes to apply the Support Vector Machines (SVM) classification, to diagnose heartbeat abnormalities, after performing feature extraction on the ECG signals. The experiments were conducted on the ECG signals from the MIT-BIH arrhythmia database to classify two different abnormalities and normal beats. Kernel Principal Component Analysis (KPCA) is used for feature extraction since it performs better than PCA on ECG signals due to their nonlinear structures. This is demonstrated in a previous work. Two multi-SVM classification schemes are used, One-Against-One (OAO) and One-Against-All (OAA), to classify the ECG signals into different disease categories. The experiments conducted show that SVM combined with KPCA performs better than that without feature extraction. Moreover, our results show a better performance in Gaussian KPCA feature extraction with respect to other kernels. 
Furthermore, the performance of Gaussian OAA-SVM combined with KPCA has higher average accuracy than Gaussian OAA-SVM in ECG classification.}, \n   keywords={electrocardiography, feature extraction, medical diagnostic computing, medical signal detection, principal component analysis, signal classification, support vector machines, multiclass SVM classification, kernel PCA feature extraction, ECG signals, cardiovascular diseases, automatic electrocardiogram signal detection, automatic electrocardiogram signal classification, cardiac irregularity diagnosis, support vector machines classification, heartbeat abnormality diagnosis, MIT-BIH arrhythmia database, normal beats, kernel principal component analysis, nonlinear structures, multiSVM classification schemes, one-against-one classification, OAO classification, one-against-all classification, OAA classification, disease category, Gaussian KPCA feature extraction, Gaussian OAA-SVM, average accuracy, ECG classification, Support vector machines, Kernel, Principal component analysis, Electrocardiography, Feature extraction, Accuracy, Heart beat, kernel machines, ECG signals, Kernel Principal Component Analysis, Support Vector Machines, Multi-class classification}, \n}\n\n
\n
\n\n\n
\n The cardiovascular diseases are one of the main causes of death around the world. Automatic detection and classification of electrocardiogram (ECG) signals are important for diagnosis of cardiac irregularities. This paper proposes to apply the Support Vector Machines (SVM) classification, to diagnose heartbeat abnormalities, after performing feature extraction on the ECG signals. The experiments were conducted on the ECG signals from the MIT-BIH arrhythmia database to classify two different abnormalities and normal beats. Kernel Principal Component Analysis (KPCA) is used for feature extraction since it performs better than PCA on ECG signals due to their nonlinear structures. This is demonstrated in a previous work. Two multi-SVM classification schemes are used, One-Against-One (OAO) and One-Against-All (OAA), to classify the ECG signals into different disease categories. The experiments conducted show that SVM combined with KPCA performs better than that without feature extraction. Moreover, our results show a better performance in Gaussian KPCA feature extraction with respect to other kernels. Furthermore, the performance of Gaussian OAA-SVM combined with KPCA has higher average accuracy than Gaussian OAA-SVM in ECG classification.\n
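The pipeline described in this abstract (Gaussian kernel PCA features followed by a multi-class SVM) can be sketched with scikit-learn. The three-class synthetic signals below merely stand in for the MIT-BIH heartbeats, and every parameter value here is an illustrative assumption, not the paper's setting:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for ECG beats: three classes of noisy sinusoids
# (the paper classifies MIT-BIH heartbeats; real data would be loaded here).
n, t = 150, np.linspace(0, 1, 32)
X = np.vstack([np.sin(2 * np.pi * (k + 1) * t)
               + 0.3 * rng.standard_normal((n, t.size)) for k in range(3)])
y = np.repeat([0, 1, 2], n)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Gaussian (RBF) kernel PCA for feature extraction, then multi-class SVM.
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.1).fit(Xtr)
clf = SVC(kernel="rbf").fit(kpca.transform(Xtr), ytr)
print("test accuracy:", clf.score(kpca.transform(Xte), yte))
```

SVC handles multi-class problems with the one-against-one (OAO) scheme internally; wrapping it in OneVsRestClassifier would give the one-against-all (OAA) variant compared in the paper.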
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Prediction of time series using yule-walker equations with kernels.\n \n \n \n \n\n\n \n Kallas, M.; Honeine, P.; Richard, C.; Francis, C.; and Amoud, H.\n\n\n \n\n\n\n In Proc. 37th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2185 - 2188, Kyoto, Japan, 25 - 30 March 2012. \n \n\n\n\n
\n\n\n\n \n \n \"Prediction link\n  \n \n \n \"Prediction paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.icassp.yulewalker,\n   author =  "Maya Kallas and Paul Honeine and Cédric Richard and Clovis Francis and Hassan Amoud",\n   title =  "Prediction of time series using yule-walker equations with kernels",\n   booktitle =  "Proc. 37th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",\n   address =  "Kyoto, Japan",\n   month =  "25 - 30~" # mar,\n   year  =  "2012",\n   pages={2185 - 2188}, \n   doi={10.1109/ICASSP.2012.6288346}, \n   ISSN={1520-6149},\n   acronym =  "ICASSP",\n   url_link= "https://ieeexplore.ieee.org/document/6288346",\n   url_paper   =  "http://honeine.fr/paul/publi/12.icassp.yulewalker.pdf",\n   abstract={The autoregressive (AR) model is a well-known technique to analyze time series. The Yule-Walker equations provide a straightforward connection between the AR model parameters and the covariance function of the process. In this paper, we propose a nonlinear extension of the AR model using kernel machines. To this end, we explore the Yule-Walker equations in the feature space, and show that the model parameters can be estimated using the concept of expected kernels. Finally, in order to predict once the model identified, we solve a pre-image problem by getting back from the feature space to the input space. We also give new insights into the convexity of the pre-image problem. 
The relevance of the proposed method is evaluated on several time series.}, \n   keywords={machine learning, pre-image problem, adaptive filtering, autoregressive processes, covariance analysis, prediction theory, time series, time series prediction, Yule-Walker equation, autoregressive model nonlinear extension, covariance function, kernel machine, feature space, model parameter, preimage problem convexity, Mathematical model, Kernel, Time series analysis, Equations, Predictive models, Support vector machines, Signal processing, autoregressive model, Yule-Walker equations, expected kernels, pre-image problem, nonlinear model}, \n}\n
\n
\n\n\n
\n The autoregressive (AR) model is a well-known technique to analyze time series. The Yule-Walker equations provide a straightforward connection between the AR model parameters and the covariance function of the process. In this paper, we propose a nonlinear extension of the AR model using kernel machines. To this end, we explore the Yule-Walker equations in the feature space, and show that the model parameters can be estimated using the concept of expected kernels. Finally, in order to predict once the model identified, we solve a pre-image problem by getting back from the feature space to the input space. We also give new insights into the convexity of the pre-image problem. The relevance of the proposed method is evaluated on several time series.\n
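The kernel extension itself is not reproduced here; as a sketch of the linear machinery the paper extends to the feature space, classical Yule-Walker estimation of AR coefficients from sample autocovariances looks as follows (the AR(2) simulation and all numerical values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate a stable AR(2) process: x_t = a1*x_{t-1} + a2*x_{t-2} + noise.
a_true = np.array([0.6, -0.3])
n = 20000
x = np.zeros(n)
for t in range(2, n):
    x[t] = a_true[0] * x[t - 1] + a_true[1] * x[t - 2] \
           + 0.5 * rng.standard_normal()

# Yule-Walker: sample autocovariances r(0)..r(p), then solve R a = r,
# where R is the Toeplitz matrix of autocovariances at lags |i - j|.
p = 2
x = x - x.mean()
r = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a_hat = np.linalg.solve(R, r[1:])
print("estimated AR coefficients:", a_hat)
```

In the paper, the same equations are written with expected kernels in place of the covariances, and prediction additionally requires solving a pre-image problem to return to the input space.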
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A gaussian process regression approach for testing Granger causality between time series data.\n \n \n \n \n\n\n \n Amblard, P.; Michel, O. J.; Richard, C.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 37th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3357 - 3360, Kyoto, Japan, 25 - 30 March 2012. \n \n\n\n\n
\n\n\n\n \n \n \"A link\n  \n \n \n \"A paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.icassp.Granger,\n   author =  "Pierre-Olivier Amblard and Olivier J.J. Michel and Cédric Richard and Paul Honeine",\n   title =  "A gaussian process regression approach for testing Granger causality between time series data",\n   booktitle =  "Proc. 37th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",\n   address =  "Kyoto, Japan",\n   month =  "25 - 30~" # mar,\n   year  =  "2012",\n   pages={3357 - 3360}, \n   doi={10.1109/ICASSP.2012.6288635}, \n   ISSN={1520-6149},\n   keywords  =  "machine learning",\n   acronym =  "ICASSP",\n   url_link= "https://ieeexplore.ieee.org/document/6288635",\n   url_paper   =  "http://honeine.fr/paul/publi/12.icassp.Granger.pdf",\n   abstract={Granger causality considers the question of whether two time series exert causal influences on each other. Causality testing usually relies on prediction, i.e., if the prediction error of the first time series is reduced by taking measurements from the second one into account, then the latter is said to have a causal influence on the former. In this paper, a nonparametric framework based on functional estimation is proposed. Nonlinear prediction is performed via the Bayesian paradigm, using Gaussian processes. Some experiments illustrate the efficiency of the approach.}, \n   keywords={Bayes methods, causality, regression analysis, signal processing, time series, Gaussian process regression approach, Granger causality, time series data, causality testing, functional estimation, nonlinear prediction, Bayesian paradigm, Time series analysis, Vectors, Gaussian processes, Mathematical model, Covariance matrix, Testing, Noise, Granger causality, functional estimation, Gaussian process, reproducing kernel}, \n}\n
\n
\n\n\n
\n Granger causality considers the question of whether two time series exert causal influences on each other. Causality testing usually relies on prediction, i.e., if the prediction error of the first time series is reduced by taking measurements from the second one into account, then the latter is said to have a causal influence on the former. In this paper, a nonparametric framework based on functional estimation is proposed. Nonlinear prediction is performed via the Bayesian paradigm, using Gaussian processes. Some experiments illustrate the efficiency of the approach.\n
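A rough illustration of the prediction-based test, assuming scikit-learn's Gaussian process regressor as the nonlinear predictor: y is predicted from its own past alone, then jointly with x's past, and a drop in residual error is the evidence for a causal influence. The coupled pair below is synthetic, and in-sample residuals are used for brevity; the paper's functional-estimation framework is more careful than this sketch:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
# Coupled pair: y is driven by the past of x, so x "Granger-causes" y.
n = 300
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * np.tanh(x[t - 1]) \
           + 0.1 * rng.standard_normal()

# Restricted model: predict y_t from its own past; full model: add x's past.
past_y = y[:-1].reshape(-1, 1)
past_xy = np.column_stack([y[:-1], x[:-1]])
target = y[1:]

kern = RBF() + WhiteKernel()
gp_r = GaussianProcessRegressor(kernel=kern).fit(past_y, target)
gp_f = GaussianProcessRegressor(kernel=kern).fit(past_xy, target)
mse_r = np.mean((target - gp_r.predict(past_y)) ** 2)
mse_f = np.mean((target - gp_f.predict(past_xy)) ** 2)
print("MSE y-only:", mse_r, "MSE with x:", mse_f)  # error drops with x
```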
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n VigiRes'Eau : Surveiller un réseau de distribution d'eau potable.\n \n \n \n \n\n\n \n Yin, H.; Campan, F.; Guépié, B. K.; Noumir, Z.; Fillatre, L.; Honeine, P.; Nikiforov, I.; Richard, C.; Snoussi, H.; Jarrige, P.; and Morio, C.\n\n\n \n\n\n\n In Workshop Interdisciplinaire sur la Sécurité Globale (WISG'12), (ANR - CSOSG), pages 1-8, Troyes, France, 2012. \n \n\n\n\n
\n\n\n\n \n \n \"VigiRes'Eau paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{12.wisg.vigireseau,\n   author =  "Huan Yin and Francis Campan and Blaise Kévin Guépié and Zineb Noumir and Lionel Fillatre and Paul Honeine and Igor Nikiforov and Cédric Richard and Hichem Snoussi and Pierre-Antoine Jarrige and Cédric Morio",\n   title =  "VigiRes'Eau : Surveiller un réseau de distribution d'eau potable",\n   booktitle =  "Workshop Interdisciplinaire sur la Sécurité Globale (WISG'12), (ANR - CSOSG)",\n   address =  "Troyes, France",\n   year =  "2012",\n   pages =  "1-8",\n   acronym =  "WISG",\n   keywords =  "non-stationarity, adaptive filtering, machine learning, one-class, cybersecurity",\n   url_paper  =  "http://www.honeine.fr/paul/publi/12.wisg.vigireseau.pdf",\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive Least-Squares One-Class Machines.\n \n \n \n \n\n\n \n Noumir, Z.; Honeine, P.; and Richard, C.\n\n\n \n\n\n\n Technical Report UTT-ICD-2012-3-31, Université de technologie de Troyes, Troyes, France, March 2012.\n \n\n\n\n
\n\n\n\n \n \n \"Adaptive link\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@TECHREPORT{12.techreport.one-class,\n   author =        "Zineb Noumir and Paul Honeine and Cédric Richard",\n   title =         {Adaptive Least-Squares One-Class Machines},\n   number =        {UTT-ICD-2012-3-31},\n   institution =   {Université de technologie de Troyes},\n   address =       {Troyes, France},\n   keywords =  "machine learning, sparsity, adaptive filtering, one-class, cybersecurity",\n   month = mar,\n   year  =         {2012},\n   url_link=   {http://honeine.fr/paul/},\n   contact = {paul.honeine@utt.fr},\n   pages =         {1 - 12},\n  abstract =      {In this paper, we derive an adaptive one-class classification algorithm. We propose a least-squares formulation of the problem, where the model complexity is controlled by a parsimony criterion. We consider the linear approximation criterion, and we couple it with a simple adaptive updating algorithm for online learning. We conduct experiments on synthetic datasets and real time series, and illustrate the relevance of the proposed method over existing methods, and show its low computational cost.},\n}\n
\n
\n\n\n
\n In this paper, we derive an adaptive one-class classification algorithm. We propose a least-squares formulation of the problem, where the model complexity is controlled by a parsimony criterion. We consider the linear approximation criterion, and we couple it with a simple adaptive updating algorithm for online learning. We conduct experiments on synthetic datasets and real time series, and illustrate the relevance of the proposed method over existing methods, and show its low computational cost.\n
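The report's least-squares formulation is not reproduced here; as a hedged stand-in for a kernel one-class detector, the sketch below scores points by their RKHS distance to the empirical mean of the training data and thresholds it at a quantile (the parsimony criterion and adaptive online update of the report are not shown, and all parameters are illustrative):

```python
import numpy as np

def gaussian_gram(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 2))     # nominal (one-class) training data
K = gaussian_gram(X, X)

def rkhs_dist2(Z):
    # Squared RKHS distance from phi(z) to the empirical mean of phi(x_i):
    # k(z,z) - (2/n) sum_i k(z,x_i) + (1/n^2) sum_ij k(x_i,x_j).
    return 1.0 - 2.0 * gaussian_gram(Z, X).mean(axis=1) + K.mean()

# Threshold so that about 95% of the training data is accepted as normal.
threshold = np.quantile(rkhs_dist2(X), 0.95)
queries = np.array([[5.0, 5.0], [0.1, -0.2]])
print("flagged as abnormal:", rkhs_dist2(queries) > threshold)
```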
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2011\n \n \n (24)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n A closed-form solution for the pre-image problem in kernel-based machines.\n \n \n \n \n\n\n \n Honeine, P.; and Richard, C.\n\n\n \n\n\n\n Journal of Signal Processing Systems, 65(3): 289 - 299. December 2011.\n \n\n\n\n
\n\n\n\n \n \n \"A link\n  \n \n \n \"A paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{11.mlsp.journal,\n   author =  "Paul Honeine and Cédric Richard",\n   title =  "A closed-form solution for the pre-image problem in kernel-based machines",\n   journal =  "Journal of Signal Processing Systems",\n   year  =  "2011",\n   number =  "3",\n   volume =  "65",\n   month =  dec,\n   pages =  "289 - 299",\n   publisher =  "Springer",\n   address =  "New York, USA",\n   doi = "10.1007/s11265-010-0482-9",\n   url_link= "https://link.springer.com/article/10.1007/s11265-010-0482-9",\n   url_paper= "http://www.honeine.fr/paul/publi/11.mlsp.journal.pdf",\n   keywords  =  "machine learning, pre-image problem, kernel-based machines, pre-image problem, linear algebra, kernel-PCA, nonlinear denoising ",\n   abstract = "The pre-image problem is a challenging research subject pursued by many researchers in machine learning. Kernel-based machines seek some relevant feature in a reproducing kernel Hilbert space (RKHS), optimized in a given sense, such as kernel-PCA algorithms. Operating the latter for denoising requires solving the pre-image problem, i.e. estimating a pattern in the input space whose image in the RKHS is approximately a given feature. Solving the pre-image problem is pioneered by Mika's fixed-point iterative optimization technique. Recent approaches take advantage of prior knowledge provided by the training data, whose coordinates are known in the input space and implicitly in the RKHS, a first step in this direction made by Kwok's algorithm based on multidimensional scaling (MDS). Using such prior knowledge, we propose in this paper a new technique to learn the pre-image, with the elegance that only linear algebra is involved. This is achieved by establishing a coordinate system in the RKHS with an isometry with the input space, i.e. the inner products of training data are preserved using both representations. We suggest representing any feature in this coordinate system, which gives us information regarding its pre-image in the input space. 
We show that this approach provides a natural pre-image technique in kernel-based machines since, on one hand it involves only linear algebra operations, and on the other it can be written directly using the kernel values, without the need to evaluate distances as with the MDS approach. The performance of the proposed approach is illustrated for denoising with kernel-PCA, and compared to state-of-the-art methods on both synthetic datasets and real data handwritten digits."\n}\n
\n
\n\n\n
\n The pre-image problem is a challenging research subject pursued by many researchers in machine learning. Kernel-based machines seek some relevant feature in a reproducing kernel Hilbert space (RKHS), optimized in a given sense, such as kernel-PCA algorithms. Operating the latter for denoising requires solving the pre-image problem, i.e. estimating a pattern in the input space whose image in the RKHS is approximately a given feature. Solving the pre-image problem is pioneered by Mika's fixed-point iterative optimization technique. Recent approaches take advantage of prior knowledge provided by the training data, whose coordinates are known in the input space and implicitly in the RKHS, a first step in this direction made by Kwok's algorithm based on multidimensional scaling (MDS). Using such prior knowledge, we propose in this paper a new technique to learn the pre-image, with the elegance that only linear algebra is involved. This is achieved by establishing a coordinate system in the RKHS with an isometry with the input space, i.e. the inner products of training data are preserved using both representations. We suggest representing any feature in this coordinate system, which gives us information regarding its pre-image in the input space. We show that this approach provides a natural pre-image technique in kernel-based machines since, on one hand it involves only linear algebra operations, and on the other it can be written directly using the kernel values, without the need to evaluate distances as with the MDS approach. The performance of the proposed approach is illustrated for denoising with kernel-PCA, and compared to state-of-the-art methods on both synthetic datasets and real data handwritten digits.\n
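For a runnable flavor of kernel-PCA denoising with a pre-image step, scikit-learn's KernelPCA can learn an approximate inverse map via kernel ridge regression when fit_inverse_transform is enabled. Note that this is a different pre-image technique from the closed-form solution of the paper, and the circle data and parameter values are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(4)
# Noisy samples of a circle; denoising keeps the leading kernel-PCA
# components and maps the result back to the input space (pre-image step).
theta = rng.uniform(0, 2 * np.pi, 300)
clean = np.column_stack([np.cos(theta), np.sin(theta)])
noisy = clean + 0.15 * rng.standard_normal(clean.shape)

kpca = KernelPCA(n_components=4, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=0.1)
denoised = kpca.inverse_transform(kpca.fit_transform(noisy))
print("MSE before:", np.mean((noisy - clean) ** 2),
      "after:", np.mean((denoised - clean) ** 2))
```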
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Non-negative least-mean-square algorithm.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; Bermudez, J. C. M.; and Honeine, P.\n\n\n \n\n\n\n IEEE Transactions on Signal Processing, 59(11): 5225 - 5235. November 2011.\n \n\n\n\n
\n\n\n\n \n \n \"Non-negative link\n  \n \n \n \"Non-negative paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{11.tsp.nnlms,\n   author =  "Jie Chen and Cédric Richard and José C. M. Bermudez and Paul Honeine",\n   title =  "Non-negative least-mean-square algorithm",\n   journal =  "IEEE Transactions on Signal Processing",\n   year  =  "2011",\n   volume =  "59",\n   number =  "11",\n   pages =  "5225 - 5235",\n   month =  nov,\n   url_link="https://ieeexplore.ieee.org/document/5958632",\n   doi="10.1109/TSP.2011.2162508", \n   url_paper  =  "http://www.honeine.fr/paul/publi/11.tsp.nnlms.pdf",\n   keywords  =  "non-negativity, adaptive filtering, gradient methods, least mean squares methods, parameter estimation, signal processing, dynamic system modeling, nonstationary signal processing, parameter estimation, system identification, nonnegativity constraint, nonnegative least-mean-square algorithm, stochastic gradient descent, Least squares approximation, Algorithm design and analysis, Convergence, Equations, Prediction algorithms, Facsimile, Mathematical model, Adaptive filters, adaptive signal processing, least mean square algorithms, nonnegative constraints, transient analysis",\n   abstract={Dynamic system modeling plays a crucial role in the development of techniques for stationary and nonstationary signal processing. Due to the inherent physical characteristics of systems under investigation, nonnegativity is a desired constraint that can usually be imposed on the parameters to estimate. In this paper, we propose a general method for system identification under nonnegativity constraints. We derive the so-called nonnegative least-mean-square algorithm (NNLMS) based on stochastic gradient descent, and we analyze its convergence. Experiments are conducted to illustrate the performance of this approach and consistency with the analysis.}, \n}\n
\n
\n\n\n
\n Dynamic system modeling plays a crucial role in the development of techniques for stationary and nonstationary signal processing. Due to the inherent physical characteristics of systems under investigation, nonnegativity is a desired constraint that can usually be imposed on the parameters to estimate. In this paper, we propose a general method for system identification under nonnegativity constraints. We derive the so-called nonnegative least-mean-square algorithm (NNLMS) based on stochastic gradient descent, and we analyze its convergence. Experiments are conducted to illustrate the performance of this approach and consistency with the analysis.\n
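The NNLMS update described in the abstract can be sketched in a few lines: the step is the usual LMS correction scaled entrywise by the current weight, which keeps the iterates nonnegative for a sufficiently small step size. The toy system-identification setup below (filter length, step size, noise level) is an illustrative assumption, not the paper's experiment:

```python
import numpy as np

rng = np.random.default_rng(5)
# System identification where the target filter weights are nonnegative.
w_true = np.array([0.8, 0.0, 0.4, 0.2])
eta = 0.01
w = np.full(4, 0.25)                  # nonnegative initial guess

for _ in range(50000):
    x = rng.standard_normal(4)        # input regressor
    d = w_true @ x + 0.01 * rng.standard_normal()
    e = d - w @ x                     # a priori estimation error
    # NNLMS: LMS correction scaled by the current weight, entrywise.
    # The multiplicative factor keeps each iterate nonnegative.
    w = w + eta * e * w * x

print("estimated weights:", w)        # approaches w_true, all entries >= 0
```

Note how the entry whose true value is zero decays toward zero multiplicatively instead of oscillating around it, which is the behavior the convergence analysis in the paper studies.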
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Preimage problem in kernel-based machine learning.\n \n \n \n \n\n\n \n Honeine, P.; and Richard, C.\n\n\n \n\n\n\n IEEE Signal Processing Magazine, 28(2): 77 - 88. March 2011.\n \n\n\n\n
\n\n\n\n \n \n \"Preimage link\n  \n \n \n \"Preimage paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{11.spm,\n   author =  "Paul Honeine and Cédric Richard",\n   title =  "Preimage problem in kernel-based machine learning",\n   journal =  "IEEE Signal Processing Magazine",\n   volume = "28",\n   number = "2",\n   pages =  "77 - 88",\n   month =  mar,\n   year  =  "2011",\n   url_link= "https://ieeexplore.ieee.org/document/5714388",\n   doi="10.1109/MSP.2010.939747", \n   url_paper  =  "http://www.honeine.fr/paul/publi/11.spm.pdf",\n   keywords =  "machine learning, pre-image problem, wireless sensor networks, learning (artificial intelligence), principal component analysis, preimage problem, kernel-based machine learning, nonlinear mapping, kernel methods, reverse mapping, principal component analysis, dimensionality-reduction problem, Kernel, Principal component analysis, Machine learning, Noise reduction, Optimization, Classification algorithms, Signal processing algorithms",\n   abstract={While the nonlinear mapping from the input space to the feature space is central in kernel methods, the reverse mapping from the feature space back to the input space is also of primary interest. This is the case in many applications, including kernel principal component analysis (PCA) for signal and image denoising. Unfortunately, it turns out that the reverse mapping generally does not exist and only a few elements in the feature space have a valid preimage in the input space. The preimage problem consists of finding an approximate solution by identifying data in the input space based on their corresponding features in the high dimensional feature space. It is essentially a dimensionality-reduction problem, and both have been intimately connected in their historical evolution, as studied in this article.}, \n}\n
\n
\n\n\n
\n While the nonlinear mapping from the input space to the feature space is central in kernel methods, the reverse mapping from the feature space back to the input space is also of primary interest. This is the case in many applications, including kernel principal component analysis (PCA) for signal and image denoising. Unfortunately, it turns out that the reverse mapping generally does not exist and only a few elements in the feature space have a valid preimage in the input space. The preimage problem consists of finding an approximate solution by identifying data in the input space based on their corresponding features in the high dimensional feature space. It is essentially a dimensionality-reduction problem, and both have been intimately connected in their historical evolution, as studied in this article.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Stationnarité relative et approches connexes.\n \n \n \n \n\n\n \n Flandrin, P.; Richard, C.; Amblard, P.; Borgnat, P.; Honeine, P.; Amoud, H.; Ferrari, A.; Xiao, J.; Moghtaderi, A.; and Ramirez-Cobo, P.\n\n\n \n\n\n\n Traitement du signal, 28(6): 691 - 716. 2011.\n \n\n\n\n
\n\n\n\n \n \n \"Stationnarité paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{11.ts,\n   author =  "Patrick Flandrin and Cédric Richard and Pierre-Olivier Amblard and Pierre Borgnat and Paul Honeine and Hassan Amoud and André Ferrari and Jun Xiao and Azadeh Moghtaderi and Pepa Ramirez-Cobo",\n   title =  "Stationnarité relative et approches connexes",\n   journal =  "Traitement du signal",\n   year  =  "2011",\n   volume =  "28",\n   number =  "6",\n   pages  =  "691 - 716",\n   doi = "10.3166/ts.28.691-716",\n   url_paper   =  "http://www.honeine.fr/paul/publi/11.ts.pdf",\n   keywords =  "stationarity, test, time-frequency, spectral distances, learning, self-similarity",\n   abstract = "The paper is concerned with the approach developed within the ANR Project StaRAC, and it gives an overview of its main results. The objective was to reconsider the concept of stationarity so as to make it operational, allowing for both an interpretation relatively to an observation scale and the possibility of its testing thanks to the use of time-frequency surrogates, as well as to offer various extensions, especially beyond shift invariance.",\n}   \n\n
\n
\n\n\n
\n The paper is concerned with the approach developed within the ANR Project StaRAC, and it gives an overview of its main results. The objective was to reconsider the concept of stationarity so as to make it operational, allowing for both an interpretation relatively to an observation scale and the possibility of its testing thanks to the use of time-frequency surrogates, as well as to offer various extensions, especially beyond shift invariance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A novel kernel-based nonlinear unmixing scheme of hyperspectral images.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 45th Asilomar Conference on Signals, Systems and Computers (ASILOMAR), pages 1898-1902, Pacific Grove (CA), USA, 6 - 9 November 2011. IEEE\n \n\n\n\n
\n\n\n\n \n \n \"A link\n  \n \n \n \"A paper\n  \n \n \n \"A code\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 5 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.asilomar.hype_nonlin,\n   author =  "Jie Chen and Cédric Richard and Paul Honeine",\n   title =  "A novel kernel-based nonlinear unmixing scheme of hyperspectral images",\n   booktitle =  "Proc. 45th Asilomar Conference on Signals, Systems and Computers (ASILOMAR)",\n   address =  "Pacific Grove (CA), USA",\n   year  =  "2011",\n   month =  "6 - 9~" # nov,\n   organization = {IEEE},\n   pages = {1898-1902},\n   acronym =  "Asilomar",\n   url_link= "https://ieeexplore.ieee.org/document/6190353",\n   url_paper  =  "http://honeine.fr/paul/publi/11.asilomar.hype_nonlin.pdf",\n   url_code = "http://honeine.fr/paul/publi/11.asilomar.hype_nonlin.m",\n   Abstract = {In hyperspectral images, pixels are mixtures of spectral components associated to pure materials. Although the linear mixture model is the most studied case, nonlinear models have been taken into consideration to overcome some limitations of the linear model. In this paper, nonlinear hyperspectral unmixing problem is studied through kernel-based learning theory. Endmember components at each band are mapped implicitly in a high feature space, in order to address the nonlinear interaction of photons. Experiment results with both synthetic and real images illustrate the effectiveness of the proposed scheme.},\n   keywords={geophysical image processing, learning (artificial intelligence), kernel-based nonlinear unmixing scheme, hyperspectral images, spectral components, pixels, linear mixture model, nonlinear hyperspectral unmixing problem, kernel-based learning theory, endmember components, photons, Kernel, Materials, Hyperspectral imaging, Signal processing algorithms, Vectors, Algorithm design and analysis}, \n   doi={10.1109/ACSSC.2011.6190353}, \n   ISSN={1058-6393}, \n}\n
\n
\n\n\n
\n In hyperspectral images, pixels are mixtures of spectral components associated to pure materials. Although the linear mixture model is the most studied case, nonlinear models have been taken into consideration to overcome some limitations of the linear model. In this paper, nonlinear hyperspectral unmixing problem is studied through kernel-based learning theory. Endmember components at each band are mapped implicitly in a high feature space, in order to address the nonlinear interaction of photons. Experiment results with both synthetic and real images illustrate the effectiveness of the proposed scheme.\n
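The kernel-based scheme itself is not reproduced here; as a sketch of the linear baseline it generalizes, abundances under the linear mixture model can be estimated by nonnegative least squares. The endmember matrix and abundance vector below are synthetic assumptions:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(6)
# Linear mixture model: a pixel is a nonnegative combination of endmember
# spectra (the paper replaces this with a kernel model for the nonlinear case).
L, R = 50, 3                              # spectral bands, endmembers
M = np.abs(rng.standard_normal((L, R)))   # endmember matrix, one per column
a_true = np.array([0.6, 0.3, 0.1])        # abundances, summing to one
pixel = M @ a_true + 0.001 * rng.standard_normal(L)

a_hat, _ = nnls(M, pixel)                 # abundance estimate under a >= 0
print("estimated abundances:", a_hat)
```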
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Modified Non-Negative LMS Algorithm and its Stochastic Behavior Analysis.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; Bermudez, J. C. M.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 45th Asilomar Conference on Signals, Systems and Computers (ASILOMAR), pages 542-546, Pacific Grove (CA), USA, 6 - 9 November 2011. \n \n\n\n\n
\n\n\n\n \n \n \"A link\n  \n \n \n \"A paper\n  \n \n \n \"A code\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.asilomar.nnlms,\n   author =  "Jie Chen and Cédric Richard and José C. M. Bermudez and Paul Honeine",\n   title =  "A Modified Non-Negative LMS Algorithm and its Stochastic Behavior Analysis",\n   booktitle =  "Proc. 45th Asilomar Conference on Signals, Systems and Computers (ASILOMAR)",\n   address =  "Pacific Grove (CA), USA",\n   year  =  "2011",\n   month =  "6 - 9~" # nov,\n   pages={542-546}, \n   doi={10.1109/ACSSC.2011.6190060}, \n   ISSN={1058-6393},\n   url_link= "https://ieeexplore.ieee.org/document/6190060",\n   url_paper   =  "http://honeine.fr/paul/publi/11.asilomar.nnlms.pdf",\n   url_code   =  "http://honeine.fr/paul/publi/11.asilomar.nnlms.m",\n   acronym =  "Asilomar",\n   abstract={In hyperspectral images, pixels are mixtures of spectral components associated to pure materials. Although the linear mixture model is the most studied case, nonlinear models have been taken into consideration to overcome some limitations of the linear model. In this paper, nonlinear hyperspectral unmixing problem is studied through kernel-based learning theory. Endmember components at each band are mapped implicitly in a high feature space, in order to address the nonlinear interaction of photons. Experiment results with both synthetic and real images illustrate the effectiveness of the proposed scheme.}, \n   keywords={non-negativity, adaptive filtering, feature extraction, learning (artificial intelligence), least mean squares methods, stochastic processes, modified nonnegative LMS algorithm, stochastic behavior analysis, hyperspectral image, spectral components, nonlinear hyperspectral unmixing problem, kernel-based learning theory, end member components, feature space, nonlinear interaction, synthetic images, real images, Convergence, Signal processing algorithms, Mathematical model, Equations, Vectors, Stochastic processes, Approximation methods}, \n}\n
\n
\n\n\n
\n In hyperspectral images, pixels are mixtures of spectral components associated to pure materials. Although the linear mixture model is the most studied case, nonlinear models have been taken into consideration to overcome some limitations of the linear model. In this paper, nonlinear hyperspectral unmixing problem is studied through kernel-based learning theory. Endmember components at each band are mapped implicitly in a high feature space, in order to address the nonlinear interaction of photons. Experiment results with both synthetic and real images illustrate the effectiveness of the proposed scheme.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Comparative Study Of Pre-Image Techniques: The Kernel Autoregressive Case.\n \n \n \n \n\n\n \n Kallas, M.; Honeine, P.; Francis, C.; and Amoud, H.\n\n\n \n\n\n\n In Proc. IEEE workshop on Signal Processing Systems (SiPS), pages 379 - 384, Beirut, Lebanon, 4 - 7 October 2011. \n \n\n\n\n
\n\n\n\n \n \n \"A link\n  \n \n \n \"A paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.sips.kAR,\n   author =  "Maya Kallas and Paul Honeine and Clovis Francis and Hassan Amoud",\n   title =  "A Comparative Study Of Pre-Image Techniques: The Kernel Autoregressive Case",\n   booktitle =  "Proc. IEEE workshop on Signal Processing Systems (SiPS)",\n   address =  "Beirut, Lebanon",\n   year  =  "2011",\n   month =  "4 - 7~" # oct,\n   pages =  "379 - 384",\n   acronym =  "SiPS",\n   url_link= "https://ieeexplore.ieee.org/document/6089006",\n   url_paper   =  "http://honeine.fr/paul/publi/11.SiPS.kAR.pdf",\n   abstract={The autoregressive (AR) model is one of the most used techniques for time series analysis, applied to study stationary as well as non-stationary processes. However, being a linear technique, it is not adapted for nonlinear systems. Recently, we introduced the kernel AR model, a straightforward extension of the AR model to the nonlinear case. It is based on the concept of kernel machines, where data are nonlinearly mapped from the input space to a feature space. The AR model can thus be applied on the mapped data. Nevertheless, in order to predict future samples, one needs to go back to the input space, by solving the pre-image problem. The prediction performance highly depends on the considered pre-image technique. In this paper, a comparative study of several state-of-the-art pre-image techniques is conducted for the kernel AR model, investigating the prediction error with the optimal model parameters, as well as the computational complexity. The conformal map approach presents results as good as the well known fixed-point iterative method, with less computational time. 
This is shown on unidimensional and multidimensional chaotic time series.}, \n   keywords={machine learning, pre-image problem, autoregressive processes, computational complexity, image processing, time series, pre-image technique, kernel autoregressive model, time series analysis, nonstationary process, nonlinear system, kernel machines, optimal model parameter, computational complexity, conformal map, fixed-point iterative method, unidimensional chaotic time series, multidimensional chaotic time series, Kernel, Predictive models, Time series analysis, Computational modeling, Mathematical model, Adaptation models, Polynomials, kernel machines, autoregressive model, nonlinear models, pre-image problem, prediction}, \n   doi={10.1109/SiPS.2011.6089006}, \n   ISSN={2162-3570}, \n}\n
\n
\n\n\n
\n The autoregressive (AR) model is one of the most used techniques for time series analysis, applied to study stationary as well as non-stationary processes. However, being a linear technique, it is not adapted for nonlinear systems. Recently, we introduced the kernel AR model, a straightforward extension of the AR model to the nonlinear case. It is based on the concept of kernel machines, where data are nonlinearly mapped from the input space to a feature space. The AR model can thus be applied on the mapped data. Nevertheless, in order to predict future samples, one needs to go back to the input space, by solving the pre-image problem. The prediction performance highly depends on the considered pre-image technique. In this paper, a comparative study of several state-of-the-art pre-image techniques is conducted for the kernel AR model, investigating the prediction error with the optimal model parameters, as well as the computational complexity. The conformal map approach presents results as good as the well known fixed-point iterative method, with less computational time. This is shown on unidimensional and multidimensional chaotic time series.\n
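For orientation, the linear AR(p) baseline that the kernel model above extends can be fitted by ordinary least squares. The sketch below is illustrative only (all function names are hypothetical) and does not reproduce the paper's kernel AR model or its pre-image step:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of a linear AR(p) model: x[n] ~ sum_k a[k] * x[n-1-k]."""
    X = np.array([x[n - p : n][::-1] for n in range(p, len(x))])  # rows [x[n-1], ..., x[n-p]]
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def predict_next(x, a):
    """One-step-ahead prediction from the last p samples."""
    p = len(a)
    return float(np.dot(a, x[len(x) - p :][::-1]))

# Simulate an AR(2) process and recover its coefficients.
rng = np.random.default_rng(0)
a_true = np.array([0.5, -0.3])
x = np.zeros(2000)
for n in range(2, len(x)):
    x[n] = a_true @ x[n - 2 : n][::-1] + 0.1 * rng.standard_normal()

a_hat = fit_ar(x, 2)
x_next = predict_next(x, a_hat)
```

The kernel AR model of the paper replaces the inner products implicit in this least-squares fit by kernel evaluations, after which a pre-image technique maps the feature-space prediction back to the sample space.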
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n PCA And KPCA Of ECG Signals With Binary SVM Classification.\n \n \n \n \n\n\n \n Kanaan, L.; Merheb, D.; Kallas, M.; Francis, C.; Amoud, H.; and Honeine, P.\n\n\n \n\n\n\n In Proc. IEEE workshop on Signal Processing Systems (SiPS), pages 344 - 348, Beirut, Lebanon, 4 - 7 October 2011. \n \n\n\n\n
\n\n\n\n \n \n \"PCA link\n  \n \n \n \"PCA paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.sips.kPCA,\n   author =  "Lara Kanaan and Dalia Merheb and Maya Kallas and Clovis Francis and Hassan Amoud and Paul Honeine",\n   title =  "PCA And KPCA Of ECG Signals With Binary SVM Classification",\n   booktitle =  "Proc. IEEE workshop on Signal Processing Systems (SiPS)",\n   address =  "Beirut, Lebanon",\n   year  =  "2011",\n   month =  "4 - 7~" # oct,\n   pages =  "344 - 348",\n   keywords  =  "machine learning, multiclass",\n   acronym =  "SiPS",\n   url_link= "https://ieeexplore.ieee.org/document/6089000",\n   url_paper   =  "http://honeine.fr/paul/publi/11.SiPS.kPCA.pdf",\n   abstract={Cardiac problems are the main reason of people's death nowadays. However, one way that light save the life is the analysis of the an electrocardiograph. This analysis consist in the diagnosis of the arrhythmia when it presents. In this paper, we propose to combine the Support Vector Machines used in classification on one hand, with the Principal Component Analysis used in order to reduce the size of the data by choosing some axes that capture the most variance between data and on the other hand, with the kernel principal component analysis where a mapping to a high dimensional space is needed to capture the most relevant axes but for nonlinear separable data. 
The efficiency of the proposed SVM classification is illustrated on real electrocardiogram dataset taken from MIT-BIH Arrhythmia Database.}, \n   keywords={diseases, electrocardiography, medical signal processing, patient diagnosis, pattern classification, principal component analysis, support vector machines, KPCA, ECG signal, binary SVM classification, cardiac problem, electrocardiograph, patient diagnosis, support vector machine, kernel principal component analysis, nonlinear separable data, MIT-BIH arrhythmia database, Support vector machines, Principal component analysis, Kernel, Feature extraction, Electrocardiography, Accuracy, Sensitivity, ECG signals, PCA, Kernel PCA, SVM classification}, \n   doi={10.1109/SiPS.2011.6089000}, \n   ISSN={2162-3570}, \n}\n
\n
\n\n\n
\n Cardiac problems are among the leading causes of death nowadays. One way to help save lives is the analysis of the electrocardiogram, which allows the diagnosis of an arrhythmia when it is present. In this paper, we propose to combine Support Vector Machines, used for classification, with, on the one hand, Principal Component Analysis, which reduces the size of the data by selecting the axes that capture the most variance, and, on the other hand, kernel Principal Component Analysis, which maps the data to a high-dimensional space in order to capture the most relevant axes for nonlinearly separable data. The efficiency of the proposed SVM classification is illustrated on a real electrocardiogram dataset taken from the MIT-BIH Arrhythmia Database.\n
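As a rough illustration of the kernel PCA step described above (not the authors' code; function and parameter names are ours), the Gram-matrix formulation with an RBF kernel fits in a few lines of NumPy:

```python
import numpy as np

def kpca_projections(X, n_components, gamma):
    """Project training samples onto the leading kernel principal components (RBF kernel)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    K = np.exp(-gamma * sq)                               # RBF Gram matrix
    n = len(X)
    J = np.ones((n, n)) / n
    Kc = K - J @ K - K @ J + J @ K @ J                    # center the Gram matrix in feature space
    w, V = np.linalg.eigh(Kc)                             # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]
    # projection of sample i onto component j equals sqrt(lambda_j) * V[i, j]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Two well-separated clusters: the first kernel principal component separates them.
rng = np.random.default_rng(0)
A = rng.normal(0.0, 0.3, size=(10, 2))
B = rng.normal(5.0, 0.3, size=(10, 2))
Z = kpca_projections(np.vstack([A, B]), n_components=1, gamma=0.1)
s = Z[:, 0]
```

The reduced coordinates `Z` would then feed a binary SVM, as in the paper's pipeline; the SVM itself is omitted here.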
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Approches géométriques pour l'estimation des fractions d'abondance en traitement de données hyperspectales.\n \n \n \n \n\n\n \n Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images, Bordeaux, France, September 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Approches paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.gretsi.hype,\n   author =  "Paul Honeine and Cédric Richard",\n   title =  "Approches géométriques pour l'estimation des fractions d'abondance en traitement de données hyperspectales",\n   booktitle =  "Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Bordeaux, France",\n   year  =  "2011",\n   month =  sep,\n   keywords  =  "hyperspectral",\n   acronym =  "GRETSI'11",\n   url_paper   =  "http://honeine.fr/paul/publi/11.gretsi.hype.pdf",\n   abstract = "In hyperspectral image unmixing, a collection of pure spectra, the so-called endmembers, is identified and their abundance fractions are estimated at each pixel. While endmembers are often extracted using a geometric approach, the abundances are usually estimated using a least-squares approach by solving an inverse problem. In this paper, we tackle the problem of abundance estimation by using a geometric point of view. The proposed framework shows that a large number of endmember extraction techniques can be adapted to jointly estimate the abundance fractions, with essentially no additional computational complexity. This is illustrated in this paper with the N-Findr, SGA, VCA, OSP, and ICE endmember extraction techniques.",\n      x-abstract_fr = "De nombreuses études ont récemment montré l'avantage de l'approche géométrique en démélange de données hyperspectrales. Elle permet d'identifier les signatures spectrales des composants purs. Jusqu'ici, l'estimation des fractions d'abondance a toujours été réalisée dans un second temps, par résolution d'un problème inverse généralement. Dans cet article, nous montrons que les techniques géométriques d'extraction des composants purs de la littérature permettent d'estimer conjointement les fractions d'abondance, pour un coût calculatoire supplémentaire négligeable. 
Pour ce faire, un socle commun d'interprétations géométriques du problème est proposé, que l'on peut décliner pour mieux l'adapter à la technique d'extraction de composants purs retenue. Le caractère géométrique de l'approche proposée lui confère une flexibilité très appréciable dans le cadre de techniques de démélange géométrique, illustrée ici avec N-Findr, SGA, VCA, OSP et ICE.",\n}\n
\n
\n\n\n
\n In hyperspectral image unmixing, a collection of pure spectra, the so-called endmembers, is identified and their abundance fractions are estimated at each pixel. While endmembers are often extracted using a geometric approach, the abundances are usually estimated using a least-squares approach by solving an inverse problem. In this paper, we tackle the problem of abundance estimation by using a geometric point of view. The proposed framework shows that a large number of endmember extraction techniques can be adapted to jointly estimate the abundance fractions, with essentially no additional computational complexity. This is illustrated in this paper with the N-Findr, SGA, VCA, OSP, and ICE endmember extraction techniques.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Classification multi-classes au prix d'un classifieur binaire.\n \n \n \n \n\n\n \n Noumir, Z.; Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images, Bordeaux, France, September 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Classification paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.gretsi.multiclass,\n   author =  "Zineb Noumir and Paul Honeine and Cédric Richard",\n   title =  "Classification multi-classes au prix d'un classifieur binaire",\n   booktitle =  "Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Bordeaux, France",\n   year  =  "2011",\n   month =  sep,\n   keywords  =  "machine learning, multiclass",\n   acronym =  "GRETSI'11",\n   url_paper   =  "http://honeine.fr/paul/publi/11.gretsi.multiclass.pdf",\n   abstract = "This paper deals with the problem of multi-class classification in machine learning. Various techniques have been successfully proposed to solve such problems, with a computation cost often much higher than techniques dedicated to binary classification. To address this problem, we propose a novel formulation for designing multi-class classifiers, with essentially the same computational complexity as binary classifiers. The proposed approach provides a framework to develop multi-class algorithms using the same optimization routines as those already available for binary classification tasks. The effectiveness of our approach is illustrated with Support Vector Machines (SVM), Least-Squares SVM (LS-SVM), and Regularized Least Squares Classification (RLSC).",\n   x-abstract_fr = "Cet article traite du problème de classification multi-classe en reconnaissance des formes. La résolution de ce type de problèmes nécessite des algorithmes au coût calculatoire souvent beaucoup plus élevé que les méthodes d'apprentissage dédiées à la classification binaire. On propose dans cet article une nouvelle formulation pour la conception de classifieurs multi-classes, nécessitant essentiellement la même complexité calculatoire que l'apprentissage d'un classifieur binaire. On montre que ce socle commun offre un cadre pour élaborer des algorithmes multi-classes en utilisant les mêmes routines d'optimisation que celles utilisées pour les problèmes de classification binaire. 
On illustre ce résultat avec les algorithmes SVM, LS-SVM et RLSC.",\n}\n
\n
\n\n\n
\n This paper deals with the problem of multi-class classification in machine learning. Various techniques have been successfully proposed to solve such problems, with a computational cost often much higher than that of techniques dedicated to binary classification. To address this problem, we propose a novel formulation for designing multi-class classifiers with essentially the same computational complexity as binary classifiers. The proposed approach provides a framework to develop multi-class algorithms using the same optimization routines as those already available for binary classification tasks. The effectiveness of our approach is illustrated with Support Vector Machines (SVM), Least-Squares SVM (LS-SVM), and Regularized Least Squares Classification (RLSC).\n
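The paper's exact formulation is not reproduced here; in a similar spirit, the RLSC case of multi-class classification reduces to a single regularized least-squares solve with one-hot targets, so all classes are handled at roughly the cost of one multi-output binary-style problem. A minimal sketch, with hypothetical names:

```python
import numpy as np

def rlsc_multiclass_fit(X, y, lam=1e-2):
    """One regularized least-squares solve handles all classes at once (one-hot targets)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])       # append a bias column
    Y = np.eye(int(y.max()) + 1)[y]                 # one-hot encoding of the labels
    d = Xb.shape[1]
    return np.linalg.solve(Xb.T @ Xb + lam * np.eye(d), Xb.T @ Y)

def rlsc_predict(X, W):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.argmax(Xb @ W, axis=1)                # highest score wins

# Three well-separated Gaussian blobs, one per class.
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + 0.4 * rng.standard_normal((30, 2)) for c in centers])
y = np.repeat(np.arange(3), 30)

W = rlsc_multiclass_fit(X, y)
acc = float((rlsc_predict(X, W) == y).mean())
```

The key point mirrored from the abstract: the multi-class fit reuses the same linear-system routine a binary RLSC would use, only with a matrix of targets instead of a vector.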
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modèle autorégressif non-linéaire à noyau. Une première approche.\n \n \n \n \n\n\n \n Kallas, M.; Honeine, P.; Richard, C.; Francis, C.; and Amoud, H.\n\n\n \n\n\n\n In Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images, Bordeaux, France, September 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Modèle paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.gretsi.kAR,\n   author =  "Maya Kallas and Paul Honeine and Cédric Richard and Clovis Francis and Hassan Amoud",\n   title =  "Modèle autorégressif non-linéaire à noyau. Une première approche",\n   booktitle =  "Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Bordeaux, France",\n   year  =  "2011",\n   month =  sep,\n   keywords  =  "machine learning, pre-image problem, adaptive filtering",\n   acronym =  "GRETSI'11",\n   url_paper   =  "http://honeine.fr/paul/publi/11.gretsi.kAR.pdf",\n   abstract = "This communication deals with the problem of analysis and prediction using an autoregressive model. The latter, being known for solving these problems, is designed for linear systems. However, real-life applications are non-linear by nature, therefore, we propose in this paper a nonlinear autoregressive model, using kernel machines. The proposed approach inherits the simplicity of autoregressive model, yet non-linear. By combining the principle of the kernel trick and the resolution of the pre-image problem required for the interpretation of the data, we predict future samples of known chaotic time series present in literature. A comparison with different methods for nonlinear prediction present in the literature illustrates the performance of the proposed nonlinear autoregressive model.",\n   x-abstract_fr = "L'analyse et la prédiction de séries temporelles par un modèle autorégressif ont été largement étudiées pour des systèmes linéaires. Toutefois, ce principe s'avère généralement inadapté pour l'analyse des systèmes non-linéaires. L'objectif de cette communication est de proposer un modèle autorégressif non-linéaire dans un espace de Hilbert à noyau reproduisant. On combine, d'une part le principe du coup du noyau qui permet d'estimer les paramètres du modèle, et d'autre part la résolution d'un problème de pré-image pour obtenir la valeur de la prédiction dans l'espace signal. 
L'approche proposée hérite de la simplicité algorithmique du modèle autorégressif classique, tout en étant non-linéaire par rapport aux échantillons d'entrée. Une comparaison avec différentes méthodes de prédiction non-linéaires illustre les performances du modèle autorégressif non-linéaire proposé sur des séries temporelles test de la littérature.",\n}\n
\n
\n\n\n
\n This communication deals with the problem of analysis and prediction using an autoregressive model. The latter, while well known for solving these problems, is designed for linear systems; however, real-life applications are nonlinear by nature. We therefore propose a nonlinear autoregressive model based on kernel machines. The proposed approach inherits the simplicity of the autoregressive model, yet is nonlinear. By combining the kernel trick with the resolution of the pre-image problem, required for the interpretation of the data, we predict future samples of well-known chaotic time series. A comparison with different nonlinear prediction methods from the literature illustrates the performance of the proposed nonlinear autoregressive model.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sur le problème de la pré-image en reconnaissance des formes avec contraintes de non-négativité.\n \n \n \n \n\n\n \n Kallas, M.; Honeine, P.; Amoud, H.; and Francis, C.\n\n\n \n\n\n\n In Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images, Bordeaux, France, September 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Sur paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.gretsi.preimage,\n   author =  "Maya Kallas and Paul Honeine and Hassan Amoud and Clovis Francis",\n   title =  "Sur le problème de la pré-image en reconnaissance des formes avec contraintes de non-négativité",\n   booktitle =  "Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Bordeaux, France",\n   year  =  "2011",\n   month =  sep,\n   keywords  =  "machine learning, pre-image problem, non-negativity",\n   acronym =  "GRETSI'11",\n   url_paper   =  "http://honeine.fr/paul/publi/11.gretsi.preimage.pdf",\n   abstract = "This paper deals with the pre-image problem in kernel-based machine learning. This mapping back from a feature space into the input space is often crucial in many pattern recognition and denoising tasks. We study a gradient descent solution of the pre-image problem, and show the ease of imposing non-negativity constraints on the result. The relevance of the proposed approach is illustrated for denoising grayscale images.",\n   x-abstract_fr = "Cet article traite le problème de la pré-image en méthodes à noyau pour la reconnaissance des formes. Il s'agit d'un passage obligé pour les applications dont le résultat du traitement doit pouvoir s'exprimer dans le même espace que les observations et non dans un espace fonctionnel difficile à appréhender. C'est le cas par exemple pour le débruitage de données au moyen de l'Analyse en Composantes Principales à noyau. On montre que le problème de la pré-image se prête à une résolution par des méthodes de descente. On profite alors de cette opportunité pour montrer qu'il est possible d'imposer des contraintes à la pré-image, telle que la non-négativité du résultat comme on peut la rencontrer en traitement d'images par exemple. Les algorithmes proposés sont illustrés sur le débruitage non-linéaire d'images et les performances atteintes montrent la pertinence de notre approche",\n}\n
\n
\n\n\n
\n This paper deals with the pre-image problem in kernel-based machine learning. This mapping back from a feature space into the input space is often crucial in many pattern recognition and denoising tasks. We study a gradient descent solution of the pre-image problem, and show the ease of imposing non-negativity constraints on the result. The relevance of the proposed approach is illustrated for denoising grayscale images.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Un nouveau paradigme pour le démélange non-linéaire des images hyperspectrales.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; and Honeine, P.\n\n\n \n\n\n\n In Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images, Bordeaux, France, September 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Un paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.gretsi.hype_nonlin,\n   author =  "Jie Chen and Cédric Richard and Paul Honeine",\n   title =  "Un nouveau paradigme pour le démélange non-linéaire des images hyperspectrales",\n   booktitle =  "Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Bordeaux, France",\n   year  =  "2011",\n   month =  sep,\n   keywords  =  "hyperspectral",\n   acronym =  "GRETSI'11",\n   url_paper   =  "http://honeine.fr/paul/publi/11.gretsi.hype_nonlin.pdf",\n   abstract = "In hyperspectral images, pixels are mixtures of spectral components associated to pure materials, called endmembers. Recently, to overcome the limitations of linear models, nonlinear unmixing techniques have been proposed in the literature. In this paper, nonlinear hyperspectral unmixing problem is studied through kernel-based learning theory. Endmember components at each spectral band are mapped implicitly into a high-dimensional feature space, in order to address nonlinear interactions of photons. Experiment results with both synthetic and real images illustrate the effectiveness of the proposed scheme",\n   x-abstract_fr = "En imagerie hyperspectrale, dans un contexte supervisé, chaque vecteur-pixel résulte d'un mélange de spectres de composants purs dont on voudrait estimer les proportions. Récemment, afin de résoudre ce problème en palliant les limitations des modèles linéaires, des méthodes de démélange non-linéaires des données hyperspectrales ont été proposées dans la littérature. Ce problème est ici considéré dans le cadre méthodologique offert par les espaces de Hilbert à noyau reproduisant. L'image de chaque bande spectrale est implicitement calculée dans un tel espace afin de traduire la complexité des phénomènes physiques mis en jeu, puis un algorithme d'inversion adapté à l'estimation des proportions dans l'espace direct appliqué. Des résultats sur des données synthétiques et réelles viennent illustrer l'efficacité de l'approche.",\n}\n
\n
\n\n\n
\n In hyperspectral images, pixels are mixtures of spectral components associated with pure materials, called endmembers. Recently, to overcome the limitations of linear models, nonlinear unmixing techniques have been proposed in the literature. In this paper, the nonlinear hyperspectral unmixing problem is studied through kernel-based learning theory. Endmember components at each spectral band are mapped implicitly into a high-dimensional feature space, in order to address nonlinear interactions of photons. Experimental results with both synthetic and real images illustrate the effectiveness of the proposed scheme.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Filtrage adaptatif avec contrainte de non-négativité. Principes de l'algorithme NN-LMS et modèle de convergence.\n \n \n \n \n\n\n \n Richard, C.; Chen, J.; Honeine, P.; and Bermudez, J. C. M.\n\n\n \n\n\n\n In Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images, Bordeaux, France, September 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Filtrage paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.gretsi.filtrage,\n   author =  "Cédric Richard and Jie Chen and Paul Honeine and José C. M. Bermudez",\n   title =  "Filtrage adaptatif avec contrainte de non-négativité. Principes de l'algorithme NN-LMS et modèle de convergence",\n   booktitle =  "Actes du 23-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Bordeaux, France",\n   year  =  "2011",\n   month =  sep,\n   keywords  =  "non-negativity, adaptive filtering",\n   acronym =  "GRETSI'11",\n   url_paper   =  "http://honeine.fr/paul/publi/11.gretsi.filtrage.pdf",\n   abstract = "– Dynamic system modeling plays a crucial role in the development of techniques for stationary and non-stationary signal processing. Due to the inherent physical characteristics of systems under investigation, non-negativity is a desired constraint that can usually be imposed on the parameters to estimate. In this paper, we propose a general method for system identification under non-negativity constraints. We derive the so-called “non-negative least-mean-square algorithm” based on stochastic gradient descent, and we analyze its convergence. Experiments are conducted to illustrate the performance of this approach and consistency with the analysis",\n   x-abstract_fr = "Cet article présente une méthode d'identification de systèmes linéaires sous contraintes de non-négativité sur les coefficients estimés. En effet, en raison de caractéristiques physiques inhérentes à certains systèmes étudiés, la non-négativité est une information a priori parfois naturelle qu'il convient d'exploiter afin de se prémunir contre d'éventuels résultats non interprétables. A la différence des techniques classiques de gradient projeté, l'algorithme `non-negative LMS' proposé opère à la façon d'une méthode de points intérieurs. Par ses performances et son coût calculatoire réduit, l'algorithme présente des caractéristiques comparables à l'algorithme LMS tout en garantissant la non-négativité des coefficients. 
Le modèle de convergence étudié reproduit très fidèlement les résultats de simulation.",\n}\n
\n
\n\n\n
\n Dynamic system modeling plays a crucial role in the development of techniques for stationary and non-stationary signal processing. Due to the inherent physical characteristics of the systems under investigation, non-negativity is a desired constraint that can usually be imposed on the parameters to estimate. In this paper, we propose a general method for system identification under non-negativity constraints. We derive the so-called “non-negative least-mean-square algorithm” based on stochastic gradient descent, and we analyze its convergence. Experiments are conducted to illustrate the performance of this approach and its consistency with the analysis.\n
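Our reading of the NN-LMS recursion summarized in this abstract is the multiplicative-gradient update w ← w + η e (w ∘ u), in which the LMS correction is scaled entrywise by the current weights so that positive coefficients can shrink toward zero but not cross it for small step sizes. The sketch below illustrates identification under that assumption; it is not the paper's reference implementation:

```python
import numpy as np

# Identify a non-negative system w_true from input/output pairs with an
# NN-LMS-style update (gradient scaled entrywise by the current weights).
rng = np.random.default_rng(0)
w_true = np.array([0.5, 0.2, 0.8])   # unknown non-negative coefficients
w = np.full(3, 0.1)                  # positive initialization
eta = 0.05                           # step size, kept small for stability

for _ in range(20000):
    u = rng.standard_normal(3)                        # input regressor
    d = w_true @ u + 0.01 * rng.standard_normal()     # noisy desired output
    e = d - w @ u                                     # a priori error
    w = w + eta * e * (w * u)                         # NN-LMS-style update

non_negative = bool((w >= 0).all())
```

With unit-variance white inputs, the mean update of each coefficient behaves like η wᵢ (w*ᵢ − wᵢ), so the estimate converges to the non-negative target without any explicit projection, which is the point contrasted with projected-gradient schemes.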
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Non-Negative Pre-Image in Machine Learning for Pattern Recognition.\n \n \n \n \n\n\n \n Kallas, M.; Honeine, P.; Richard, C.; Francis, C.; and Amoud, H.\n\n\n \n\n\n\n In Proc. 19th European Conference on Signal Processing (EUSIPCO), pages 931-935, Barcelona, Spain, 29 Aug. - 2 September 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Non-Negative link\n  \n \n \n \"Non-Negative paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.eusipco.preimage,\n   author =  "Maya Kallas and Paul Honeine and Cédric Richard and Clovis Francis and Hassan Amoud",\n   title =  "Non-Negative Pre-Image in Machine Learning for Pattern Recognition",\n   booktitle =  "Proc. 19th European Conference on Signal Processing (EUSIPCO)",\n   address =  "Barcelona, Spain",\n   year  =  "2011",\n   month =  "29 Aug. - 2~" # sep,\n   pages={931-935},\n   acronym =  "EUSIPCO",\n   url_link= "https://ieeexplore.ieee.org/document/7074259",\n   url_paper   =  "http://honeine.fr/paul/publi/11.eusipco.preimage.pdf",\n   abstract={Moreover, in order to have a physical interpretation, some constraints should be incorporated in the signal or image processing technique, such as the non-negativity of the solution. This paper deals with the non-negative pre-image problem in kernel machines, for nonlinear pattern recognition. While kernel machines operate in a feature space, associated to the used kernel function, a pre-image technique is often required to map back features into the input space. We derive a gradient-based algorithm to solve the pre-image problem, and to guarantee the non-negativity of the solution. Its convergence speed is significantly improved due to a weighted stepsize approach. The relevance of the proposed method is demonstrated with experiments on real datasets, where only a couple of iterations are necessary.}, \n   keywords={machine learning, pre-image problem, non-negativity, convergence, gradient methods, image recognition, learning (artificial intelligence), machine learning, physical interpretation, signal processing technique, image processing technique, kernel machines, nonnegative pre-image problem, feature space, nonlinear pattern recognition, gradient-based algorithm, convergence speed, weighted stepsize approach, real datasets, Kernel, Pattern recognition, Principal component analysis, Noise reduction, Linear programming, Signal processing, Optimization}, \n   ISSN={2076-1465}, \n}\n
\n
\n\n\n
\n In order to have a physical interpretation, some constraints should be incorporated into the signal or image processing technique, such as the non-negativity of the solution. This paper deals with the non-negative pre-image problem in kernel machines, for nonlinear pattern recognition. While kernel machines operate in a feature space, associated with the kernel function used, a pre-image technique is often required to map features back into the input space. We derive a gradient-based algorithm to solve the pre-image problem and to guarantee the non-negativity of the solution. Its convergence speed is significantly improved by a weighted stepsize approach. The relevance of the proposed method is demonstrated with experiments on real datasets, where only a couple of iterations are necessary.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Online system identification under non-negativity and $\\ell_1$-norm constraints: algorithm and weight behavior analysis.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; Lantéri, H.; Theys, C.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 19th European Conference on Signal Processing (EUSIPCO), pages 1919-1923, Barcelona, Spain, 29 Aug. - 2 September 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Online link\n  \n \n \n \"Online paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.eusipco.ident,\n   author =  "Jie Chen and Cédric Richard and Henri Lantéri and Céline Theys and Paul Honeine",\n   title =  "Online system identification under non-negativity and $\\ell_1$-norm constraints algorithm and weight behavior analysis",\n   booktitle =  "Proc. 19th European Conference on Signal Processing (EUSIPCO)",\n   address =  "Barcelona, Spain",\n   year  =  "2011",\n   month =  "29 Aug. - 2~" # sep,\n   acronym =  "EUSIPCO",\n   pages={1919-1923}, \n   url_link= "https://ieeexplore.ieee.org/document/7074164",\n   url_paper   =  "http://honeine.fr/paul/publi/11.eusipco.ident.pdf",\n   abstract={Information processing with L1-norm constraint has been a topic of considerable interest during the last five years since it produces sparse solutions. Non-negativity constraints are also desired properties that can usually be imposed due to inherent physical characteristics of real-life phenomena. In this paper, we investigate an online method for system identification subject to these two families of constraints. Our approach differs from existing techniques such as projected-gradient algorithms in that it does not require any extra projection onto the feasible region. The mean weight-error behavior is analyzed analytically. Experimental results show the advantage of our approach over some existing algorithms. Finally, an application to hyperspectral data processing is considered.}, \n   keywords={non-negativity, adaptive filtering, sparsity, gradient methods, identification, online system identification, L1-norm constraints algorithm, information processing, nonnegativity constraints, mean weight-error behavior analysis, hyperspectral data processing, Vectors, Algorithm design and analysis, Equations, Hyperspectral imaging, Mathematical model, Convergence, Cost function}, \n   ISSN={2076-1465}, \n}\n
\n
\n\n\n
\n Information processing with L1-norm constraint has been a topic of considerable interest during the last five years since it produces sparse solutions. Non-negativity constraints are also desired properties that can usually be imposed due to inherent physical characteristics of real-life phenomena. In this paper, we investigate an online method for system identification subject to these two families of constraints. Our approach differs from existing techniques such as projected-gradient algorithms in that it does not require any extra projection onto the feasible region. The mean weight-error behavior is analyzed analytically. Experimental results show the advantage of our approach over some existing algorithms. Finally, an application to hyperspectral data processing is considered.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Kernel-Based Autoregressive Modeling with a Pre-Image Technique.\n \n \n \n \n\n\n \n Kallas, M.; Honeine, P.; Richard, C.; Francis, C.; and Amoud, H.\n\n\n \n\n\n\n In Proc. IEEE workshop on Statistical Signal Processing (SSP), pages 281 - 284, Nice, France, 28 - 30 June 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Kernel-Based link\n  \n \n \n \"Kernel-Based paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.ssp.kar,\n   author =  "Maya Kallas and Paul Honeine and Cédric Richard and Clovis Francis and Hassan Amoud",\n   title =  "Kernel-Based Autoregressive Modeling with a Pre-Image Technique",\n   booktitle =  "Proc. IEEE workshop on Statistical Signal Processing (SSP)",\n   address =  "Nice, France",\n   year  =  "2011",\n   month =  "28 - 30~" # jun,\n   pages={281 - 284}, \n   doi={10.1109/SSP.2011.5967681}, \n   ISSN={2373-0803},\n   acronym =  "SSP",\n   url_link= "https://ieeexplore.ieee.org/document/5967681",\n   url_paper   =  "http://honeine.fr/paul/publi/11.ssp.kar.pdf",\n   abstract={Autoregressive (AR) modeling is a very popular method for time series analysis. Being linear by nature, it obviously fails to adequately describe nonlinear systems. In this paper, we propose a kernel-based AR modeling, by combining two main concepts in kernel machines. On the one hand, we map samples to some nonlinear feature space, where an AR model is considered. We show that the model parameters can be determined without the need to exhibit the nonlinear map, by computing inner products thanks to the kernel trick. On the other hand, we propose a prediction scheme, where the prediction in the feature space is mapped back into the input space, the original samples space. For this purpose, a pre-image technique is derived to predict the future back in the input space. The efficiency of the proposed method is illustrated on real-life time-series, by comparing it to other linear and nonlinear time series prediction techniques.}, \n   keywords={machine learning, pre-image problem, adaptive filtering, autoregressive processes, time series, kernel-based autoregressive modeling, pre-image technique, time series analysis, nonlinear system, kernel-based AR modeling, kernel machines, nonlinear map, kernel trick, feature space, nonlinear time series prediction, Kernel, Time series analysis, Predictive models, Machine learning, Support vector machines, Mathematical model, Kalman filters, pre-image, kernel machine, autoregressive modeling, pattern recognition, prediction}, \n}\n
\n
\n\n\n
\n Autoregressive (AR) modeling is a very popular method for time series analysis. Being linear by nature, it obviously fails to adequately describe nonlinear systems. In this paper, we propose a kernel-based AR modeling, by combining two main concepts in kernel machines. On the one hand, we map samples to some nonlinear feature space, where an AR model is considered. We show that the model parameters can be determined without the need to exhibit the nonlinear map, by computing inner products thanks to the kernel trick. On the other hand, we propose a prediction scheme, where the prediction in the feature space is mapped back into the input space, the original samples space. For this purpose, a pre-image technique is derived to predict the future back in the input space. The efficiency of the proposed method is illustrated on real-life time-series, by comparing it to other linear and nonlinear time series prediction techniques.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-Class Least Squares Classification at Binary-Classification Complexity.\n \n \n \n \n\n\n \n Noumir, Z.; Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Proc. IEEE workshop on Statistical Signal Processing (SSP), pages 277 - 280, Nice, France, 28 - 30 June 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-Class link\n  \n \n \n \"Multi-Class paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.ssp.multiclass,\n   author =  "Zineb Noumir and Paul Honeine and Cédric Richard",\n   title =  "Multi-Class Least Squares Classification at Binary-Classification Complexity",\n   booktitle =  "Proc. IEEE workshop on Statistical Signal Processing (SSP)",\n   address =  "Nice, France",\n   year  =  "2011",\n   month =  "28 - 30~" # jun,\n   pages={277 - 280},\n   doi={10.1109/SSP.2011.5967680},\n   acronym =  "SSP",\n   url_link= "https://ieeexplore.ieee.org/document/5967680",\n   url_paper   =  "http://honeine.fr/paul/publi/11.ssp.multiclass.pdf",\n   abstract={This paper deals with multi-class classification problems. Many methods extend binary classifiers to operate a multi-class task, with strategies such as the one-vs-one and the one-vs-all schemes. However, the computational cost of such techniques is highly dependent on the number of available classes. We present a method for multi-class classification, with a computational complexity essentially independent of the number of classes. To this end, we exploit recent developments in multifunctional optimization in machine learning. We show that in the proposed algorithm, labels only appear in terms of inner products, in the same way as input data emerge as inner products in kernel machines via the so-called kernel trick. Experimental results on real data show that the proposed method reduces efficiently the computational time of the classification task without sacrificing its generalization ability.}, \n   keywords={computational complexity, learning (artificial intelligence), least mean squares methods, optimisation, pattern classification, multiclass least squares classification, binary-classification complexity, computational complexity, multifunctional optimization, machine learning, kernel machine, kernel trick, Kernel, Optimization, Machine learning, Hyperspectral imaging, Training data, Complexity theory}, \n   ISSN={2373-0803}, \n}\n
\n
\n\n\n
\n This paper deals with multi-class classification problems. Many methods extend binary classifiers to operate a multi-class task, with strategies such as the one-vs-one and the one-vs-all schemes. However, the computational cost of such techniques is highly dependent on the number of available classes. We present a method for multi-class classification, with a computational complexity essentially independent of the number of classes. To this end, we exploit recent developments in multifunctional optimization in machine learning. We show that in the proposed algorithm, labels only appear in terms of inner products, in the same way as input data emerge as inner products in kernel machines via the so-called kernel trick. Experimental results on real data show that the proposed method reduces efficiently the computational time of the classification task without sacrificing its generalization ability.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Gradient Based Method for Fully Constrained Least-Squares Unmixing of Hyperspectral Images.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; Lantéri, H.; Theys, C.; and Honeine, P.\n\n\n \n\n\n\n In Proc. IEEE workshop on Statistical Signal Processing (SSP), pages 301-304, Nice, France, 28 - 30 June 2011. \n \n\n\n\n
\n\n\n\n \n \n \"A link\n  \n \n \n \"A paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.ssp.hype,\n   author =  "Jie Chen and Cédric Richard and  Henri Lantéri and Céline Theys and Paul Honeine",\n   title =  "A Gradient Based Method for Fully Constrained Least-Squares Unmixing of Hyperspectral Images",\n   booktitle =  "Proc. IEEE workshop on Statistical Signal Processing (SSP)",\n   address =  "Nice, France",\n   year  =  "2011",\n   month =  "28 - 30~" # jun,\n   pages={301-304}, \n   doi={10.1109/SSP.2011.5967687},\n   acronym =  "SSP",\n   url_link= "https://ieeexplore.ieee.org/document/5967687",\n   url_paper   =  "http://honeine.fr/paul/publi/11.ssp.hype.pdf",\n   abstract={Linear unmixing of hyperspectral images is a popular approach to determine and quantify materials in sensed images. The linear unmixing problem is challenging because the abundances of materials to estimate have to satisfy non-negativity and full-additivity constraints. In this paper, we investigate an iterative algorithm that integrates these two requirements into the coefficient update process. The constraints are satisfied at each iteration without using any extra operations such as projections. Moreover, the mean transient behavior of the weights is analyzed analytically, which has never been seen for other algorithms in hyperspectral image unmixing. 
Simulation results illustrate the effectiveness of the proposed algorithm and the accuracy of the model.}, \n   keywords={non-negativity, sparsity, hyperspectral, geophysical image processing, gradient methods, least squares approximations, remote sensing, gradient based method, constrained least-squares unmixing, hyperspectral images, sensed images, linear unmixing problem, non-negativity constraints, full-additivity constraints, iterative algorithm, coefficient update process, mean transient behavior, hyperspectral image unmixing, Hyperspectral imaging, Materials, Mathematical model, Signal processing algorithms, Equations, Pixel, Hyperspectral imagery, linear unmixing, estimation under constraints}, \n   ISSN={2373-0803}, \n}\n
\n
\n\n\n
\n Linear unmixing of hyperspectral images is a popular approach to determine and quantify materials in sensed images. The linear unmixing problem is challenging because the abundances of materials to estimate have to satisfy non-negativity and full-additivity constraints. In this paper, we investigate an iterative algorithm that integrates these two requirements into the coefficient update process. The constraints are satisfied at each iteration without using any extra operations such as projections. Moreover, the mean transient behavior of the weights is analyzed analytically, which has never been seen for other algorithms in hyperspectral image unmixing. Simulation results illustrate the effectiveness of the proposed algorithm and the accuracy of the model.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Wireless sensor networks in biomedical: body area networks.\n \n \n \n \n\n\n \n Honeine, P.; Mourad-Chehade, F.; Kallas, M.; Snoussi, H.; Amoud, H.; and Francis, C.\n\n\n \n\n\n\n In Proc. 7th International Workshop on Systems, Signal Processing and their Applications (WOSSPA), pages 388-391, Algeria, 09 - 11 May 2011. \n \n\n\n\n
\n\n\n\n \n \n \"Wireless link\n  \n \n \n \"Wireless paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.wosspa,\n   author =  "Paul Honeine and Farah Mourad-Chehade and Maya Kallas and Hichem Snoussi and Hassan Amoud and Clovis Francis",\n   title =  "Wireless sensor networks in biomedical: body area networks",\n   booktitle =  "Proc. 7th International Workshop on Systems, Signal Processing and their Applications (WOSSPA)",\n   address =  "Algeria",\n   year  =  "2011",\n   month =  "09 - 11~" # may,\n   pages={388-391},\n   doi={10.1109/WOSSPA.2011.5931518},\n   acronym =  "WoSSPA",\n   url_link= "https://ieeexplore.ieee.org/document/5931518",\n   url_paper   =  "http://honeine.fr/paul/publi/11.wosspa.pdf",\n   abstract={The rapid growth in biomedical sensors, low-power circuits and wireless communications has enabled a new generation of wireless sensor networks: the body area networks. These networks are composed of tiny, cheap and low-power biomedical nodes, mainly dedicated for healthcare monitoring applications. The objective of these applications is to ensure a continuous monitoring of vital parameters of patients, while giving them the freedom of motion and thereby better quality of healthcare. This paper shows a comparison of body area networks to the wireless sensor networks. In particular, it shows how body area networks borrow and enhance ideas from wireless sensor networks. A study of energy consumption and heat absorption problems is developed for illustration.}, \n   keywords={wireless sensor networks, body area networks, energy consumption, health care, low-power electronics, wireless sensor networks, wireless sensor networks, body area networks, biomedical sensors, low-power circuits, wireless communications, low-power biomedical nodes, healthcare monitoring applications, energy consumption, heat absorption problems, Wireless sensor networks, Routing, Body area networks, Biosensors, Monitoring, Wireless communication, Absorption}, \n}\n
\n
\n\n\n
\n The rapid growth in biomedical sensors, low-power circuits and wireless communications has enabled a new generation of wireless sensor networks: the body area networks. These networks are composed of tiny, cheap and low-power biomedical nodes, mainly dedicated for healthcare monitoring applications. The objective of these applications is to ensure a continuous monitoring of vital parameters of patients, while giving them the freedom of motion and thereby better quality of healthcare. This paper shows a comparison of body area networks to the wireless sensor networks. In particular, it shows how body area networks borrow and enhance ideas from wireless sensor networks. A study of energy consumption and heat absorption problems is developed for illustration.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n MALICE : Localisation de sources polluantes depuis un réseau de capteurs.\n \n \n \n \n\n\n \n Septier, F.; Delignon, Y.; Armand, P.; Snoussi, H.; and Honeine, P.\n\n\n \n\n\n\n In 4-ème Workshop du Groupement d'Intérêt Scientifique : Surveillance, Sûreté, Sécurité des Grands Systèmes (GIS-3SGS'11), pages 1, Valenciennes, France, 12 - 13 October 2011. \n \n\n\n\n
\n\n\n\n \n \n \"MALICE paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.gis,\n   author =  "François Septier and Yves Delignon and Patrick Armand and Hichem Snoussi and Paul Honeine",\n   title =  "MALICE : Localisation de sources polluantes depuis un réseau de capteurs",\n   booktitle =  "4-ème Workshop du Groupement d'Intérêt Scientifique : Surveillance, Sûreté, Sécurité des Grands Systèmes (GIS-3SGS'11)",\n   address =  "Valenciennes, France",\n   year =  "2011",\n   month =  "12 - 13~" # oct,\n   pages = "1",\n   acronym =  "GIS",\n   keywords =  "wireless sensor networks",\n   url_paper  =  "http://www.honeine.fr/paul/publi/11.gis.pdf",\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Le problème de pré-image dans la reconnaissance des formes.\n \n \n \n\n\n \n Khodor, N.; Amoud, H.; Kallas, M.; Honeine, P.; and Francis, C.\n\n\n \n\n\n\n In Proc. 1st International Conference on Advances in Biomedical Engineering, pages 1 - 2, Tripoli, Lebanon, 6 - 8 July 2011. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.edst,\n   author =  "Nadine Khodor and Hassan Amoud and Maya Kallas and Paul Honeine and Clovis Francis",\n   title =  "Le problème de pré-image dans la reconnaissance des formes",\n   booktitle =  "Proc. 1st International Conference on Advances in Biomedical Engineering",\n   address =  "Tripoli, Lebanon",\n   year  =  "2011",\n   month =  "6 - 8~" # jul,\n   pages = "1 - 2",\n   keywords  =  "machine learning, pre-image problem",\n   acronym =  "EDST",\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Problème de pré-image en apprentissage et reconnaissance des formes. Applications en traitement du signal et des images.\n \n \n \n\n\n \n Honeine, P.\n\n\n \n\n\n\n In Journée apprentissage et reconnaissance des formes en signal et images, journées thématiques au GdR ISIS, 7 April 2011. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.GdR,\n   author =  "Paul Honeine",\n   title =  "Problème de pré-image en apprentissage et reconnaissance des formes. Applications en traitement du signal et des images",\n   booktitle =  "Journée apprentissage et reconnaissance des formes en signal et images, journées thématiques au GdR ISIS",\n   year  =  "2011",\n   month =  "7~" # apr,\n   keywords  =  "machine learning, pre-image problem",\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n VigiRes'Eau : Surveillance en temps réel de la qualité de l'eau potable d'un réseau de distribution en vue de la détection d'intrusions.\n \n \n \n\n\n \n Fillatre, L.; Honeine, P.; Nikiforov, I.; Richard, C.; Snoussi, H.; Azzaoui, N.; Guépié, B. K.; Noumir, Z.; Deveughèle, S.; and Yin, H.\n\n\n \n\n\n\n In Workshop Interdisciplinaire sur la Sécurité Globale (WISG'11), (ANR - CSOSG), pages 1-7, Troyes, France, 2011. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{11.wisg.vigireseau,\n   author =  "Lionel Fillatre and Paul Honeine and Igor Nikiforov and Cédric Richard and Hichem Snoussi and Nourddine Azzaoui and Blaise Kévin Guépié and Zineb Noumir and Stéphane Deveughèle and Huan Yin",\n   title =  "VigiRes'Eau : Surveillance en temps réel de la qualité de l'eau potable d'un réseau de distribution en vue de la détection d'intrusions",\n   booktitle =  "Workshop Interdisciplinaire sur la Sécurité Globale (WISG'11), (ANR - CSOSG)",\n   address =  "Troyes, France",\n   year  =  "2011",\n   pages =  "1-7",\n   acronym =  "WISG",\n   keywords  = "non-stationarity, adaptive filtering, machine learning, one-class, cybersecurity",\n}\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2010\n \n \n (15)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Nonstationary signal analysis with time-frequency kernel machines.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; and Flandrin, P.\n\n\n \n\n\n\n In Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods and Techniques, In Eds E. Soria, J.D. Martín, R. Magdalena, M. Martínez, and A.J. Serrano, of Information Science Reference, 10, pages 223 - 241. IGI Global, 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Nonstationary link\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@incollection{Hon10.ml,\n   author =  "Paul Honeine and Cédric Richard and Patrick Flandrin",\n   title =  "Nonstationary signal analysis with time-frequency kernel machines",\n   chapter =  "10",\n   pages =  "223 - 241",\n   booktitle =  "Handbook of Research on Machine Learning Applications and Trends: Algorithms, Methods and Techniques",\n   editor =  "E. Soria and J.D. Martín and R. Magdalena and M. Martínez and A.J. Serrano",\n   series =  "Information Science Reference",\n   publisher =  "IGI Global",\n   year  =  "2010",\n   url_link = "http://www.igi-global.com/reference/details.asp?ID=34664",\n   keywords  =  {non-stationarity, machine learning, time-frequency analysis, non-stationary signals, kernel methods},\n}\n\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Testing Stationarity with Surrogates: A Time-Frequency Approach.\n \n \n \n \n\n\n \n Borgnat, P.; Flandrin, P.; Honeine, P.; Richard, C.; and Xiao, J.\n\n\n \n\n\n\n IEEE Transactions on Signal Processing, 58(7): 3459 - 3470. July 2010.\n \n\n\n\n
\n\n\n\n \n \n \"Testing link\n  \n \n \n \"Testing paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{Bor10.tsp,\n   author =  "Pierre Borgnat and Patrick Flandrin and Paul Honeine and Cédric Richard and Jun Xiao",\n   title =  "Testing Stationarity with Surrogates: A Time-Frequency Approach",\n   journal =  "IEEE Transactions on Signal Processing",\n   volume =  "58",\n   number =  "7",\n   pages =  "3459 - 3470",\n   month =  jul,\n   year  =  "2010",\n   url_link= "https://ieeexplore.ieee.org/document/5419113",\n   doi="10.1109/TSP.2010.2043971",\n   url_paper  = "http://www.honeine.fr/paul/publi/10.tsp.surrogates.pdf",\n   keywords =  "non-stationarity, one-class, Stationarity test, Time-frequency, Support vector machines, One-class classification, Surrogates, feature extraction, signal classification, statistical analysis, time-frequency analysis, testing stationarity, time-frequency approach, stochastic context, deterministic context, one-class classifier, features extraction, Testing, Time frequency analysis, Stochastic processes, Signal processing, Feature extraction, Support vector machines, Support vector machine classification, Signal processing algorithms, Steady-state, Electric breakdown, One-class classification, stationarity test, support vector machines, time-frequency analysis",\n   abstract={An operational framework is developed for testing stationarity relative to an observation scale, in both stochastic and deterministic contexts. The proposed method is based on a comparison between global and local time-frequency features. The originality is to make use of a family of stationary surrogates for defining the null hypothesis of stationarity and to base on them two different statistical tests. The first one makes use of suitably chosen distances between local and global spectra, whereas the second one is implemented as a one-class classifier, the time-frequency features extracted from the surrogates being interpreted as a learning set for stationarity. The principle of the method and of its two variations is presented, and some results are shown on typical models of signals that can be thought of as stationary or nonstationary, depending on the observation scale used.}, \n}\n
\n
\n\n\n
\n An operational framework is developed for testing stationarity relative to an observation scale, in both stochastic and deterministic contexts. The proposed method is based on a comparison between global and local time-frequency features. The originality is to make use of a family of stationary surrogates for defining the null hypothesis of stationarity and to base on them two different statistical tests. The first one makes use of suitably chosen distances between local and global spectra, whereas the second one is implemented as a one-class classifier, the time-frequency features extracted from the surrogates being interpreted as a learning set for stationarity. The principle of the method and of its two variations is presented, and some results are shown on typical models of signals that can be thought of as stationary or nonstationary, depending on the observation scale used.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A decentralized approach for non-linear prediction of time series data in sensor networks.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; Snoussi, H.; Bermudez, J. C. M.; and Chen, J.\n\n\n \n\n\n\n Journal on Wireless Communications and Networking, Special issue on theoretical and algorithmic foundations of wireless ad hoc and sensor networks: 12:1 - 12:12. January 2010.\n \n\n\n\n
\n\n\n\n \n \n \"A link\n  \n \n \n \"A paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{Hon10.eur,\n   author =  "Paul Honeine and Cédric Richard and Hichem Snoussi and José C. M. Bermudez and Jie Chen",\n   title =  "A decentralized approach for non-linear prediction of time series data in sensor networks",\n   journal =  "Journal on Wireless Communications and Networking",\n   year  =  "2010",\n   volume = {Special issue on theoretical and algorithmic foundations of wireless ad hoc and sensor networks},\n   month = jan,\n   pages =  "12:1 - 12:12",\n   publisher =  "EURASIP",\n   address = {Hindawi Publishing Corp., New York, NY, United States},\n   doi = "10.1155/2010/627372",\n   url_link= "https://jwcn-eurasipjournals.springeropen.com/articles/10.1155/2010/627372",\n   url_paper  = "http://www.honeine.fr/paul/publi/10.wcn.ph_cr_hs_jcb_jc.pdf",\n   keywords  =  "wireless sensor networks, adaptive filtering, sparsity, time series, information System, pattern Recognition, temperature distribution, sensor network",\n   abstract = "Wireless sensor networks rely on sensor devices deployed in an environment to support sensing and monitoring, including temperature, humidity, motion, and acoustic. Here, we propose a new approach to model physical phenomena and track their evolution by taking advantage of the recent developments of pattern recognition for nonlinear functional learning. These methods are, however, not suitable for distributed learning in sensor networks as the order of models scales linearly with the number of deployed sensors and measurements. In order to circumvent this drawback, we propose to design reduced order models by using an easy to compute sparsification criterion. We also propose a kernel-based least-mean-square algorithm for updating the model parameters using data collected by each sensor. The relevance of our approach is illustrated by two applications that consist of estimating a temperature distribution and tracking its evolution over time.",\n}\n
\n
\n\n\n
\n Wireless sensor networks rely on sensor devices deployed in an environment to support sensing and monitoring, including temperature, humidity, motion, and acoustic. Here, we propose a new approach to model physical phenomena and track their evolution by taking advantage of the recent developments of pattern recognition for nonlinear functional learning. These methods are, however, not suitable for distributed learning in sensor networks as the order of models scales linearly with the number of deployed sensors and measurements. In order to circumvent this drawback, we propose to design reduced order models by using an easy to compute sparsification criterion. We also propose a kernel-based least-mean-square algorithm for updating the model parameters using data collected by each sensor. The relevance of our approach is illustrated by two applications that consist of estimating a temperature distribution and tracking its evolution over time.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Non-negative Distributed Regression for Data Inference in Wireless Sensor Networks.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; Honeine, P.; and Bermudez, J. C. M.\n\n\n \n\n\n\n In Proc. 44th Asilomar Conference on Signals, Systems and Computers (ASILOMAR), pages 451-455, Pacific Grove (CA), USA, 7 - 10 November 2010. \n \n\n\n\n
\n\n\n\n \n \n \"Non-negative link\n  \n \n \n \"Non-negative paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{10.wsn.nn,\n   author =  "Jie Chen and Cédric Richard and Paul Honeine and José C. M. Bermudez",\n   title =  "Non-negative Distributed Regression for Data Inference in Wireless Sensor Networks",\n   booktitle =  "Proc. 44th Asilomar Conference on Signals, Systems and Computers (ASILOMAR)",\n   address =  "Pacific Grove (CA), USA",\n   year  =  "2010",\n   month =  "7 - 10~" # nov,\n   pages={451-455}, \n   doi={10.1109/ACSSC.2010.5757599}, \n   ISSN={1058-6393},\n   url_link= "https://ieeexplore.ieee.org/document/5757599",\n   url_paper   =  "http://honeine.fr/paul/publi/10.wsn.nn.pdf",\n   acronym =  "Asilomar",\n   abstract={Wireless sensor networks are designed to perform inference on the environment that they are sensing. Due to the inherent physical characteristics of systems under investigation, non-negativity is a desired constraint that must be imposed on the system parameters in some real-life phenomena sensing tasks. In this paper, we propose a kernel-based machine learning strategy to deal with regression problems. Multiplicative update rules are derived in this context to ensure the non-negativity constraints to be satisfied. Considering the tight energy and bandwidth resources, a distributed algorithm which requires only communication between neighbors is presented. Synthetic data managed by heat diffusion equations are used to test the algorithms and illustrate their tracking capacity.}, \n   keywords={non-negativity, sparsity, wireless sensor networks, learning (artificial intelligence), operating system kernels, regression analysis, wireless sensor networks, non-negative distributed regression, data inference, wireless sensor networks, kernel-based machine learning strategy, distributed algorithm, synthetic data, heat diffusion equations, Kernel, Wireless sensor networks, Signal processing algorithms, Convergence, Inference algorithms, Cost function, Heating}, \n}\n
\n
\n\n\n
\n Wireless sensor networks are designed to perform inference on the environment that they are sensing. Due to the inherent physical characteristics of systems under investigation, non-negativity is a desired constraint that must be imposed on the system parameters in some real-life phenomena sensing tasks. In this paper, we propose a kernel-based machine learning strategy to deal with regression problems. Multiplicative update rules are derived in this context to ensure the non-negativity constraints to be satisfied. Considering the tight energy and bandwidth resources, a distributed algorithm which requires only communication between neighbors is presented. Synthetic data managed by heat diffusion equations are used to test the algorithms and illustrate their tracking capacity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Nonlinear feature extraction using kernel principal component analysis with non-negative pre-image.\n \n \n \n \n\n\n \n Kallas, M.; Honeine, P.; Richard, C.; Amoud, H.; and Francis, C.\n\n\n \n\n\n\n In Proc. 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 3642-3645, Buenos Aires, Argentina, 31 Aug. - 4 September 2010. \n \n\n\n\n
\n\n\n\n \n \n \"Nonlinear link\n  \n \n \n \"Nonlinear paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{10.embc.kpca,\n   author =  "Maya Kallas and Paul Honeine and Cédric Richard and Hassan Amoud and Clovis Francis",\n   title =  "Nonlinear feature extraction using kernel principal component analysis with non-negative pre-image",\n   booktitle =  "Proc. 32nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society",\n   address =  "Buenos Aires, Argentina",\n   year  =  "2010",\n   month =  "31 Aug. - 4~" # sep,\n   pages={3642-3645}, \n   doi={10.1109/IEMBS.2010.5627421}, \n   ISSN={1557-170X},\n   url_link= "https://ieeexplore.ieee.org/document/5627421",\n   url_paper   =  "http://honeine.fr/paul/publi/10.embc.kpca",\n   acronym =  "EMBC",\n   abstract={The inherent physical characteristics of many real-life phenomena, including biological and physiological aspects, require adapted nonlinear tools. Moreover, the additive nature in some situations involves solutions expressed as positive combinations of data. In this paper, we propose a nonlinear feature extraction method, with a non-negativity constraint. To this end, the kernel principal component analysis is considered to define the most relevant features in the reproducing kernel Hilbert space. These features are the nonlinear principal components with high-order correlations between input variables. A pre-image technique is required to get back to the input space. With a non-negative constraint, we show that one can solve the pre-image problem efficiently, using a simple iterative scheme. Furthermore, the constrained solution contributes to the stability of the algorithm. Experimental results on event-related potentials (ERP) illustrate the efficiency of the proposed method.}, \n   keywords={machine learning, pre-image problem, non-negativity, electroencephalography, feature extraction, Hilbert spaces, iterative methods, medical signal processing, principal component analysis, nonlinear feature extraction, kernel principal component analysis, kernel Hilbert space, pre-image technique, iterative scheme, event-related potentials, ERP, EEG, Feature extraction, Kernel, Principal component analysis, Electroencephalography, Optimization, Brain models, Kernel-PCA, pre-image problem, non-negativity, constraint, additive weight algorithm, Brain, Brain Mapping, Diagnosis, Computer-Assisted, Electroencephalography, Evoked Potentials, Humans, Nonlinear Dynamics, Pattern Recognition, Automated, Principal Component Analysis, Reproducibility of Results, Sensitivity and Specificity}, \n}\n
\n
\n\n\n
\n The inherent physical characteristics of many real-life phenomena, including biological and physiological aspects, require adapted nonlinear tools. Moreover, the additive nature in some situations involves solutions expressed as positive combinations of data. In this paper, we propose a nonlinear feature extraction method, with a non-negativity constraint. To this end, the kernel principal component analysis is considered to define the most relevant features in the reproducing kernel Hilbert space. These features are the nonlinear principal components with high-order correlations between input variables. A pre-image technique is required to get back to the input space. With a non-negativity constraint, we show that one can solve the pre-image problem efficiently, using a simple iterative scheme. Furthermore, the constrained solution contributes to the stability of the algorithm. Experimental results on event-related potentials (ERP) illustrate the efficiency of the proposed method.\n
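The abstract above solves the pre-image problem under a non-negativity constraint with a simple iterative scheme. As an illustration only (not the authors' exact algorithm), the sketch below runs a Mika-style fixed-point pre-image iteration for the Gaussian kernel and clips each iterate to the non-negative orthant; the function names and the clipping step are assumptions for this sketch.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # Pairwise Gaussian (RBF) kernel between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def nonneg_preimage(X, gamma, sigma, n_iter=100):
    """Fixed-point pre-image iteration for a feature-space point
    sum_i gamma_i phi(x_i), with each iterate clipped to the
    non-negative orthant (illustrative constraint handling)."""
    x = np.clip(X.T @ gamma / max(gamma.sum(), 1e-12), 0, None)  # init
    for _ in range(n_iter):
        w = gamma * gaussian_kernel(x[None, :], X, sigma).ravel()
        denom = w.sum()
        if abs(denom) < 1e-12:
            break
        x = np.clip(X.T @ w / denom, 0, None)  # enforce non-negativity
    return x
```

With `gamma` a one-hot vector selecting a training sample, the iteration reproduces that sample, which is a quick sanity check of the fixed point.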
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n System identification under non-negativity constraints.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; Honeine, P.; Lantéri, H.; and Theys, C.\n\n\n \n\n\n\n In Proc. 18th European Conference on Signal Processing (EUSIPCO), pages 1728 - 1732, Aalborg, Denmark, 23 - 27 August 2010. \n \n\n\n\n
\n\n\n\n \n \n \"System link\n  \n \n \n \"System paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{10.eusipco.ident,\n   author =  "Jie Chen and Cédric Richard and Paul Honeine and Henri Lantéri and Céline Theys",\n   title =  "System identification under non-negativity constraints",\n   booktitle =  "Proc. 18th European Conference on Signal Processing (EUSIPCO)",\n   address =  "Aalborg, Denmark",\n   year  =  "2010",\n   month =  "23 - 27~" # aug,\n   pages =  "1728 - 1732",\n   url_link= "https://ieeexplore.ieee.org/document/7096710",\n   url_paper   =  "http://honeine.fr/paul/publi/10.eusipco.ident.pdf",\n   acronym =  "EUSIPCO",\n   abstract={Dynamic system modeling plays a crucial role in the development of techniques for stationary and non-stationary signal processing. Due to the inherent physical characteristics of systems usually under investigation, non-negativity is a desired constraint that can be imposed on the parameters to estimate. In this paper, we propose a general method for system identification under non-negativity constraints. We derive additive and multiplicative weight update algorithms, based on (stochastic) gradient descent of mean-square error or Kullback-Leibler divergence. Experiments are conducted to validate the proposed approach.}, \n   keywords={non-negativity, adaptive filtering, gradient methods, identification, mean square error methods, signal processing, system identification, nonnegativity constraints, stationary signal processing, nonstationary signal processing, additive weight update algorithm, multiplicative weight update algorithm, mean-square error gradient descent, Kullback-Leibler divergence gradient descent, Mean square error methods, Signal processing algorithms, Additives, Image restoration, Mathematical model, Noise}, \n   ISSN={2219-5491}, \n}\n
\n
\n\n\n
\n Dynamic system modeling plays a crucial role in the development of techniques for stationary and non-stationary signal processing. Due to the inherent physical characteristics of systems usually under investigation, non-negativity is a desired constraint that can be imposed on the parameters to estimate. In this paper, we propose a general method for system identification under non-negativity constraints. We derive additive and multiplicative weight update algorithms, based on (stochastic) gradient descent of mean-square error or Kullback-Leibler divergence. Experiments are conducted to validate the proposed approach.\n
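The paper derives additive and multiplicative weight-update algorithms; as a minimal sketch of the additive flavor (not the authors' exact update rule), the code below performs stochastic gradient descent on the mean-square error and projects the weights onto the non-negative orthant after each step. The function name and parameter values are illustrative.

```python
import numpy as np

def nonneg_lms(X, d, mu=0.05, n_epochs=30):
    """LMS-style stochastic gradient descent on the mean-square error
    with the weight vector projected onto the non-negative orthant
    after every update (a minimal additive-update sketch)."""
    w = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for x, y in zip(X, d):
            e = y - x @ w                         # instantaneous error
            w = np.maximum(0.0, w + mu * e * x)   # gradient step + projection
    return w

# Identify a system with non-negative coefficients from noisy data.
rng = np.random.default_rng(0)
w_true = np.array([0.8, 0.0, 0.4])
X = rng.normal(size=(400, 3))
d = X @ w_true + 0.01 * rng.normal(size=400)
w_hat = nonneg_lms(X, d)
```

The projection keeps every iterate feasible, so the estimate respects the physical non-negativity constraint throughout adaptation.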
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A simple scheme for unmixing hyperspectral data based on the geometry of the n-dimensional simplex.\n \n \n \n \n\n\n \n Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), pages 2271-2274, Honolulu (Hawaii), USA, 25 - 30 July 2010. \n \n\n\n\n
\n\n\n\n \n \n \"A link\n  \n \n \n \"A paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{10.simplex,\n   author =  "Paul Honeine and Cédric Richard",\n   title =  "A simple scheme for unmixing hyperspectral data based on the geometry of the n-dimensional simplex",\n   booktitle =  "Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS)",\n   address =  "Honolulu (Hawaii), USA",\n   year  =  "2010",\n   month =  "25 - 30~" # jul,\n   pages={2271-2274}, \n   doi={10.1109/IGARSS.2010.5651456}, \n   ISSN={2153-6996},\n   url_link= "https://ieeexplore.ieee.org/document/5651456",\n   url_paper   =  "http://honeine.fr/paul/publi/10.simplex.pdf",\n   acronym =  "IGARSS",\n   abstract={In this paper, we study the problem of decomposing spectra in hyperspectral data into the sum of pure spectra, or endmembers. We propose to jointly extract the endmembers and estimate the corresponding fractions, or abundances. For this purpose, we show that these abundances can be easily computed using volume of simplices, from the same information used in the classical N-Findr algorithm. This results into a simple scheme for unmixing hyperspectral data, with low computational complexity. Experimental results show the efficiency of the proposed method.}, \n   keywords={communication complexity, computational geometry, image processing, unmixing hyperspectral data, N dimensional simplex, N Findr algorithm, computational complexity, image analysis, geometry, Hyperspectral imaging, Pixel, Estimation, Geometry, Materials}, \n}\n
\n
\n\n\n
\n In this paper, we study the problem of decomposing spectra in hyperspectral data into the sum of pure spectra, or endmembers. We propose to jointly extract the endmembers and estimate the corresponding fractions, or abundances. For this purpose, we show that these abundances can be easily computed using volumes of simplices, from the same information used in the classical N-Findr algorithm. This results in a simple scheme for unmixing hyperspectral data, with low computational complexity. Experimental results show the efficiency of the proposed method.\n
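The volume-based abundance computation can be illustrated directly: for endmembers spanning a simplex and a pixel inside it, the abundance of endmember i is the ratio of the volume of the simplex in which that endmember is replaced by the pixel to the volume of the full simplex (i.e., barycentric coordinates). A sketch under the assumptions that the pixel lies inside the simplex and the data have been reduced to N-1 dimensions:

```python
import numpy as np

def simplex_volume(vertices):
    # Unsigned volume of the simplex spanned by N points in R^{N-1},
    # via the determinant of edge vectors; the constant 1/(N-1)!
    # cancels in the volume ratios below.
    v0 = vertices[0]
    return abs(np.linalg.det(vertices[1:] - v0))

def abundances(endmembers, pixel):
    """Fractional abundances of `pixel` as volume ratios: replace each
    endmember in turn by the pixel and divide by the full volume."""
    full = simplex_volume(endmembers)
    a = np.empty(len(endmembers))
    for i in range(len(endmembers)):
        sub = endmembers.copy()
        sub[i] = pixel
        a[i] = simplex_volume(sub) / full
    return a
```

For a pixel that is a convex combination of the endmembers, the recovered ratios equal the mixing coefficients and sum to one.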
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Distributed learning with kernels in wireless sensor networks for physical phenomena modeling and tracking.\n \n \n \n \n\n\n \n Richard, C.; Honeine, P.; Snoussi, H.; Ferrari, A.; and Theys, C.\n\n\n \n\n\n\n In Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Honolulu (Hawaii), USA, 25 - 30 July 2010. \n \n\n\n\n
\n\n\n\n \n \n \"Distributed paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{10.wsn.physics,\n   author =  "Cédric Richard and Paul Honeine and Hichem Snoussi and André Ferrari and Céline Theys",\n   title =  "Distributed learning with kernels in wireless sensor networks for physical phenomena modeling and tracking",\n   booktitle =  "Proc. IEEE International Geoscience and Remote Sensing Symposium (IGARSS)",\n   address =  "Honolulu (Hawaii), USA",\n   year  =  "2010",\n   month =  "25 - 30~" # jul,\n   keywords  =  "machine learning, sparsity, adaptive filtering, wireless sensor networks",\n   url_paper   =  "http://honeine.fr/paul/publi/10.wsn.physics.pdf",\n   acronym =  "IGARSS",\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The angular kernel in machine learning for hyperspectral data classification.\n \n \n \n \n\n\n \n Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Proc. IEEE Workshop on Hyperspectral Image and Signal Processing : Evolution in Remote Sensing (WHISPERS), pages 1-4, Reykjavik, Iceland, 14 - 16 June 2010. \n \n\n\n\n
\n\n\n\n \n \n \"The link\n  \n \n \n \"The paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{10.angular,\n   author =  "Paul Honeine and Cédric Richard",\n   title =  "The angular kernel in machine learning for hyperspectral data classification",\n   booktitle =  "Proc. IEEE Workshop on Hyperspectral Image and Signal Processing : Evolution in Remote Sensing (WHISPERS)",\n   address =  "Reykjavik, Iceland",\n   year  =  "2010",\n   month =  "14 - 16~" # jun,\n   pages={1-4}, \n   doi={10.1109/WHISPERS.2010.5594908},\n   url_link= "https://ieeexplore.ieee.org/document/5594908",\n   url_paper   =  "http://honeine.fr/paul/publi/10.angular.pdf",\n   acronym =  "WHISPERS",\n   abstract={Support vector machines have been investigated with success for hyperspectral data classification. In this paper, we propose a new kernel to measure spectral similarity, called the angular kernel. We provide some of its properties, such as its invariance to illumination energy, as well as connection to previous work. Furthermore, we show that the performance of a classifier associated to the angular kernel is comparable to the Gaussian kernel, in the sense of universality. We derive a class of kernels based on the angular kernel, and study the performance on an urban classification task.}, \n   keywords={data handling, Gaussian processes, geophysical image processing, image classification, learning (artificial intelligence), support vector machines, angular kernel, machine learning, hyperspectral data classification, support vector machines, illumination energy, Gaussian kernel, urban classification task, hyperspectral images, Kernel, Support vector machines, Hyperspectral imaging, Machine learning, Spatial resolution, Hyperspectral data, spectral angle, SVM, reproducing kernel, machine learning}, \n  ISSN={2158-6268}, \n}\n
\n
\n\n\n
\n Support vector machines have been investigated with success for hyperspectral data classification. In this paper, we propose a new kernel to measure spectral similarity, called the angular kernel. We provide some of its properties, such as its invariance to illumination energy, as well as connections to previous work. Furthermore, we show that the performance of a classifier associated with the angular kernel is comparable to that of the Gaussian kernel, in the sense of universality. We derive a class of kernels based on the angular kernel, and study the performance on an urban classification task.\n
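The invariance to illumination energy follows because an angle-based kernel depends only on the direction of the spectra, not their magnitude. The exact kernel definition is given in the paper; the form below is a common spectral-angle construction, used here purely as an illustration of that invariance.

```python
import numpy as np

def angular_kernel(x, y):
    """A spectral-angle-based kernel (illustrative form, not necessarily
    the paper's exact definition): it depends only on the angle between
    the spectra, hence is invariant to per-pixel illumination scaling."""
    cos = x @ y / (np.linalg.norm(x) * np.linalg.norm(y))
    theta = np.arccos(np.clip(cos, -1.0, 1.0))  # spectral angle in [0, pi]
    return 1.0 - 2.0 * theta / np.pi
```

Rescaling either spectrum by a positive constant leaves the kernel value unchanged, and a spectrum compared with itself gives the maximal value 1.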
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Techniques d'apprentissage non-linéaires en ligne avec contraintes de positivité.\n \n \n \n \n\n\n \n Chen, J.; Richard, C.; Honeine, P.; Snoussi, H.; Lantéri, H.; and Theys, C.\n\n\n \n\n\n\n In Actes de la VI-ème Conférence Internationale Francophone d'Automatique (CIFA), Nancy, France, 2 - 4 June 2010. \n \n\n\n\n
\n\n\n\n \n \n \"Techniques paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{10.cifa.positiv,\n   author =  "Chen, Jie and Cédric Richard and Paul Honeine and Hichem Snoussi and Henri Lantéri and Céline Theys",\n   title =  "Techniques d'apprentissage non-linéaires en ligne avec contraintes de positivité",\n   booktitle =  "Actes de la VI-ème Conférence Internationale Francophone d'Automatique (CIFA)",\n   address =  "Nancy, France",\n   year  =  "2010",\n   month =  "2 - 4~" # jun,\n   url_paper   =  "http://honeine.fr/paul/publi/10.cifa.positiv.pdf",\n   acronym =  "CIFA",\n   abstract = "Cet article décrit une nouvelle classe d'algorithmes d'apprentissage non-linéaires en ligne avec contrainte de positivité sur la solution. Ceux-ci sont appliqués au problème d'identification distribuée d'un champ scalaire positif, par exemple de rayonnement thermique ou de concentration d'une espèce chimique, par un réseau de capteurs. La question du suivi de l'évolution de la grandeur physique surveillée au cours du temps est également considérée. Les algorithmes proposés sont testés sur des données synthétiques régies par des équations de diffusion. Ils démontrent une excellente capacité de suivi des évolutions du système, tout en affichant un coût calculatoire réduit.",\n   keywords  =  "machine learning, non-negativity, adaptive filtering, apprentissage, régression, non-linéaire, positivité, adaptatif, réseau de capteurs.",\n}\n
\n
\n\n\n
\n This paper describes a new class of online nonlinear learning algorithms with a positivity constraint on the solution. These algorithms are applied to the distributed identification of a positive scalar field, such as thermal radiation or the concentration of a chemical species, by a sensor network. Tracking the evolution of the monitored physical quantity over time is also considered. The proposed algorithms are tested on synthetic data governed by diffusion equations. They demonstrate an excellent ability to track the system's evolution, while keeping the computational cost low.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Statistical hypothesis testing with time-frequency surrogates to check signal stationarity.\n \n \n \n \n\n\n \n Richard, C.; Ferrari, A.; Amoud, H.; Honeine, P.; Flandrin, P.; and Borgnat, P.\n\n\n \n\n\n\n In Proc. 35th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 3666-3669, Dallas, Texas, 14 - 19 March 2010. \n \n\n\n\n
\n\n\n\n \n \n \"Statistical link\n  \n \n \n \"Statistical paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{10.icassp.surrogates,\n   author =  "Cédric Richard and André Ferrari and Hassan Amoud and Paul Honeine and Patrick Flandrin and Pierre Borgnat",\n   title =  "Statistical hypothesis testing with time-frequency surrogates to check signal stationarity",\n   booktitle =  "Proc. 35th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",\n   address =  "Dallas, Texas",\n   month =  "14 - 19~" # mar,\n   year  =  "2010",\n   pages={3666-3669}, \n   doi={10.1109/ICASSP.2010.5495887}, \n   ISSN={1520-6149},\n   acronym =  "ICASSP",\n   url_link=  "https://ieeexplore.ieee.org/document/5495887",\n   url_paper   =  "http://honeine.fr/paul/publi/10.icassp.surrogates.pdf",\n   abstract={An operational framework is developed for testing stationarity relatively to an observation scale. The proposed method makes use of a family of stationary surrogates for defining the null hypothesis of stationarity. As a further contribution to the field, we demonstrate the strict-sense stationarity of surrogate signals and we exploit this property to derive the asymptotic distributions of their spectrogram and power spectral density. A statistical hypothesis testing framework is then proposed to check signal stationarity. Finally, some results are shown on a typical model of signals that can be thought of as stationary or nonstationary, depending on the observation scale used.}, \n   keywords={probability, signal processing, time-frequency analysis, statistical hypothesis testing, time-frequency surrogates, signal stationarity, strict-sense stationarity, surrogate signals, spectrogram, power spectral density, Testing, Time frequency analysis, Spectrogram, Probability density function, Tellurium, Signal analysis, Signal processing, Feature extraction, Data mining, Machine learning, Time-frequency analysis, stationarity test, surrogate, spectrogram, probability density function}, \n}\n
\n
\n\n\n
\n An operational framework is developed for testing stationarity relative to an observation scale. The proposed method makes use of a family of stationary surrogates for defining the null hypothesis of stationarity. As a further contribution to the field, we demonstrate the strict-sense stationarity of surrogate signals and we exploit this property to derive the asymptotic distributions of their spectrogram and power spectral density. A statistical hypothesis testing framework is then proposed to check signal stationarity. Finally, some results are shown on a typical model of signals that can be thought of as stationary or nonstationary, depending on the observation scale used.\n
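The null hypothesis of stationarity is built from stationarized surrogates. A standard way to generate such surrogates (shown here as a generic sketch; the paper then studies time-frequency statistics of the surrogates) is to keep the magnitude spectrum of the signal and randomize the Fourier phases, which preserves the power spectral density while destroying the temporal structure:

```python
import numpy as np

def surrogate(signal, rng):
    """Phase-randomizing surrogate: keep the spectrum magnitude, draw
    uniform random phases, invert. The surrogate shares the signal's
    power spectral density but is stationarized."""
    n = len(signal)
    mag = np.abs(np.fft.rfft(signal))
    phases = rng.uniform(0, 2 * np.pi, size=mag.shape)
    phases[0] = 0.0            # keep the DC bin real
    if n % 2 == 0:
        phases[-1] = 0.0       # the Nyquist bin must stay real
    return np.fft.irfft(mag * np.exp(1j * phases), n=n)
```

Applied to a chirp-like signal, the surrogate has the same magnitude spectrum as the original while its nonstationary time-frequency signature is destroyed.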
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Constrained Pattern Recognition with Nonlinear Principal Component Analysis.\n \n \n \n\n\n \n Kallas, M.; Honeine, P.; Amoud, H.; Francis, C.; and Richard, C.\n\n\n \n\n\n\n In Journées Scientifiques à l'Ecole Doctorale de Sciences et Technologie, Liban, 8 - 9 December 2010. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{10.EDST,\n   author =  "Maya Kallas and Paul Honeine and Hassan Amoud and Clovis Francis and Cédric Richard",\n   title =  "Constrained Pattern Recognition with Nonlinear Principal Component Analysis",\n   booktitle =  "Journées Scientifiques à l'Ecole Doctorale de Sciences et Technologie",\n   address =  "Liban",\n   year  =  "2010",\n   month =  "8 - 9~" # dec,\n   keywords  =  "machine learning, pre-image problem",\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n StaRAC : Stationnarité Relative et Approches Connexes.\n \n \n \n\n\n \n Flandrin, P.; Borgnat, P.; Moghtaderi, A.; Richard, C.; Honeine, P.; Amoud, H.; Amblard, P.; and Ramirez-Cobo, P.\n\n\n \n\n\n\n In Grand Colloque STIC 2010, Paris - Cité des sciences et de l'industrie, France, 5 - 7 January 2010. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{10.StaRAC,\n   author =  "Patrick Flandrin and Pierre Borgnat and Azadeh Moghtaderi and Cédric Richard and Paul Honeine and Hassan Amoud and Pierre-Olivier Amblard and Pepa Ramirez-Cobo",\n   title =  "StaRAC : Stationnarité Relative et Approches Connexes",\n   booktitle =  "Grand Colloque {STIC} 2010",\n   address =  "Paris - Cité des sciences et de l'industrie, France",\n   year  =  "2010",\n   month =  "5 - 7~" # jan,\n   keywords  =  "non-stationarity",\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n VigiRes'Eau : Surveillance en temps réel de la qualité de l'eau potable d'un réseau de distribution en vue de la détection d'intrusions.\n \n \n \n \n\n\n \n Fillatre, L.; Honeine, P.; Nikiforov, I.; Richard, C.; Snoussi, H.; and Azzaoui, N.\n\n\n \n\n\n\n In Workshop Interdisciplinaire sur la Sécurité Globale (WISG'10), (ANR - CSOSG), pages 1-7, Troyes, France, 26 - 27 January 2010. \n \n\n\n\n
\n\n\n\n \n \n \"VigiRes'Eau paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{10.wisg.vigireseau,\n   author =  "Lionel Fillatre and Paul Honeine and Igor Nikiforov and Cédric Richard and Hichem Snoussi and Nourddine Azzaoui",\n   title =  "VigiRes'Eau : Surveillance en temps réel de la qualité de l'eau potable d'un réseau de distribution en vue de la détection d'intrusions",\n   booktitle =  "Workshop Interdisciplinaire sur la Sécurité Globale (WISG'10), (ANR - CSOSG)",\n   address =  "Troyes, France",\n   year  =  "2010",\n   month =  "26 - 27~" # jan,\n   pages =  "1-7",\n   url_paper   =  "http://honeine.fr/paul/publi/10.vigireseau.pdf",\n   acronym =  "WISG",\n   keywords  =  "non-stationarity, adaptive filtering, machine learning",\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n System and method for locating a target using a transceiver array (FR: Système et procédé de localisation de cible par un réseau d'émetteurs/récepteurs) (DE: System und verfahren zur ortung eines ziels anhand einer sende-/empfangsanordnung).\n \n \n \n \n\n\n \n Snoussi, H.; Richard, C.; and Honeine, P.\n\n\n \n\n\n\n 2010.\n \n\n\n\n
\n\n\n\n \n \n \"System link\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@patent{10.patent,\n  author  = {Hichem Snoussi and Cédric Richard and Paul Honeine},\n  title  = {System and method for locating a target using a transceiver array (FR: Système et procédé de localisation de cible par un réseau d'émetteurs/récepteurs) (DE: System und verfahren zur ortung eines ziels anhand einer sende-/empfangsanordnung)},\n  year   = {2010},\n  howpublished  = {WO/2010/119230, EP2419754 (Europe 2012), US9285456 (USA granted in 2016)},\n  url_link  = "http://www.wipo.int/pctdb/en/wo.jsp?WO=2010119230",\n  acronym =  "patent",\n  keywords  =  "wireless sensor networks",\n  abstract = "The invention relates to a system and a method for locating at least one target using au array of transceivers or sensors, in which at least a portion has a known geographic location, each comprising data processing means implementing at least one algorithm for locating a target, means for transmitting/receiving a signal that decreases with the distance, the sensor array covering at least one geographic area or area, characterized in that they implement for each instant (t) an exchange of data or similarity data between the sensors and a leading sensor, and a distribution determination of the probability of the location of the target using at least one reession algorithm on the basis of the similarity data.",\n}%"howpublished" = "number" in patent !\n\n% Reports and thesis\n\n
\n
\n\n\n
\n The invention relates to a system and a method for locating at least one target using an array of transceivers or sensors, in which at least a portion has a known geographic location, each comprising data processing means implementing at least one algorithm for locating a target, means for transmitting/receiving a signal that decreases with the distance, the sensor array covering at least one geographic area, characterized in that they implement for each instant (t) an exchange of data or similarity data between the sensors and a leading sensor, and a distribution determination of the probability of the location of the target using at least one regression algorithm on the basis of the similarity data.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2009\n \n \n (7)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Online prediction of time series data with kernels.\n \n \n \n \n\n\n \n Richard, C.; Bermudez, J. C. M.; and Honeine, P.\n\n\n \n\n\n\n IEEE Transactions on Signal Processing, 57(3): 1058 - 1067. March 2009.\n \n\n\n\n
\n\n\n\n \n \n \"Online link\n  \n \n \n \"Online paper\n  \n \n \n \"Online code\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 9 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{Ric09.tsp,\n   author =  "Cédric Richard and José C. M. Bermudez and Paul Honeine",\n   title =  "Online prediction of time series data with kernels",\n   journal =  "IEEE Transactions on Signal Processing",\n   year  =  "2009",\n   volume =  "57",\n   number =  "3",\n   pages =  "1058 - 1067",\n   month =  mar,\n   url_link= "https://ieeexplore.ieee.org/document/4685707",\n   doi="10.1109/TSP.2008.2009895", \n   url_paper  =  "http://www.honeine.fr/paul/publi/09.tsp.cr_jcb_ph.pdf",\n   url_code  =  "http://www.honeine.fr/paul/publi/09.tsp.cr_jcb_ph.zip",\n   keywords =  "adaptive filtering, machine learning, sparsity, adaptive filters, learning (artificial intelligence), mathematics computing, pattern recognition, prediction theory, regression analysis, time series, time series, kernel-based algorithms, machine learning, nonlinear problem, pattern recognition, density estimation, model reduction criterion, dictionaries, sparse approximation problems, kernel-based affine projection algorithm, Kernel, Filters, Nonlinear systems, Signal processing algorithms, Neural networks, Machine learning algorithms, Machine learning, Pattern recognition, Coherence, Least squares approximation, Adaptive filters, machine learning, nonlinear systems, pattern recognition",\n   abstract={Kernel-based algorithms have been a topic of considerable interest in the machine learning community over the last ten years. Their attractiveness resides in their elegant treatment of nonlinear problems. They have been successfully applied to pattern recognition, regression and density estimation. A common characteristic of kernel-based methods is that they deal with kernel expansions whose number of terms equals the number of input data, making them unsuitable for online applications. Recently, several solutions have been proposed to circumvent this computational burden in time series prediction problems. Nevertheless, most of them require excessively elaborate and costly operations. 
In this paper, we investigate a new model reduction criterion that makes computationally demanding sparsification procedures unnecessary. The increase in the number of variables is controlled by the coherence parameter, a fundamental quantity that characterizes the behavior of dictionaries in sparse approximation problems. We incorporate the coherence criterion into a new kernel-based affine projection algorithm for time series prediction. We also derive the kernel-based normalized LMS algorithm as a particular case. Finally, experiments are conducted to compare our approach to existing methods.}, \n}\n
\n
\n\n\n
\n Kernel-based algorithms have been a topic of considerable interest in the machine learning community over the last ten years. Their attractiveness resides in their elegant treatment of nonlinear problems. They have been successfully applied to pattern recognition, regression and density estimation. A common characteristic of kernel-based methods is that they deal with kernel expansions whose number of terms equals the number of input data, making them unsuitable for online applications. Recently, several solutions have been proposed to circumvent this computational burden in time series prediction problems. Nevertheless, most of them require excessively elaborate and costly operations. In this paper, we investigate a new model reduction criterion that makes computationally demanding sparsification procedures unnecessary. The increase in the number of variables is controlled by the coherence parameter, a fundamental quantity that characterizes the behavior of dictionaries in sparse approximation problems. We incorporate the coherence criterion into a new kernel-based affine projection algorithm for time series prediction. We also derive the kernel-based normalized LMS algorithm as a particular case. Finally, experiments are conducted to compare our approach to existing methods.\n
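The coherence criterion admits a compact sketch: a candidate input joins the kernel dictionary only if its largest kernel value against the current atoms does not exceed a threshold μ0, which keeps the expansion bounded; the coefficients are then adapted by a kernel normalized-LMS step. The code below is a minimal illustrative reading of this idea, with assumed function names and parameter values, not the paper's exact implementation.

```python
import numpy as np

def gauss(u, v, sigma=0.5):
    # Gaussian kernel between two input vectors.
    return np.exp(-np.sum((u - v) ** 2) / (2.0 * sigma ** 2))

def knlms_predict(series, order=3, mu0=0.5, eta=0.1, eps=1e-3, sigma=0.5):
    """One-step-ahead prediction with a kernel NLMS whose dictionary is
    sparsified by the coherence criterion: an input is added as an atom
    only if max_j k(u, d_j) <= mu0."""
    D = [np.array(series[:order], dtype=float)]  # dictionary of atoms
    alpha = np.zeros(1)                          # expansion coefficients
    preds = []
    for t in range(order, len(series)):
        u, y = np.asarray(series[t - order:t], dtype=float), series[t]
        k = np.array([gauss(u, d, sigma) for d in D])
        preds.append(k @ alpha)
        if k.max() <= mu0:                 # coherence criterion: add atom
            D.append(u.copy())
            alpha = np.append(alpha, 0.0)
            k = np.append(k, 1.0)          # gauss(u, u) = 1
        e = y - k @ alpha                  # a-posteriori-style error
        alpha = alpha + eta * e * k / (eps + k @ k)  # normalized update
    return np.array(preds), len(D)
```

On a noisy sinusoid, the dictionary stays small (bounded by the coherence criterion) while the prediction error decreases as adaptation proceeds.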
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sur la caractérisation de non-stationnarités par la méthode des substituts.\n \n \n \n \n\n\n \n Amoud, H.; Richard, C.; Honeine, P.; Flandrin, P.; and Borgnat, P.\n\n\n \n\n\n\n In Actes du 22-ème Colloque GRETSI sur le Traitement du Signal et des Images, Dijon, France, September 2009. \n \n\n\n\n
\n\n\n\n \n \n \"Sur paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{09.gretsi.surrogates,\n   author =  "Hassan Amoud and Cédric Richard and Paul Honeine and Patrick Flandrin and Pierre Borgnat",\n   title =  "Sur la caractérisation de non-stationnarités par la méthode des substituts",\n   booktitle =  "Actes du 22-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Dijon, France",\n   year  =  "2009",\n   month =  sep,\n   keywords  =  "non-stationarity, stationarity test, surrogates",\n   acronym =  "GRETSI'09",\n   url_paper   =  "http://honeine.fr/paul/publi/09.gretsi.surrogates.pdf",\n   abstract = "The surrogate data technique generates a family of stationarized surrogate signals from the signal under investigation, making it possible to derive a statistical test that accepts or rejects the null hypothesis of stationarity. In this paper, we examine how and to what extent this approach also allows us to characterize the type of non-stationarity of the signal under investigation. Beyond this general approach, we are interested in a class of signals modulated jointly in amplitude and frequency, all leading to the same surrogates. This approach provides the necessary framework for characterizing different forms of non-stationarity. Experimental results demonstrate the potential of the surrogate data technique to characterize different forms of non-stationarity, beyond its original role of deriving stationarity tests.",\n   x-abstract_fr ="La méthode des substituts consiste à générer des références stationnarisées d'un signal qui permettent, le cas échéant, de rejeter l'hypothèse nulle de stationnarité au terme d'un test statistique. On étudie dans quelle mesure cette approche permet de caractériser le type de non-stationnarité dont le signal testé ferait l'objet. Au-delà de l'approche générale elle-même, on s'intéresse à une classe de signaux mêlant modulations d'amplitude et de fréquence à des degrés respectifs choisis, et conduisant tous aux mêmes substituts. 
Ce socle commun offre le cadre nécessaire à la caractérisation de différentes formes de non-stationnarité. Les tests effectués montrent le potentiel de la méthode des substituts dans la caractérisation des formes de non-stationnarité, au-delà de son rôle originel dans l'élaboration de tests de stationnarité.",\n}\n
\n
\n\n\n
\n The surrogate data technique generates a family of stationarized surrogate signals from the signal under investigation, making it possible to derive a statistical test that accepts or rejects the null hypothesis of stationarity. In this paper, we examine how and to what extent this approach also allows us to characterize the type of non-stationarity of the signal under investigation. Beyond this general approach, we are interested in a class of signals modulated jointly in amplitude and frequency, all leading to the same surrogates. This approach provides the necessary framework for characterizing different forms of non-stationarity. Experimental results demonstrate the potential of the surrogate data technique to characterize different forms of non-stationarity, beyond its original role of deriving stationarity tests.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Auto-localisation dans les réseaux de capteurs sans fil par régression de matrices de Gram.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; and Snoussi, H.\n\n\n \n\n\n\n In Actes du 22-ème Colloque GRETSI sur le Traitement du Signal et des Images, Dijon, France, September 2009. \n \n\n\n\n
\n\n\n\n \n \n \"Auto-localisation paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon09.gretsi.loc,\n   author =  "Paul Honeine and Cédric Richard and Hichem Snoussi",\n   title =  "Auto-localisation dans les réseaux de capteurs sans fil par régression de matrices de Gram",\n   booktitle =  "Actes du 22-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Dijon, France",\n   year  =  "2009",\n   month =  sep,\n   keywords  =  "machine learning, wireless sensor networks",\n   url_paper   =  "http://honeine.fr/paul/publi/09.gretsi.loc.pdf",\n   acronym =  "GRETSI'09",\n   abstract ="In the absence of position information on the nodes of a wireless sensor network within the environment where they are deployed, the collected data may be of limited use. In this paper, we address the problem of self-localization of each of these nodes from inter-sensor measurements such as RSSI, and a few sensors called anchors whose position is known. Within the framework of reproducing kernel Hilbert spaces, we apply a matrix regression technique to two Gram matrices, one corresponding to the partially known relative positions of the sensors, the other gathering RSSI measurements between sensors. Two methods are derived to solve the problem, a centralized algorithm and a distributed one. The non-parametric nature of the proposed approach makes it a particularly flexible technique for sensor networks, as shown by experiments.",\n   x-abstract_fr ="En l'absence d'information sur la position des nœuds d'un réseau de capteurs sans fil, au sein de l'environnement où ils sont déployés, les données récoltées peuvent s'avérer d'une utilité limitée. Le problème traité concerne l'auto-localisation de chacun de ces nœuds à partir de mesures de portée inter-capteurs telles que les RSSI, et de quelques capteurs dits ancres dont la position est connue. 
On exploite une technique de régression matricielle reposant sur le formalisme des espaces de Hilbert à noyau reproduisant et impliquant deux matrices de Gram, l'une partiellement connue correspondant aux positions relatives des capteurs, l'autre regroupant les mesures de portée inter-capteurs. On propose deux méthodes de résolution du problème, l'une en mode centralisée et l'autre distribuée. Le caractère non-paramétrique de l'approche proposée lui confère une flexibilité particulièrement appréciable dans le cadre des réseaux de capteurs, comme illustré par des expérimentations.",\n}\n
\n
\n\n\n
\n In the absence of position information on the nodes of a wireless sensor network within the environment where they are deployed, the collected data may be of limited use. In this paper, we address the problem of self-localization of each of these nodes from inter-sensor measurements such as RSSI, and a few sensors called anchors whose position is known. Within the framework of reproducing kernel Hilbert spaces, we apply a matrix regression technique to two Gram matrices, one corresponding to the partially known relative positions of the sensors, the other gathering RSSI measurements between sensors. Two methods are derived to solve the problem, a centralized algorithm and a distributed one. The non-parametric nature of the proposed approach makes it a particularly flexible technique for sensor networks, as shown by experiments.\n
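The Gram-matrix regression idea summarized above can be sketched numerically. The toy below is not the paper's centralized or distributed algorithm; it is a minimal kernel-ridge analogue in which a Gaussian similarity stands in for RSSI readings, and all names and parameter values (`rbf`, `anchors`, `node`, `lam`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def rbf(A, B, sigma=1.0):
    # Gaussian similarity between row vectors, a stand-in for RSSI ranging
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# 30 anchors with known 2-D positions, one node at an unknown position
anchors = rng.uniform(-1, 1, (30, 2))
node = np.array([0.3, -0.2])

K = rbf(anchors, anchors)             # anchor-anchor similarity matrix
k_u = rbf(node[None, :], anchors)[0]  # node-anchor similarities

# Matrix regression on the anchors: map similarity rows to the Gram
# matrix of positions G = anchors @ anchors.T
G = anchors @ anchors.T
lam = 1e-3                            # ridge term for numerical stability
W = np.linalg.solve(K + lam * np.eye(len(K)), G)

# Predict the node's inner products with the anchors, then invert linearly
g_u = k_u @ W                         # estimated <node, anchor_i> values
est, *_ = np.linalg.lstsq(anchors, g_u, rcond=None)
print(est)  # rough estimate of the node position
```

Because the map `W` is learned from anchors alone, it can be reused for any number of unknown nodes, which is what makes a distributed variant plausible.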
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Apprentissage non-linéaire en ligne dans les réseaux de capteurs sans fil.\n \n \n \n \n\n\n \n Essoloh, M.; Honeine, P.; Richard, C.; and Snoussi, H.\n\n\n \n\n\n\n In Actes du 22-ème Colloque GRETSI sur le Traitement du Signal et des Images, Dijon, France, September 2009. \n \n\n\n\n
\n\n\n\n \n \n \"Apprentissage paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon09.gretsi.wsn,\n   author =  "Mehdi Essoloh and Paul Honeine and Cédric Richard and Hichem Snoussi",\n   title =  "Apprentissage non-linéaire en ligne dans les réseaux de capteurs sans fil",\n   booktitle =  "Actes du 22-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Dijon, France",\n   year  =  "2009",\n   month =  sep,\n   keywords  =  "machine learning, wireless sensor networks, sparsity, adaptive filtering",\n   url_paper   =  "http://honeine.fr/paul/publi/09.gretsi.wsn.pdf",\n   acronym =  "GRETSI'09",\n   abstract = "This paper deals with nonlinear online learning strategies in wireless sensor networks. The learning problem is solved in a reproducing kernel Hilbert space. Functional estimation is performed distributively using local measurements collected by sensors. Each node communicates with all its neighbors as dictated by the incremental or diffusion modes of cooperation. The algorithms are validated with synthetic data governed by the heat conduction equation. They provide accurate tracking results with low computational cost.",\n   x-abstract_fr ="Cet article présente plusieurs stratégies d'apprentissage en ligne d'une fonctionnelle non-linéaire dans un réseau de capteurs sans fil. Le problème d'apprentissage est défini dans le cadre des espaces de Hilbert à noyau reproduisant. L'estimation de la fonctionnelle y est pratiquée de manière distribuée sur la base des mesures locales collectées en chacun des capteurs, selon un mode coopératif de type incrémental ou par diffusion de l'information entre les nœuds du réseau. Les approches proposées sont testées sur des données synthétiques de diffusion de chaleur. Elles démontrent une excellente capacité de suivi des évolutions du système, tout en affichant des coûts calculatoires réduits.",\n}\n
\n
\n\n\n
\n This paper deals with nonlinear online learning strategies in wireless sensor networks. The learning problem is solved in a reproducing kernel Hilbert space. Functional estimation is performed in a distributed manner using local measurements collected by the sensors. Each node communicates with all its neighbors as dictated by the incremental or diffusion modes of cooperation. The algorithms are validated with synthetic data governed by the heat conduction equation. They provide accurate tracking results with low computational cost.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Solving the pre-image problem in kernel machines: a direct method.\n \n \n \n \n\n\n \n Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Proc. 19th IEEE workshop on Machine Learning for Signal Processing (MLSP), pages 1-6, Grenoble, France, September 2009. \n - best paper award -\n\n\n\n
\n\n\n\n \n \n \"Solving link\n  \n \n \n \"Solving paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 4 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon09.mlsp,\n   author  = "Paul Honeine and Cédric Richard",\n   title  = "Solving the pre-image problem in kernel machines: a direct method",\n   booktitle  = "Proc. 19th IEEE workshop on Machine Learning for Signal Processing (MLSP)",\n   year  = "2009",\n   month  = sep,\n   address  = "Grenoble, France",\n   note  =  "- best paper award -",\n   pages={1-6}, \n   doi={10.1109/MLSP.2009.5306204},\n   keywords  =  "machine learning, pre-image problem, kernel machines, kernel-PCA, kernel matrix regression, reproducing kernel Hilbert space, principal component analysis, image denoising, multidimensional scaling, statistical learning",\n   url_link= "https://ieeexplore.ieee.org/document/5306204",\n   url_paper   =  "http://honeine.fr/paul/publi/09.mlsp.pdf",\n   acronym =  "MLSP",\n   abstract={In this paper, we consider the pre-image problem in kernel machines, such as denoising with kernel-PCA. For a given reproducing kernel Hilbert space (RKHS), by solving the pre-image problem one seeks a pattern whose image in the RKHS is approximately a given feature. Traditional techniques include an iterative technique (Mika et al.) and a multidimensional scaling (MDS) approach (Kwok et al.). We propose a new technique to learn the pre-image. In the RKHS, we construct a basis having an isometry with the input space, with respect to the training data. Representing any feature in this basis then gives us information regarding its pre-image in the input space. We show that computing the pre-image can be done directly from the kernel values, without having to compute distances in any of the spaces as with the MDS approach. 
Simulation results illustrate the relevance of the proposed method in comparison with these techniques.}, \n   ISSN={1551-2541},    \n}\n
\n
\n\n\n
\n In this paper, we consider the pre-image problem in kernel machines, such as denoising with kernel-PCA. For a given reproducing kernel Hilbert space (RKHS), by solving the pre-image problem one seeks a pattern whose image in the RKHS is approximately a given feature. Traditional techniques include an iterative technique (Mika et al.) and a multidimensional scaling (MDS) approach (Kwok et al.). We propose a new technique to learn the pre-image. In the RKHS, we construct a basis having an isometry with the input space, with respect to the training data. Representing any feature in this basis then gives us information regarding its pre-image in the input space. We show that computing the pre-image can be done directly from the kernel values, without having to compute distances in any of the spaces as with the MDS approach. Simulation results illustrate the relevance of the proposed method in comparison with these techniques.\n
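For orientation, the classical fixed-point scheme of Mika et al. cited in this abstract can be sketched in a few lines. This is the baseline the paper improves on, not the direct method itself; the data and parameters below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

# Training data: noisy samples on a 2-D circle
t = rng.uniform(0, 2 * np.pi, 100)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(100, 2))

# RKHS element to invert: the empirical kernel mean,
# psi = sum_i alpha_i phi(x_i) with uniform coefficients alpha_i
alpha = np.full(len(X), 1.0 / len(X))

# Fixed-point iteration (Mika et al.) for the Gaussian kernel:
# x <- sum_i w_i x_i with w_i proportional to alpha_i * k(x, x_i)
x = X.mean(0)  # initial guess
for _ in range(50):
    w = alpha * gauss_kernel(x[None, :], X).ravel()
    x = (w[:, None] * X).sum(0) / w.sum()

print(x)  # approximate pre-image of the kernel mean
```

The paper's contribution is to avoid such iterations (and the MDS distance computations) by reading the pre-image off the kernel values directly.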
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Time-Frequency Learning Machines For Nonstationarity Detection Using Surrogates.\n \n \n \n \n\n\n \n Amoud, H.; Honeine, P.; Richard, C.; Borgnat, P.; and Flandrin, P.\n\n\n \n\n\n\n In Proc. IEEE workshop on Statistical Signal Processing (SSP), pages 565-568, Cardiff (Wales), UK, 31 August–3 September 2009. \n \n\n\n\n
\n\n\n\n \n \n \"Time-Frequency link\n  \n \n \n \"Time-Frequency paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon09.ssp,\n   author =  "Hassan Amoud and Paul Honeine and Cédric Richard and Pierre Borgnat and Patrick Flandrin",\n   title =  "Time-Frequency Learning Machines For Nonstationarity Detection Using Surrogates",\n   booktitle =  "Proc. IEEE workshop on Statistical Signal Processing (SSP)",\n   address =  "Cardiff (Wales), UK",\n   year  =  "2009",\n   month =  "31~" # aug # "--" # "3~" # sep,\n   pages={565-568}, \n   doi={10.1109/SSP.2009.5278514},\n   url_link= "https://ieeexplore.ieee.org/document/5278514",\n   url_paper   =  "http://honeine.fr/paul/publi/09.ssp.pdf",\n   acronym =  "SSP",\n   abstract={An operational framework has recently been developed for testing the stationarity of any signal relative to an observation scale. The originality is to extract time-frequency features from a set of stationarized surrogate signals, and to use them for defining the null hypothesis of stationarity. Our paper is a further contribution that explores a general framework embedding techniques from machine learning and time-frequency analysis, called time-frequency learning machines. Based on one-class support vector machines, our approach uses entire time-frequency representations and does not require arbitrary feature extraction. 
Its relevance is illustrated by simulation results, with spherical multidimensional scaling used to map the data to a visible 3-D space.}, \n   keywords={time-frequency analysis, stationarity test, kernel machines, one-class classification, surrogates, non-stationarity, machine learning, signal detection, support vector machines, nonstationarity detection, stationarized surrogate signals, feature extraction, multidimensional scaling}, \n   ISSN={2373-0803},\n}\n
\n
\n\n\n
\n An operational framework has recently been developed for testing the stationarity of any signal relative to an observation scale. The originality is to extract time-frequency features from a set of stationarized surrogate signals, and to use them for defining the null hypothesis of stationarity. Our paper is a further contribution that explores a general framework embedding techniques from machine learning and time-frequency analysis, called time-frequency learning machines. Based on one-class support vector machines, our approach uses entire time-frequency representations and does not require arbitrary feature extraction. Its relevance is illustrated by simulation results, with spherical multidimensional scaling used to map the data to a visible 3-D space.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Functional estimation in Hilbert space for distributed learning in wireless sensor networks.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; Bermudez, J. C. M.; Snoussi, H.; Essoloh, M.; and Vincent, F.\n\n\n \n\n\n\n In Proc. 34th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 2861-2864, Taipei, Taiwan, April 2009. \n \n\n\n\n
\n\n\n\n \n \n \"Functional link\n  \n \n \n \"Functional paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon09.icassp,\n   author =  "Paul Honeine and Cédric Richard and José C. M. Bermudez and Hichem Snoussi and Mehdi Essoloh and François Vincent",\n   title =  "Functional estimation in Hilbert space for distributed learning in wireless sensor networks",\n   booktitle =  "Proc. 34th IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",\n   address =  "Taipei, Taiwan",\n   month = apr,\n   year  =  "2009",\n   pages={2861-2864}, \n   doi={10.1109/ICASSP.2009.4960220}, \n   ISSN={1520-6149},\n   keywords  =  "machine learning, sparsity, adaptive filtering, wireless sensor networks, distributed algorithms, Hilbert spaces, intelligent sensors, nonlinear systems, functional estimation, distributed learning, adaptive estimation",\n   url_link= "https://ieeexplore.ieee.org/document/4960220",\n   url_paper   =  "http://honeine.fr/paul/publi/09.icassp.pdf",\n   acronym =  "ICASSP",\n   abstract={In this paper, we propose a distributed learning strategy in wireless sensor networks. Taking advantage of recent developments in kernel-based machine learning, we consider a new sparsification criterion for online learning. As opposed to previously derived criteria, it is based on the estimated error and is therefore well suited for tracking the evolution of systems over time. We also derive a gradient descent algorithm, and we demonstrate its relevance for estimating the dynamic evolution of temperature in a given region.}, \n}\n
\n
\n\n\n
\n In this paper, we propose a distributed learning strategy in wireless sensor networks. Taking advantage of recent developments in kernel-based machine learning, we consider a new sparsification criterion for online learning. As opposed to previously derived criteria, it is based on the estimated error and is therefore well suited for tracking the evolution of systems over time. We also derive a gradient descent algorithm, and we demonstrate its relevance for estimating the dynamic evolution of temperature in a given region.\n
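The error-based sparsification criterion described above can be illustrated with a single-node toy, with no network involved: a kernel LMS filter that adds a sample to its dictionary only when the instantaneous prediction error is large. This is a hedged sketch under assumed parameter values (`eta`, `err_thresh`, `sigma`), not the paper's algorithm.

```python
import numpy as np

def gauss(u, v, sigma=0.5):
    # Gaussian kernel between two input vectors
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma**2))

def klms_sparse(stream, eta=0.5, err_thresh=0.1, sigma=0.5):
    """Kernel LMS with an error-based sparsification rule: a sample joins
    the dictionary only when the instantaneous prediction error is large."""
    D, a = [], []                                # dictionary centers, weights
    for x, y in stream:
        k = np.array([gauss(x, c, sigma) for c in D])
        y_hat = float(k @ np.array(a)) if D else 0.0
        e = y - y_hat
        if not D or abs(e) > err_thresh:
            D.append(x)                          # grow the model (novel input)
            a.append(eta * e)
        else:
            a = list(np.array(a) + eta * e * k)  # gradient step on weights
    return D, np.array(a)

# Toy stream: learn f(x) = sin(3x) online from 300 noise-free samples
rng = np.random.default_rng(1)
xs = rng.uniform(-1, 1, 300)
stream = [(np.array([x]), np.sin(3 * x)) for x in xs]
D, a = klms_sparse(stream)
print(len(D), "dictionary elements retained out of 300 samples")
```

In the distributed setting of the paper, the dictionary and weights would be what neighboring sensor nodes exchange and update.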
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2008\n \n \n (7)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Distribution temps-fréquence à paramétrisation radialement Gaussienne optimisée pour la classification.\n \n \n \n \n\n\n \n Honeine, P.; and Richard, C.\n\n\n \n\n\n\n Traitement du signal. 2008.\n (invited paper)\n\n\n\n
\n\n\n\n \n \n \"Distribution link\n  \n \n \n \"Distribution paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{Hon08.ts,\n   author =  "Paul Honeine and Cédric Richard",\n   title =  "Distribution temps-fréquence à paramétrisation radialement Gaussienne optimisée pour la classification",\n   journal =  "Traitement du signal",\n   year  =  "2008",\n   number = "6",\n   publisher = "GRETSI, Saint Martin d'Hères, France",\n   note  =  "(invited paper)",\n   url_link= "http://hdl.handle.net/2042/28620",\n   url_paper   =  "http://www.honeine.fr/paul/publi/08.ts.ph_cr.pdf",\n   keywords  =  "non-stationarity, time-frequency analysis, classification, radially Gaussian kernel, kernel-target alignment, machine learning, analyse temps-fréquence, noyau radialement Gaussien, critère d'alignement, reconnaissance des formes",\n   abstract = "Cet article traite de l'optimisation des distributions temps-fréquence pour la résolution de problèmes de classification de signaux. On s'intéresse en particulier à la distribution à fonction de paramétrisation radialement gaussienne, que l'on ajuste par optimisation de l'alignement noyau-cible. Initialement développé pour la sélection de noyau reproduisant en Machine Learning, ce critère présente l'intérêt de ne nécessiter aucun cycle d'apprentissage. On montre que l'on peut obtenir la fonction de paramétrisation radialement gaussienne maximisant celui-ci en détournant une technique classique de réduction de termes interférentiels dans les représentations temps-fréquence. On illustre l'efficacité de cette approche à l'aide d'expérimentations.", \n}\n
\n
\n\n\n
\n This article addresses the optimization of time-frequency distributions for solving signal classification problems. We focus in particular on the distribution with a radially Gaussian parameterization function, which is tuned by maximizing the kernel-target alignment. Originally developed for selecting reproducing kernels in machine learning, this criterion has the advantage of requiring no training cycle. We show that the radially Gaussian parameterization function maximizing it can be obtained by repurposing a classical technique for reducing interference terms in time-frequency representations. The effectiveness of this approach is illustrated through experiments.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Distributed prediction of time series data with kernels and adaptive filtering techniques in sensor networks.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; Bermudez, J. C. M.; and Snoussi, H.\n\n\n \n\n\n\n In Proc. 42nd Annual Asilomar Conference on Signals, Systems and Computers (ASILOMAR), pages 246-250, Pacific Grove, CA, USA, October 2008. \n invited paper\n\n\n\n
\n\n\n\n \n \n \"Distributed link\n  \n \n \n \"Distributed paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon08.asilomar,\n   author =  "Paul Honeine and Cédric Richard and José C. M. Bermudez and Hichem Snoussi",\n   title =  "Distributed prediction of time series data with kernels and adaptive filtering techniques in sensor networks",\n   booktitle =  "Proc. 42nd Annual Asilomar Conference on Signals, Systems and Computers (ASILOMAR)",\n   address =  "Pacific Grove, CA, USA",\n   year  =  "2008",\n   month =  oct,\n   note  =  "invited paper",\n   pages={246-250}, \n   doi={10.1109/ACSSC.2008.5074401}, \n   ISSN={1058-6393},\n   url_link= "https://ieeexplore.ieee.org/document/5074401",\n   url_paper   =  "http://honeine.fr/paul/publi/08.asilomar.pdf",\n   acronym =  "Asilomar",\n   abstract={Wireless sensor networks are becoming versatile tools for learning a physical phenomenon, monitoring its variations and predicting its evolution. They rely on low-cost tiny devices which are deployed in the region under scrutiny and collaborate with each other. Limited computation and communication resources require special care in designing distributed prediction algorithms for sensor networks. In this communication, we propose a nonlinear prediction technique that takes advantage of recent developments in kernel machines and adaptive filtering for online nonlinear functional learning. Conventional methods, however, are inappropriate for large-scale sensor networks, as the order of the resulting model grows with the number of deployed sensors. To circumvent these drawbacks, we consider a distributed control of the model order. The model parameters are transmitted from sensor to sensor and updated by each sensor based on its measurement information. The model order is incremented whenever this increment is relevant compared to a fixed-order model. The proposed approach is naturally adapted for predicting a time-varying phenomenon, as model order increases are governed by the novelty of the new observation at each sensor node. 
We illustrate the applicability of the proposed technique by simulations establishing the temperature map in a region heated by sources.}, \n   keywords={machine learning, sparsity, adaptive filtering, wireless sensor networks, time series, distributed prediction, nonlinear prediction, kernel machines, online nonlinear functional learning, large-scale sensor networks, fixed-order model, time-varying phenomenon, monitoring, prediction algorithms}, \n}\n
\n
\n\n\n
\n Wireless sensor networks are becoming versatile tools for learning a physical phenomenon, monitoring its variations and predicting its evolution. They rely on low-cost tiny devices which are deployed in the region under scrutiny and collaborate with each other. Limited computation and communication resources require special care in designing distributed prediction algorithms for sensor networks. In this communication, we propose a nonlinear prediction technique that takes advantage of recent developments in kernel machines and adaptive filtering for online nonlinear functional learning. Conventional methods, however, are inappropriate for large-scale sensor networks, as the order of the resulting model grows with the number of deployed sensors. To circumvent these drawbacks, we consider a distributed control of the model order. The model parameters are transmitted from sensor to sensor and updated by each sensor based on its measurement information. The model order is incremented whenever this increment is relevant compared to a fixed-order model. The proposed approach is naturally adapted for predicting a time-varying phenomenon, as model order increases are governed by the novelty of the new observation at each sensor node. We illustrate the applicability of the proposed technique by simulations establishing the temperature map in a region heated by sources.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Distributed localization in wireless sensor networks as a pre-image problem in a reproducing kernel Hilbert space.\n \n \n \n \n\n\n \n Essoloh, M.; Richard, C.; Snoussi, H.; and Honeine, P.\n\n\n \n\n\n\n In Proc. 16th European Conference on Signal Processing (EUSIPCO), pages 1-5, Lausanne, Switzerland, August 2008. \n \n\n\n\n
\n\n\n\n \n \n \"Distributed link\n  \n \n \n \"Distributed paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Ess08.eusipco,\n   author =  "Mehdi Essoloh and Cédric Richard and Hichem Snoussi and Paul Honeine",\n   title =  "Distributed localization in wireless sensor networks as a pre-image problem in a reproducing kernel Hilbert space",\n   booktitle =  "Proc. 16th European Conference on Signal Processing (EUSIPCO)",\n   address =  "Lausanne, Switzerland",\n   year  =  "2008",\n   month =  aug,\n   pages={1-5}, \n   ISSN={2219-5491},\n   url_link= "https://ieeexplore.ieee.org/document/7080511",\n   url_paper   =  "http://honeine.fr/paul/publi/08.eusipco.pdf",\n   acronym =  "EUSIPCO",\n   abstract={In this paper, we introduce a distributed strategy for localization in a wireless sensor network composed of limited-range sensors. The proposed distributed algorithm provides sensor position estimates from local similarity measurements. Incremental kernel principal component analysis techniques are used to build the nonlinear manifold linking the anchor nodes. Non-anchor node positions are estimated by the pre-image of their nonlinear projection onto this manifold. This nonlinear strategy provides high accuracy when the data of interest are highly corrupted by noise and when the sensors are unable to estimate their Euclidean inter-distances.}, \n   keywords={machine learning, sparsity, adaptive filtering, wireless sensor networks, estimation theory, Hilbert spaces, principal component analysis, sensor placement, distributed localization, pre-image problem, limited-range sensors, local similarity measurements, incremental kernel principal component analysis, anchor nodes, Euclidean inter-distances}, \n}\n
\n
\n\n\n
\n In this paper, we introduce a distributed strategy for localization in a wireless sensor network composed of limited-range sensors. The proposed distributed algorithm provides sensor position estimates from local similarity measurements. Incremental kernel principal component analysis techniques are used to build the nonlinear manifold linking the anchor nodes. Non-anchor node positions are estimated by the pre-image of their nonlinear projection onto this manifold. This nonlinear strategy provides high accuracy when the data of interest are highly corrupted by noise and when the sensors are unable to estimate their Euclidean inter-distances.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Distributed regression in sensor networks with a reduced-order kernel model.\n \n \n \n \n\n\n \n Honeine, P.; Essoloh, M.; Richard, C.; and Snoussi, H.\n\n\n \n\n\n\n In Proc. 51st IEEE GLOBECOM Global Communications Conference, pages 1-5, New Orleans, LA, USA, 2008. \n \n\n\n\n
\n\n\n\n \n \n \"Distributed link\n  \n \n \n \"Distributed paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon08.globecom,\n   author =  "Paul Honeine and Mehdi Essoloh and Cédric Richard and Hichem Snoussi",\n   title =  "Distributed regression in sensor networks with a reduced-order kernel model",\n   booktitle =  "Proc. 51st IEEE GLOBECOM Global Communications Conference",\n   address =  "New Orleans, LA, USA",\n   year  =  "2008",\n   pages={1-5}, \n   doi={10.1109/GLOCOM.2008.ECP.29}, \n   ISSN={1930-529X},\n   url_link= "https://ieeexplore.ieee.org/document/4697804",\n   url_paper   =  "http://honeine.fr/paul/publi/08.globcom.pdf",\n   acronym =  "Globecom",\n   abstract={Over the past few years, wireless sensor networks have received tremendous attention for monitoring physical phenomena, such as the temperature field in a given region. Applying conventional kernel regression methods for functional learning, such as support vector machines, is inappropriate for sensor networks, since the order of the resulting model and its computational complexity scale badly with the number of available sensors, which tends to be large. In order to circumvent this drawback, we propose in this paper a reduced-order model approach. To this end, we take advantage of recent developments in the sparse representation literature, and show the natural link between reducing the model order and the topology of the deployed sensors. To learn this model, we derive a gradient descent scheme and show its efficiency for wireless sensor networks. 
We illustrate the proposed approach through simulations involving the estimation of a spatial temperature distribution.}, \n   keywords={machine learning, sparsity, adaptive filtering, wireless sensor networks, gradient methods, reduced-order systems, regression analysis, distributed regression, reduced-order kernel model, kernel regression methods, gradient descent, computational complexity}, \n}\n
\n
\n\n\n
\n Over the past few years, wireless sensor networks have received tremendous attention for monitoring physical phenomena, such as the temperature field in a given region. Applying conventional kernel regression methods for functional learning, such as support vector machines, is inappropriate for sensor networks, since the order of the resulting model and its computational complexity scale badly with the number of available sensors, which tends to be large. In order to circumvent this drawback, we propose in this paper a reduced-order model approach. To this end, we take advantage of recent developments in the sparse representation literature, and show the natural link between reducing the model order and the topology of the deployed sensors. To learn this model, we derive a gradient descent scheme and show its efficiency for wireless sensor networks. We illustrate the proposed approach through simulations involving the estimation of a spatial temperature distribution.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Localization in sensor networks - A matrix regression approach.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; Essoloh, M.; and Snoussi, H.\n\n\n \n\n\n\n In Proc. 5th IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM), pages 284-287, Darmstadt, Germany, July 2008. \n \n\n\n\n
\n\n\n\n \n \n \"Localization link\n  \n \n \n \"Localization paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon08.sam,\n   author =  "Paul Honeine and Cédric Richard and Mehdi Essoloh and Hichem Snoussi",\n   title =  "Localization in sensor networks - A matrix regression approach",\n   booktitle =  "Proc. 5th IEEE Sensor Array and Multichannel Signal Processing Workshop (SAM)",\n   address =  "Darmstadt, Germany",\n   year  =  "2008",\n   month =  jul,\n   pages={284-287}, \n   doi={10.1109/SAM.2008.4606873},\n   url_link= "https://ieeexplore.ieee.org/document/4606873",\n   url_paper   =  "http://honeine.fr/paul/publi/08.sam.pdf",\n   acronym =  "SAM",\n   abstract={In this paper, we propose a new approach to sensor localization problems, based on recent developments in machine learning. The main idea behind it is to consider a matrix regression method between the ranging matrix and the matrix of inner products between positions of sensors, in order to complete the latter. Once we have learnt this regression from information between sensors of known positions (beacons), we apply it to sensors of unknown positions. Retrieving the estimated positions of the latter can be done by solving a linear system. We propose a distributed algorithm, where each sensor positions itself with information available from its nearby beacons. The proposed method is validated experimentally.}, \n   keywords={machine learning, wireless sensor networks, matrix algebra, regression analysis, wireless sensor networks, sensor networks, matrix regression approach, sensor localization problems, machine learning, ranging matrix, linear system, distributed algorithm, Distance measurement, Kernel, Optimization, Distributed algorithms, Wireless sensor networks, Network topology, Topology}, \n   ISSN={1551-2282}, \n}\n
\n
\n\n\n
\n In this paper, we propose a new approach to sensor localization problems, based on recent developments in machine learning. The main idea behind it is to consider a matrix regression method between the ranging matrix and the matrix of inner products between positions of sensors, in order to complete the latter. Once we have learnt this regression from information between sensors of known positions (beacons), we apply it to sensors of unknown positions. Retrieving the estimated positions of the latter can be done by solving a linear system. We propose a distributed algorithm, where each sensor positions itself with information available from its nearby beacons. The proposed method is validated experimentally.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Distributed learning in wireless sensor networks.\n \n \n \n\n\n \n Richard, C.; Honeine, P.; Snoussi, H.; Essoloh, M.; and Bermudez, J. C. M.\n\n\n \n\n\n\n In 5th Workshop on Sensor Networks (CNRS RECAP Sensor and Self-Organized Networks), 13 - 14 November 2008. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{08.RECAP,\n   author =  "Cédric Richard and Paul Honeine and Hichem Snoussi and Mehdi Essoloh and José C. M. Bermudez",\n   title =  "Distributed learning in wireless sensor networks",\n   booktitle =  "5th Workshop on Sensor Networks (CNRS RECAP Sensor and Self-Organized Networks)",\n   year  =  "2008",\n   month =  "13 - 14~" # nov,\n   keywords  =  "machine learning, adaptive filtering, wireless sensor networks",\n}\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Sur l'usage de critères de représentation parcimonieuse pour la RdF par méthodes à noyau.\n \n \n \n\n\n \n Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Journée représentations parcimonieuses, journées thématiques au GdR ISIS, 17 April 2008. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{08.GdR,\n   author =  "Paul Honeine and Cédric Richard",\n   title =  "Sur l'usage de critères de représentation parcimonieuse pour la RdF par méthodes à noyau",\n   booktitle =  "Journée représentations parcimonieuses, journées thématiques au GdR ISIS",\n   year  =  "2008",\n   month =  "17~" # apr,\n   keywords  =  "machine learning, sparsity, adaptive filtering",\n}\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2007\n \n \n (7)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Time-Frequency Learning Machines.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; and Flandrin, P.\n\n\n \n\n\n\n IEEE Transactions on Signal Processing, 55(7): 3930 - 3936. July 2007.\n \n\n\n\n
\n\n\n\n \n \n \"Time-Frequency link\n  \n \n \n \"Time-Frequency paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@ARTICLE{Hon07.tsp,\n   author =  "Paul Honeine and Cédric Richard and Patrick Flandrin",\n   title =  "Time-Frequency Learning Machines",\n   journal =  "IEEE Transactions on Signal Processing",\n   year  =  "2007",\n   volume =  "55",\n   pages =  "3930 - 3936",\n   number={7},\n   month =  jul,\n   url_link= "https://ieeexplore.ieee.org/document/4244686",\n   doi = "10.1109/TSP.2007.894252",\n   url_paper  =  "http://www.honeine.fr/paul/publi/07.tsp.ph_cr_pf.pdf",\n   keywords =  "Time-frequency analysis, Support vector machines, machine learning, non-stationarity, machine learning, learning (artificial intelligence), signal processing, support vector machines, time-frequency analysis, time-frequency learning machines, kernel learning machines, time-frequency analysis, time-frequency domain, nonstationary signal analysis, pattern recognition, statistical learning theory, Time frequency analysis, Machine learning, Kernel, Signal processing algorithms, Pattern recognition, Support vector machines, Signal analysis, Statistical learning, Machine learning algorithms, Computational efficiency, Kernel machines, learning theory, support vector machines, time-frequency analysis",\nabstract={Over the last decade, the theory of reproducing kernels has made a major breakthrough in the field of pattern recognition. It has led to new algorithms, with improved performance and lower computational cost, for nonlinear analysis in high dimensional feature spaces. Our paper is a further contribution which extends the framework of the so-called kernel learning machines to time-frequency analysis, showing that some specific reproducing kernels allow these algorithms to operate in the time-frequency domain. This link offers new perspectives in the field of non-stationary signal analysis, which can benefit from the developments of pattern recognition and statistical learning theory.},\n}\n\n\n%: Conferences\n\n\n
\n
\n\n\n
\n Over the last decade, the theory of reproducing kernels has made a major breakthrough in the field of pattern recognition. It has led to new algorithms, with improved performance and lower computational cost, for nonlinear analysis in high dimensional feature spaces. Our paper is a further contribution which extends the framework of the so-called kernel learning machines to time-frequency analysis, showing that some specific reproducing kernels allow these algorithms to operate in the time-frequency domain. This link offers new perspectives in the field of non-stationary signal analysis, which can benefit from the developments of pattern recognition and statistical learning theory.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modélisation parcimonieuse non linéaire en ligne par une méthode à noyau reproduisant et un critère de cohérence.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; and Bermudez, J. C. M.\n\n\n \n\n\n\n In Actes du XXI-ème Colloque GRETSI sur le Traitement du Signal et des Images, Troyes, France, September 2007. \n \n\n\n\n
\n\n\n\n \n \n \"Modélisation paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 2 downloads\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon07.gretsi.b,\n   author =  "Paul Honeine and Cédric Richard and José C. M. Bermudez",\n   title =  "Modélisation parcimonieuse non linéaire en ligne par une méthode à noyau reproduisant et un critère de cohérence",\n   booktitle =  "Actes du XXI-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Troyes, France",\n   year  =  "2007",\n   month =  sep,\n   keywords  =  "machine learning, sparsity, adaptive filtering",\n   acronym =  "GRETSI'07",\n   url_paper   =  "http://honeine.fr/paul/publi/07.gretsi_b.pdf",\n   abstract = "In this article, we consider non-linear and non-stationary system identification with kernel-based methods. Such techniques require model order control, both for sparse results and online applications. For this purpose, we consider the coherence criterion, initially introduced by the sparse decomposition community. This parameter gives new insights into the model properties, with a linear computational complexity with respect to its order, whereas that of classical techniques is quadratic. We provide a recursive-least-squares algorithm, and present some results on synthetic and real time series.",\n   x-abstract_fr = "Cet article traite du problème d'identification de systèmes non linéaires et non stationnaires par des méthodes à noyau reproduisant. Les approches de ce type nécessitent un contrôle en ligne de l'ordre du modèle considéré. Notre approche exploite le critère de cohérence, issu des techniques de décomposition parcimonieuse. Elle permet le contrôle de l'ordre du modèle à noyau reproduisant avec un coût calculatoire linéaire par rapport à celui-ci, contrairement aux techniques existantes qui sont à complexité quadratique. On illustre cette approche par un algorithme de moindres carrés récursif, avec des simulations sur des modèles synthétiques et réels.",\n}\n
\n
\n\n\n
\n In this article, we consider non-linear and non-stationary system identification with kernel-based methods. Such techniques require model order control, both for sparse results and online applications. For this purpose, we consider the coherence criterion, initially introduced by the sparse decomposition community. This parameter gives new insights into the model properties, with a linear computational complexity with respect to its order, whereas that of classical techniques is quadratic. We provide a recursive-least-squares algorithm, and present some results on synthetic and real time series.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Distribution temps-fréquence à noyau radialement Gaussien : optimisation pour la classification par le critère d'alignement noyau-cible.\n \n \n \n \n\n\n \n Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Actes du XXI-ème Colloque GRETSI sur le Traitement du Signal et des Images, Troyes, France, September 2007. \n \n\n\n\n
\n\n\n\n \n \n \"Distribution paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon07.gretsi.a,\n   author =  "Paul Honeine and Cédric Richard",\n   title =  "Distribution temps-fréquence à noyau radialement Gaussien : optimisation pour la classification par le critère d'alignement noyau-cible",\n   booktitle =  "Actes du XXI-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Troyes, France",\n   year =  "2007",\n   month =  sep,\n   keywords =  "non-stationarity, machine learning",\n   acronym =  "GRETSI'07",\n   url_paper  =  "http://honeine.fr/paul/publi/07.gretsi_a.pdf",\n   abstract = "In this article, we design optimal time-frequency distributions for classification. Our approach is based on the kernel-target alignment criterion, which has been investigated in the framework of kernel-based machines for selecting optimal reproducing kernels. One of its main interests is that it does not need any computationally intensive training stage or cross-validation process. We take advantage of this criterion to tune the radially Gaussian kernel, and consider a classical optimization technique usually used for reducing interference terms of time-frequency distributions. We illustrate our approach with some experimental results.",\n   x-abstract_fr = "Cet article traite de l'ajustement des paramètres des distributions temps-fréquence pour la résolution d'un problème de classification de signaux. On s'intéresse en particulier à la distribution à noyau radialement Gaussien. On exploite le critère d'alignement noyau-cible, développé pour la sélection du noyau reproduisant dans le cadre des méthodes à noyau. Celui-ci présente l'intérêt de ne nécessiter aucun apprentissage de la statistique de décision. On adapte le critère d'alignement noyau-cible au noyau radialement Gaussien, en détournant une technique classique de réduction de termes interférentiels dans les représentations temps-fréquence. On illustre cette approche par des expérimentations de classification de signaux non-stationnaires.",\n }\n
\n
\n\n\n
\n In this article, we design optimal time-frequency distributions for classification. Our approach is based on the kernel-target alignment criterion, which has been investigated in the framework of kernel-based machines for selecting optimal reproducing kernels. One of its main interests is that it does not need any computationally intensive training stage or cross-validation process. We take advantage of this criterion to tune the radially Gaussian kernel, and consider a classical optimization technique usually used for reducing interference terms of time-frequency distributions. We illustrate our approach with some experimental results.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Signal-dependent time-frequency representations for classification using a radially gaussian kernel and the alignment criterion.\n \n \n \n \n\n\n \n Honeine, P.; and Richard, C.\n\n\n \n\n\n\n In Proc. IEEE workshop on Statistical Signal Processing (SSP), pages 735 - 739, Madison, WI, USA, August 2007. \n \n\n\n\n
\n\n\n\n \n \n \"Signal-dependent link\n  \n \n \n \"Signal-dependent paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon07.ssp,\n   author =  "Paul Honeine and Cédric Richard",\n   title =  "Signal-dependent time-frequency representations for classification using a radially gaussian kernel and the alignment criterion",\n   booktitle =  "Proc. IEEE workshop on Statistical Signal Processing (SSP)",\n   address =  "Madison, WI, USA",\n   year =  "2007",\n   month =  aug,\n   pages={735 - 739},\n   doi={10.1109/SSP.2007.4301356},\n   url_link= "https://ieeexplore.ieee.org/document/4301356",\n   url_paper  =  "http://honeine.fr/paul/publi/07.ssp.pdf",\n   acronym =  "SSP",\n   abstract={In this paper, we propose a method for tuning time-frequency distributions with a radially Gaussian kernel within a classification framework. It is based on a criterion that has recently emerged from the machine learning literature: the kernel-target alignment. Our optimization scheme is very similar to that proposed by Baraniuk and Jones for signal-dependent time-frequency analysis. The relevance of this approach for improving time-frequency classification accuracy is illustrated through examples.}, \n   keywords={Time frequency analysis, Kernel, Signal analysis, Machine learning, Support vector machines, Support vector machine classification, Signal design, Interference, Pattern recognition, Computational efficiency}, \n   ISSN={2373-0803}, \n}\n
\n
\n\n\n
\n In this paper, we propose a method for tuning time-frequency distributions with a radially Gaussian kernel within a classification framework. It is based on a criterion that has recently emerged from the machine learning literature: the kernel-target alignment. Our optimization scheme is very similar to that proposed by Baraniuk and Jones for signal-dependent time-frequency analysis. The relevance of this approach for improving time-frequency classification accuracy is illustrated through examples.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On-line nonlinear sparse approximation of functions.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; and Bermudez, J. C. M.\n\n\n \n\n\n\n In Proc. IEEE International Symposium on Information Theory (ISIT), pages 956 - 960, Nice, France, June 2007. \n \n\n\n\n
\n\n\n\n \n \n \"On-line link\n  \n \n \n \"On-line paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon07.isit,\n   author =  "Paul Honeine and Cédric Richard and José C. M. Bermudez",\n   title =  "On-line nonlinear sparse approximation of functions",\n   booktitle =  "Proc. IEEE International Symposium on Information Theory (ISIT)",\n   address =  "Nice, France",\n   year =  "2007",\n   month =  jun,\n   pages =  "956 - 960",\n   doi={10.1109/ISIT.2007.4557347},\n   url_link= "https://ieeexplore.ieee.org/document/4557347",\n   url_paper  =  "http://honeine.fr/paul/publi/07.isit.pdf",\n   acronym =  "ISIT",\n   abstract={This paper provides new insights into on-line nonlinear sparse approximation of functions based on the coherence criterion. We revisit previous work, and propose tighter bounds on the approximation error based on the coherence criterion. Moreover, we study the connections between the coherence criterion and both the approximate linear dependence criterion and principal component analysis. Finally, we derive a kernel normalized LMS algorithm based on the coherence criterion, which has linear computational complexity in the model order. Initial experimental results are presented on the performance of the algorithm.}, \n   keywords={machine learning, sparsity, adaptive filtering, approximation theory, computational complexity, function approximation, least mean squares methods, online nonlinear sparse approximation, function approximation, coherence criterion, approximate linear dependence criterion, principal component analysis, kernel normalized LMS algorithm, linear computational complexity, model order, least mean squares method, Kernel, Coherence, Least squares approximation, Computational complexity, Filtering algorithms, Training data, Approximation error, Linear approximation, Principal component analysis, Dictionaries}, \n   ISSN={2157-8095}, \n}\n
\n
\n\n\n
\n This paper provides new insights into on-line nonlinear sparse approximation of functions based on the coherence criterion. We revisit previous work, and propose tighter bounds on the approximation error based on the coherence criterion. Moreover, we study the connections between the coherence criterion and both the approximate linear dependence criterion and principal component analysis. Finally, we derive a kernel normalized LMS algorithm based on the coherence criterion, which has linear computational complexity in the model order. Initial experimental results are presented on the performance of the algorithm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Filtrage adaptatif non linéaire par méthode à noyau.\n \n \n \n\n\n \n Richard, C.; and Honeine, P.\n\n\n \n\n\n\n In Journée signal, reconnaissance des formes et machines à noyaux, journées thématiques au GdR ISIS, 8 June 2007. \n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{07.GdR,\n   author =  "Cédric Richard and Paul Honeine",\n   title =  "Filtrage adaptatif non linéaire par méthode à noyau",\n   booktitle =  "Journée signal, reconnaissance des formes et machines à noyaux, journées thématiques au GdR ISIS",\n   year  =  "2007",\n   month =  "8~" # jun,\n   keywords  =  "machine learning, sparsity, adaptive filtering",\n}\n\n% Brevet/Patent\n\n\n
\n
\n\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Méthodes à noyau pour l'analyse et la décision en environnement non-stationnaire.\n \n \n \n \n\n\n \n Honeine, P.\n\n\n \n\n\n\n Ph.D. Thesis, mémoire de thèse de doctorat en Optimisation et Sûreté des Systèmes, Ecole doctorale SSTO - UTT, Troyes, France, December 2007.\n \n\n\n\n
\n\n\n\n \n \n \"Méthodes paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@PHDTHESIS{07.PhD,\n   author =  "Paul Honeine",\n   title =  "Méthodes à noyau pour l'analyse et la décision en environnement non-stationnaire",\n   school =  "mémoire de thèse de doctorat en Optimisation et Sûreté des Systèmes, {Ecole doctorale SSTO - UTT}",\n   address =  "Troyes, France",\n   year  =  "2007",\n   month = dec,\n   url_paper   =  "http://honeine.fr/paul/publi/these_P_HONEINE.pdf",\n   keywords  =  "non-stationarity, machine learning, sparsity, adaptive filtering",\n}\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2006\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Optimal selection of time-frequency representations for signal classification: A kernel-target alignment approach.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; Flandrin, P.; and Pothin, J.\n\n\n \n\n\n\n In Proc. 31st IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toulouse, France, May 2006. \n \n\n\n\n
\n\n\n\n \n \n \"Optimal link\n  \n \n \n \"Optimal paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon06.icassp,\n   author =  "Paul Honeine and Cédric Richard and Patrick Flandrin and Jean-Baptiste Pothin",\n   title =  "Optimal selection of time-frequency representations for signal classification: A kernel-target alignment approach",\n   booktitle =  "Proc. 31st IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)",\n   address =  "Toulouse, France",\n   month =  may,\n   year =  "2006",\n   doi={10.1109/ICASSP.2006.1660694}, \n   ISSN={1520-6149},\n   acronym =  "ICASSP",\n   url_link= "https://ieeexplore.ieee.org/document/1660694",\n   url_paper  =  "http://honeine.fr/paul/publi/06.icassp.pdf",\n   abstract={In this paper, we propose a method for selecting time-frequency distributions appropriate for given learning tasks. It is based on a criterion that has recently emerged from the machine learning literature: the kernel-target alignment. This criterion makes it possible to find the optimal representation for a given classification problem without designing the classifier itself. Some possible applications of our framework are discussed. The first one provides a computationally attractive way of adjusting the free parameters of a distribution to improve classification performance. The second one is related to the selection, from a set of candidates, of the distribution that best facilitates a classification task. 
The last one addresses the problem of optimally combining several distributions.}, \n   keywords={Time-frequency analysis, Kernel machines, Classification, Optimal representation, Support vector machines, non-stationarity, machine learning, learning (artificial intelligence), signal classification, signal representation, time-frequency analysis, time-frequency representations, signal classification, kernel-target alignment approach, machine learning, time-frequency distributions, Time frequency analysis, Pattern classification, Kernel, Hilbert space, Machine learning, Distributed computing, Support vector machines, Support vector machine classification, Appropriate technology, Signal analysis}, \n}\n
\n
\n\n\n
\n In this paper, we propose a method for selecting time-frequency distributions appropriate for given learning tasks. It is based on a criterion that has recently emerged from the machine learning literature: the kernel-target alignment. This criterion makes it possible to find the optimal representation for a given classification problem without designing the classifier itself. Some possible applications of our framework are discussed. The first one provides a computationally attractive way of adjusting the free parameters of a distribution to improve classification performance. The second one is related to the selection, from a set of candidates, of the distribution that best facilitates a classification task. The last one addresses the problem of optimally combining several distributions.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2005\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n \n Reconnaissance des formes par méthodes à noyau dans le domaine temps-fréquence.\n \n \n \n \n\n\n \n Honeine, P.; Richard, C.; and Flandrin, P.\n\n\n \n\n\n\n In Actes du XX-ème Colloque GRETSI sur le Traitement du Signal et des Images, pages 969 - 972, Louvain-la-Neuve, Belgium, 2005. \n \n\n\n\n
\n\n\n\n \n \n \"Reconnaissance paper\n  \n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n\n
\n
@INPROCEEDINGS{Hon05.gretsi,\n   author =  "Paul Honeine and Cédric Richard and Patrick Flandrin",\n   title =  "Reconnaissance des formes par méthodes à noyau dans le domaine temps-fréquence",\n   booktitle =  "Actes du XX-ème Colloque GRETSI sur le Traitement du Signal et des Images",\n   address =  "Louvain-la-Neuve, Belgium",\n   year =  "2005",\n   pages =  "969 - 972",\n   keywords =  "Time-frequency analysis, Kernel machines, Support vector machines, non-stationarity, machine learning",\n   acronym =  "GRETSI'05",\n   url_paper  =  "http://honeine.fr/paul/publi/05.gretsi.pdf",\n   abstract = "During the last decade, many important advances have been made in the field of pattern recognition with the theory of reproducing kernels. This unified view has led to new algorithms with improved performance and lower computational complexity. In this paper, we show that some specific reproducing kernels allow these algorithms to operate on time-frequency representations. This link offers new perspectives in the field of non-stationary signal analysis since it provides access to the most recent methodological and theoretical developments of pattern recognition and statistical learning theory.",\n   x-abstract_fr = "Au cours de la dernière décennie, de nombreuses méthodes pour l'analyse et la classification de données fondées sur la théorie des noyaux reproduisants ont été proposées. Elles constituent une source de progrès, tant au niveau de la complexité algorithmique que des performances atteintes. On montre dans cet article qu'un choix approprié de noyau reproduisant permet à ces méthodes d'opérer sur des représentations temps-fréquence. Ce lien ouvre de multiples perspectives au domaine de l'analyse des signaux non-stationnaires puisqu'il lui permet d'accéder aux plus récentes avancées méthodologiques et théoriques en matière de reconnaissance des formes et de théorie de l'apprentissage.",\n}\n\n\n% Workshops\n\n
\n
\n\n\n
\n During the last decade, many important advances have been made in the field of pattern recognition with the theory of reproducing kernels. This unified view has led to new algorithms with improved performance and lower computational complexity. In this paper, we show that some specific reproducing kernels allow these algorithms to operate on time-frequency representations. This link offers new perspectives in the field of non-stationary signal analysis since it provides access to the most recent methodological and theoretical developments of pattern recognition and statistical learning theory.\n
\n\n\n
\n\n\n\n\n\n
\n
\n\n
\n
\n  \n 2003\n \n \n (1)\n \n \n
\n
\n \n \n
\n \n\n \n \n \n \n \n Théorie de l'information pour l'analyse du typage sonore de véhicules.\n \n \n \n\n\n \n Honeine, P.\n\n\n \n\n\n\n Master's thesis, mémoire de DEA, UTT (LM2S) – PSA Peugeot Citroen (centre DRIA/SARA/EMSA/PEFH), Troyes, France, 2003.\n \n\n\n\n
\n\n\n\n \n\n \n\n \n link\n  \n \n\n bibtex\n \n\n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n\n\n\n
\n
@MASTERsTHESIS{03.MSc,\n   author =  "Paul Honeine",\n   title =  "Théorie de l'information pour l'analyse du typage sonore de véhicules",\n   school =  "mémoire de DEA, UTT (LM2S) -- PSA Peugeot Citroen (centre DRIA/SARA/EMSA/PEFH)",\n   address =  "Troyes, France",\n   year  =  "2003",\n}\n\n\n\n\n
\n
\n\n\n\n
\n\n\n\n\n\n
\n
\n\n\n\n\n
\n\n\n \n\n \n \n \n \n\n
\n"}; document.write(bibbase_data.data);