generated by bibbase.org
Excellent! Next you can create a new website with this list, or embed it in an existing web page by copying & pasting any of the following snippets.

JavaScript (easiest):

<script src="https://bibbase.org/show?bib=https%3A%2F%2Fraw.githubusercontent.com%2FRoznn%2FEUSIPCO%2Fmain%2Feusipco2018url.bib&jsonp=1"></script>

PHP:

<?php
$contents = file_get_contents("https://bibbase.org/show?bib=https%3A%2F%2Fraw.githubusercontent.com%2FRoznn%2FEUSIPCO%2Fmain%2Feusipco2018url.bib&jsonp=1");
print_r($contents);
?>

iFrame (not recommended):

<iframe src="https://bibbase.org/show?bib=https%3A%2F%2Fraw.githubusercontent.com%2FRoznn%2FEUSIPCO%2Fmain%2Feusipco2018url.bib&jsonp=1"></iframe>

For more details see the documentation.
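The PHP snippet above simply fetches the pre-rendered HTML server-side; the same request can be issued from any language. A minimal Python sketch using only the standard library (the helper names here are ours, not part of any BibBase API; the URL is the one from the snippets above):

```python
from urllib.parse import quote
from urllib.request import urlopen

# The raw BibTeX file that BibBase renders (same file as in the snippets above).
BIB_URL = "https://raw.githubusercontent.com/Roznn/EUSIPCO/main/eusipco2018url.bib"

def bibbase_embed_url(bib_url: str) -> str:
    """Build the BibBase render URL used by the JavaScript/PHP/iFrame snippets."""
    # safe="" percent-encodes ':' and '/' as well, matching the snippet URLs.
    return "https://bibbase.org/show?bib=" + quote(bib_url, safe="") + "&jsonp=1"

def fetch_rendered_html(bib_url: str) -> str:
    """Server-side equivalent of the PHP snippet: fetch the pre-rendered HTML."""
    with urlopen(bibbase_embed_url(bib_url)) as resp:
        return resp.read().decode("utf-8")

print(bibbase_embed_url(BIB_URL))
```

Embedding the returned HTML in a page is then the same copy-and-paste step as with the PHP version.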

2018 (540)

Improved Discrete Grey Wolf Optimizer.
Martin, B.; Marot, J.; and Bourennane, S.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 494-498, Sep. 2018.

@InProceedings{8552925,
  author = {B. Martin and J. Marot and S. Bourennane},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Improved Discrete Grey Wolf Optimizer},
  year = {2018},
  pages = {494-498},
  abstract = {Grey wolf optimizer (GWO) is a bioinspired iterative optimization algorithm which simulates the hunting process of a wolf pack guided by three leaders. In this paper, a novel discrete GWO is proposed: a random leader selection is performed, and the probability for the main leader to be selected increases at the detriment of the other leaders across iterations. The proposed discrete GWO is compared to another discrete version of GWO, using standard test functions.},
  keywords = {grey systems;iterative methods;optimisation;probability;discrete GWO;standard test functions;probability;random leader selection;bioinspired iterative optimization algorithm;discrete grey wolf optimizer;Indexes;Signal processing algorithms;Signal processing;Europe;Optimization methods;Performance evaluation;bio-inspired optimization;discrete space;grey wolf},
  doi = {10.23919/EUSIPCO.2018.8552925},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570429913.pdf},
}

Abstract: Grey wolf optimizer (GWO) is a bioinspired iterative optimization algorithm which simulates the hunting process of a wolf pack guided by three leaders. In this paper, a novel discrete GWO is proposed: a random leader selection is performed, and the probability for the main leader to be selected increases at the detriment of the other leaders across iterations. The proposed discrete GWO is compared to another discrete version of GWO, using standard test functions.

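The leader-selection rule described in the abstract above can be sketched in a few lines. The linear schedule and the even split of the remaining probability between the two secondary leaders are illustrative assumptions, not the paper's exact update:

```python
import random

def leader_weights(t: int, t_max: int, p_start: float = 1.0 / 3.0, p_end: float = 1.0):
    """Selection probabilities for the three leaders [alpha, beta, delta].

    Assumed schedule: the main leader's probability grows linearly from
    p_start to p_end over the run, while beta and delta share the remainder.
    """
    p_alpha = p_start + (p_end - p_start) * t / t_max
    rest = (1.0 - p_alpha) / 2.0
    return [p_alpha, rest, rest]

def pick_leader(leaders, t, t_max, rng=random):
    """Randomly pick one of the three leaders according to the schedule."""
    return rng.choices(leaders, weights=leader_weights(t, t_max), k=1)[0]

# Early on, all three leaders are (near-)equally likely; by the last
# iteration only the main leader can be selected.
print(leader_weights(0, 100))
print(leader_weights(100, 100))
```

Each wolf would then update its position toward the sampled leader, as in the standard GWO iteration.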
Numerical stability of spline-based Gabor-like systems.
Onchis, D. M.; Zappalà, S.; Real, P.; and Istin, C.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1337-1341, Sep. 2018.

@InProceedings{8552927,
  author = {D. M. Onchis and S. {Zappalà} and P. Real and C. Istin},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Numerical stability of spline-based Gabor-like systems},
  year = {2018},
  pages = {1337-1341},
  abstract = {The paper provides a theorem for the characterization of numerical stability of spline-type systems. These systems are generated through shifted copies of a given atom over a time lattice. Also, we reformulate the well known Gabor systems via modulated spline-type systems and we apply the corresponding numerical stability to these systems. The numerical stability is tested for consistency against deformations.},
  keywords = {Gabor filters;numerical stability;splines (mathematics);numerical stability;spline-based Gabor-like system;time lattice;shifted copies;modulated spline-type systems;Splines (mathematics);Numerical stability;Lattices;Strain;Time-frequency analysis;Signal processing;Fourier transforms;spline-type spaces;numerical stability;Gabor systems},
  doi = {10.23919/EUSIPCO.2018.8552927},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438106.pdf},
}

Abstract: The paper provides a theorem for the characterization of numerical stability of spline-type systems. These systems are generated through shifted copies of a given atom over a time lattice. Also, we reformulate the well known Gabor systems via modulated spline-type systems and we apply the corresponding numerical stability to these systems. The numerical stability is tested for consistency against deformations.

Exterior and Interior Sound Field Separation Using Convex Optimization: Comparison of Signal Models.
Takida, Y.; Koyama, S.; and Saruwataril, H.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2549-2553, Sep. 2018.

@InProceedings{8552928,
  author = {Y. Takida and S. Koyama and H. Saruwataril},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Exterior and Interior Sound Field Separation Using Convex Optimization: Comparison of Signal Models},
  year = {2018},
  pages = {2549-2553},
  abstract = {An exterior (direct source) and interior (reverberant) sound field separation method using a convex optimization algorithm is proposed. Extracting the exterior sound field from mixed observations using multiple microphones can be an effective preprocessing approach to analyzing the sound field inside a region including sources in a reverberant environment. We formulate signal models of the exterior and interior sound fields by exploiting the signal characteristics of each sound field. The interior sound field is sparsely represented using overcomplete plane-wave functions. Two models using harmonic functions and a low-rank structure are proposed for the exterior sound field. The separation algorithms for each model are derived by the alternating direction method of multipliers. Numerical simulation results indicate that higher separation accuracy than that for existing methods can be achieved by the proposed method with a small number of microphones and a flexible microphone arrangement.},
  keywords = {acoustic field;acoustic signal processing;blind source separation;convex programming;numerical analysis;optimisation;reverberation;interior sound field;signal models;convex optimization algorithm;exterior sound field;exterior sound fields;interior sound fields;separation algorithms;Microphones;Harmonic analysis;Signal processing algorithms;Convex functions;Computational modeling;Europe},
  doi = {10.23919/EUSIPCO.2018.8552928},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436950.pdf},
}

Abstract: An exterior (direct source) and interior (reverberant) sound field separation method using a convex optimization algorithm is proposed. Extracting the exterior sound field from mixed observations using multiple microphones can be an effective preprocessing approach to analyzing the sound field inside a region including sources in a reverberant environment. We formulate signal models of the exterior and interior sound fields by exploiting the signal characteristics of each sound field. The interior sound field is sparsely represented using overcomplete plane-wave functions. Two models using harmonic functions and a low-rank structure are proposed for the exterior sound field. The separation algorithms for each model are derived by the alternating direction method of multipliers. Numerical simulation results indicate that higher separation accuracy than that for existing methods can be achieved by the proposed method with a small number of microphones and a flexible microphone arrangement.

Assessing Tissue Heterogeneity by non-Gaussian Measures in a Permeable Environment.
Brusini, L.; Menegaz, G.; and Nilsson, M.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1147-1151, Sep. 2018.

@InProceedings{8552929,
  author = {L. Brusini and G. Menegaz and M. Nilsson},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Assessing Tissue Heterogeneity by non-Gaussian Measures in a Permeable Environment},
  year = {2018},
  pages = {1147-1151},
  abstract = {In diffusion MRI, the deviation of the Ensemble Average Propagator (EAP) from Gaussianity conveys information about the microstructural heterogeneity within an imaging voxel. Different measures have been proposed for assessing this heterogeneity. This paper assesses the performance of the Diffusional Kurtosis Imaging (DKI) and Simple Harmonics Oscillator Reconstruction and Estimation (SHORE) approaches using Monte Carlo simulations of water diffusion within synthetic axons with a permeable myelin sheath. The aim was also to understand the impact of myelin features such as its number of wrappings and relaxation (T2) rate on MR-observable parameters. To this end, a substrate consisting of parallel cylinders coated by a multi-layer sheet was considered, and simulations were used to generate the synthetic diffusion-weighted signal. Results show that myelin features affects the parameters quantified by both DKI and SHORE. A strong agreement was found between DKI and SHORE parameters, highlighting the consistency of the methods in characterising the diffusion-weighted signal.},
  keywords = {biodiffusion;biological tissues;biomedical MRI;medical image processing;Monte Carlo methods;neurophysiology;tissue heterogeneity;permeable environment;diffusion MRI;EAP;microstructural heterogeneity;imaging voxel;Diffusional Kurtosis Imaging;DKI;Monte Carlo simulations;water diffusion;synthetic axons;permeable myelin sheath;myelin features;MR-observable parameters;parallel cylinders;multilayer sheet;synthetic diffusion-weighted signal;SHORE parameters;nonGaussian measures;ensemble average propagator;simple harmonics oscillator reconstruction and estimation approach;Axons;Correlation;Market research;Europe;Signal processing;Imaging;Indexes;weighting;SHORE;DKI;Non Gaussianity;Kurtosis},
  doi = {10.23919/EUSIPCO.2018.8552929},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437912.pdf},
}

Abstract: In diffusion MRI, the deviation of the Ensemble Average Propagator (EAP) from Gaussianity conveys information about the microstructural heterogeneity within an imaging voxel. Different measures have been proposed for assessing this heterogeneity. This paper assesses the performance of the Diffusional Kurtosis Imaging (DKI) and Simple Harmonics Oscillator Reconstruction and Estimation (SHORE) approaches using Monte Carlo simulations of water diffusion within synthetic axons with a permeable myelin sheath. The aim was also to understand the impact of myelin features such as its number of wrappings and relaxation (T2) rate on MR-observable parameters. To this end, a substrate consisting of parallel cylinders coated by a multi-layer sheet was considered, and simulations were used to generate the synthetic diffusion-weighted signal. Results show that myelin features affects the parameters quantified by both DKI and SHORE. A strong agreement was found between DKI and SHORE parameters, highlighting the consistency of the methods in characterising the diffusion-weighted signal.

A Fast Endmember Estimation Algorithm from Compressive Measurements.
Vargas, E.; Pinilla, S.; and Arguello, H.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2210-2214, Sep. 2018.

@InProceedings{8552930,
  author = {E. Vargas and S. Pinilla and H. Arguello},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {A Fast Endmember Estimation Algorithm from Compressive Measurements},
  year = {2018},
  pages = {2210-2214},
  abstract = {This paper deals with estimating the endmembers in a linear mixing model (LMM) of a hyperspectral image, from measurements acquired with compressive spectral imaging (CSI) devices. For this problem, a novel approach is developed exploiting the Rayleigh-Ritz (RR) theory to approximate the signal subspace where the data lie and the fact that the endmembers are located at the vertices of a simplex set under a LMM. The proposed approach first estimates a subset of eigenvectors to approximate the signal subspace using the RR theory, and then vertex component analysis is applied to find the endmembers in the approximated subspace. Simulations results conducted on realistic compressive hyperspectral images show that the proposed algorithm can provide endmembers results very close to those obtained when using uncompressed images, with the advantage of using a reduced number of measurements. In particular, the numerical tests show that the proposed approach is able to estimate the endmembers using 50% of the full data.},
  keywords = {data compression;feature extraction;hyperspectral imaging;spectral analysis;fast endmember estimation algorithm;compressive measurements;linear mixing model;LMM;compressive spectral imaging devices;Rayleigh-Ritz theory;RR theory;vertex component analysis;approximated subspace;realistic compressive hyperspectral images;uncompressed images;signal subspace;Image coding;Covariance matrices;Hyperspectral imaging;Signal processing algorithms;Eigenvalues and eigenfunctions;Imaging},
  doi = {10.23919/EUSIPCO.2018.8552930},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437392.pdf},
}

Abstract: This paper deals with estimating the endmembers in a linear mixing model (LMM) of a hyperspectral image, from measurements acquired with compressive spectral imaging (CSI) devices. For this problem, a novel approach is developed exploiting the Rayleigh-Ritz (RR) theory to approximate the signal subspace where the data lie and the fact that the endmembers are located at the vertices of a simplex set under a LMM. The proposed approach first estimates a subset of eigenvectors to approximate the signal subspace using the RR theory, and then vertex component analysis is applied to find the endmembers in the approximated subspace. Simulations results conducted on realistic compressive hyperspectral images show that the proposed algorithm can provide endmembers results very close to those obtained when using uncompressed images, with the advantage of using a reduced number of measurements. In particular, the numerical tests show that the proposed approach is able to estimate the endmembers using 50% of the full data.

A Hierarchical Ensemble Classifier for Multilabel Segmentation of Fat-Water MR Images.
Fallah, F.; Yang, B.; Walter, S. S.; and Bamberg, F.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 41-45, Sep. 2018.

@InProceedings{8552932,
  author = {F. Fallah and B. Yang and S. S. Walter and F. Bamberg},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {A Hierarchical Ensemble Classifier for Multilabel Segmentation of Fat-Water MR Images},
  year = {2018},
  pages = {41-45},
  abstract = {Automatic segmentation of organs on fat-water magnetic resonance (MR) images not only enables an analysis of their morphological characteristics but also their tissues pathogenesis demonstrated by their fat fraction ratios. So far, only a few methods have been designed based on these images and all proposed segmentation algorithms have only addressed one organ at a time. In this paper, we propose a hierarchical deformation-/registration-free algorithm for multilabel segmentation of fat-water MR images without need to prior localizations or geometry estimations. This method involved a hierarchical random forest classifier and a hierarchical conditional random field (CRF) encoding a multi-resolution image pyramid. This pyramid was formed by extracting multiscale local and contextual features from image patches at different resolutions. The classifier used penalized multivariate linear discriminants and SMOTEBagging to mitigate limited and imbalanced training data. The CRF refined the segmentations with regard to the spatial and hierarchical consistencies of the labels by using layer-specific significant features identified over the trained random forest classifier. Also, we incorporated resolution-specific hyperparameters to handle variable numbers or class mixtures of the image patches over hierarchical structures. This method was trained and evaluated for segmenting 10 thoracic and 5 lumbar VBs and IVDs on 30 training and 30 test volumetric fat-water (2 channel) MR images. Objective evaluations revealed its comparable accuracy to the state-of-the-art while demanding less computational burden.},
  keywords = {biological organs;biomedical MRI;fats;feature extraction;image classification;image coding;image resolution;image segmentation;learning (artificial intelligence);medical image processing;layer-specific significant features;trained random forest classifier;resolution-specific hyperparameters;hierarchical structures;test volumetric fat-water MR images;hierarchical deformation-registration-free algorithm;hierarchical consistencies;spatial consistencies;imbalanced training data;multivariate linear discriminants;image patches;contextual features;multiscale local features;multiresolution image pyramid;CRF;hierarchical conditional random field;hierarchical random forest classifier;geometry estimations;organ;segmentation algorithms;fat fraction ratios;tissues pathogenesis;morphological characteristics;fat-water magnetic resonance images;multilabel segmentation;hierarchical ensemble classifier;Image segmentation;Feature extraction;Fats;Spatial resolution;Training;Signal resolution},
  doi = {10.23919/EUSIPCO.2018.8552932},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437360.pdf},
}

Abstract: Automatic segmentation of organs on fat-water magnetic resonance (MR) images not only enables an analysis of their morphological characteristics but also their tissues pathogenesis demonstrated by their fat fraction ratios. So far, only a few methods have been designed based on these images and all proposed segmentation algorithms have only addressed one organ at a time. In this paper, we propose a hierarchical deformation-/registration-free algorithm for multilabel segmentation of fat-water MR images without need to prior localizations or geometry estimations. This method involved a hierarchical random forest classifier and a hierarchical conditional random field (CRF) encoding a multi-resolution image pyramid. This pyramid was formed by extracting multiscale local and contextual features from image patches at different resolutions. The classifier used penalized multivariate linear discriminants and SMOTEBagging to mitigate limited and imbalanced training data. The CRF refined the segmentations with regard to the spatial and hierarchical consistencies of the labels by using layer-specific significant features identified over the trained random forest classifier. Also, we incorporated resolution-specific hyperparameters to handle variable numbers or class mixtures of the image patches over hierarchical structures. This method was trained and evaluated for segmenting 10 thoracic and 5 lumbar VBs and IVDs on 30 training and 30 test volumetric fat-water (2 channel) MR images. Objective evaluations revealed its comparable accuracy to the state-of-the-art while demanding less computational burden.

Diarization and Separation Based on a Data-Driven Simplex.
Laufer-Goldshtein, B.; Talmon, R.; and Gannot, S.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 842-846, Sep. 2018.

@InProceedings{8552933,
  author = {B. Laufer-Goldshtein and R. Talmon and S. Gannot},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Diarization and Separation Based on a Data-Driven Simplex},
  year = {2018},
  pages = {842-846},
  abstract = {Separation of underdetermined speech mixtures, where the number of speakers is greater than the number of microphones, is a challenging task. Due to the intermittent behaviour of human conversations, typically, the instantaneous number of active speakers does not exceed the number of microphones, namely the mixture is locally (over-)determined. This scenario is addressed in this paper using a dual stage approach: diarization followed by separation. The diarization stage is based on spectral decomposition of the correlation matrix between different time frames. Specifically, the spectral gap reveals the overall number of speakers, and the computed eigenvectors form a simplex of the activity of the speakers across time. In the separation stage, the diarization results are utilized for estimating the mixing acoustic channels, as well as for constructing an unmixing scheme for extracting the individual speakers. The performance is demonstrated in a challenging scenario with six speakers and only four microphones. The proposed method shows perfect recovery of the overall number of speakers, close to perfect diarization accuracy, and high separation capabilities in various reverberation conditions.},
  keywords = {blind source separation;eigenvalues and eigenfunctions;microphones;reverberation;speaker recognition;data-driven simplex;underdetermined speech mixtures;microphones;intermittent behaviour;human conversations;instantaneous number;dual stage approach;diarization stage;spectral decomposition;spectral gap;separation stage;diarization results;diarization accuracy;reverberation conditions;Microphones;Matrix decomposition;Correlation;Acoustics;Eigenvalues and eigenfunctions;Europe;Signal processing;Blind audio source separation (BASS);diarization;relative transfer function (RTF);simplex},
  doi = {10.23919/EUSIPCO.2018.8552933},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436730.pdf},
}

Abstract: Separation of underdetermined speech mixtures, where the number of speakers is greater than the number of microphones, is a challenging task. Due to the intermittent behaviour of human conversations, typically, the instantaneous number of active speakers does not exceed the number of microphones, namely the mixture is locally (over-)determined. This scenario is addressed in this paper using a dual stage approach: diarization followed by separation. The diarization stage is based on spectral decomposition of the correlation matrix between different time frames. Specifically, the spectral gap reveals the overall number of speakers, and the computed eigenvectors form a simplex of the activity of the speakers across time. In the separation stage, the diarization results are utilized for estimating the mixing acoustic channels, as well as for constructing an unmixing scheme for extracting the individual speakers. The performance is demonstrated in a challenging scenario with six speakers and only four microphones. The proposed method shows perfect recovery of the overall number of speakers, close to perfect diarization accuracy, and high separation capabilities in various reverberation conditions.

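The eigenvalue-gap step described in the abstract above (the spectral gap of the frame-correlation matrix revealing the number of speakers) can be illustrated on synthetic data. The feature construction below is a toy stand-in, not the paper's correlation model:

```python
import numpy as np

def estimate_num_speakers(frames: np.ndarray, max_k: int = 10) -> int:
    """Count sources by the largest gap in the eigenvalue spectrum of the
    correlation matrix between time frames (frames: n_frames x n_features)."""
    corr = np.corrcoef(frames)                      # (n_frames, n_frames)
    eig = np.sort(np.linalg.eigvalsh(corr))[::-1]   # eigenvalues, descending
    gaps = eig[:max_k - 1] - eig[1:max_k]           # successive gaps
    return int(np.argmax(gaps)) + 1                 # cut at the largest gap

# Synthetic diarization-like data: 60 frames, one of 3 latent "speakers"
# active per frame (20 frames each), plus a little noise.
rng = np.random.default_rng(0)
sources = rng.standard_normal((3, 200))
frames = np.repeat(np.eye(3), 20, axis=0) * rng.standard_normal((60, 1))
frames = frames @ sources + 0.01 * rng.standard_normal((60, 200))
print(estimate_num_speakers(frames))
```

With one dominant source per frame, the correlation matrix is nearly block rank-3, so three eigenvalues dominate and the gap after the third is by far the largest.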
On the Impact of Tensor Completion in the Classification of Undersampled Hyperspectral Imagery.
Giannopoulos, M.; Tsagkatakis, G.; and Tsakalides, P.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1975-1979, Sep. 2018.

@InProceedings{8552934,
  author = {M. Giannopoulos and G. Tsagkatakis and P. Tsakalides},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {On the Impact of Tensor Completion in the Classification of Undersampled Hyperspectral Imagery},
  year = {2018},
  pages = {1975-1979},
  abstract = {Typical HSI sensors employ scanning along certain dimensions in order to acquire the hyperspectral data cube. Snapshot Spectral Imaging architectures associate a particular spectral band with each pixel, achieving high temporal sampling rates at a lower spatial resolution. In this work, we study the problem of efficient estimation of missing hyperspectral measurements and we evaluate the impact of the reconstruction quality on the subsequent task of classification. We explore two cutting edge techniques for undersampled signal recovery, namely matrix and tensor completion, and we evaluate their performance on hyperspectral data recovery. Furthermore, we quantify the effects of the reconstruction error on state-of-the-art machine learning algorithms via metrics such as classification accuracy and F1-score. The results demonstrate that robust and efficient classification is feasible, even from a substantially reduced number of measurements being available, especially when emerging deep learning approaches are adopted. Moreover, significant gains are obtained when exploring higher order structural information via tensor modelling, as compared to low order matrix-based methods.},
  keywords = {geophysical image processing;hyperspectral imaging;image classification;image reconstruction;image resolution;image sensors;learning (artificial intelligence);matrix algebra;remote sensing;sampling methods;tensors;tensor modelling;low order matrix-based methods;tensor completion;undersampled hyperspectral imagery;hyperspectral data cube;hyperspectral measurements;cutting edge techniques;undersampled signal recovery;hyperspectral data recovery;reconstruction error;robust classification;deep learning approaches;higher order structural information;temporal sampling rates;spatial resolution;quality reconstruction;machine learning algorithms;HSI sensors;snapshot spectral imaging architectures;spectral band;Tensile stress;Training;Hyperspectral imaging;Image reconstruction;Measurement;Optimization},
  doi = {10.23919/EUSIPCO.2018.8552934},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436084.pdf},
}

Abstract: Typical HSI sensors employ scanning along certain dimensions in order to acquire the hyperspectral data cube. Snapshot Spectral Imaging architectures associate a particular spectral band with each pixel, achieving high temporal sampling rates at a lower spatial resolution. In this work, we study the problem of efficient estimation of missing hyperspectral measurements and we evaluate the impact of the reconstruction quality on the subsequent task of classification. We explore two cutting edge techniques for undersampled signal recovery, namely matrix and tensor completion, and we evaluate their performance on hyperspectral data recovery. Furthermore, we quantify the effects of the reconstruction error on state-of-the-art machine learning algorithms via metrics such as classification accuracy and F1-score. The results demonstrate that robust and efficient classification is feasible, even from a substantially reduced number of measurements being available, especially when emerging deep learning approaches are adopted. Moreover, significant gains are obtained when exploring higher order structural information via tensor modelling, as compared to low order matrix-based methods.

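Matrix completion, the low-order baseline the abstract above contrasts with tensor completion, can be sketched as a simple hard-impute loop (an illustrative stand-in for the recovery step, not the paper's algorithm):

```python
import numpy as np

def complete_matrix(M: np.ndarray, mask: np.ndarray, rank: int, n_iter: int = 200) -> np.ndarray:
    """Hard-impute sketch: alternate a truncated-SVD (low-rank) projection
    with re-imposing the observed entries given by the boolean mask."""
    X = np.where(mask, M, 0.0)                      # missing entries start at 0
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :rank] * s[:rank]) @ Vt[:rank]    # project onto rank-r matrices
        X[mask] = M[mask]                           # keep observed entries exact
    return X

# Sanity check on a synthetic rank-2 matrix with ~50% of entries observed.
rng = np.random.default_rng(1)
M_true = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 40))
mask = rng.random((60, 40)) < 0.5
X_hat = complete_matrix(M_true, mask, rank=2)
rel_err = np.linalg.norm(X_hat - M_true) / np.linalg.norm(M_true)
print(rel_err)
```

Tensor methods generalize this idea by enforcing low rank along every mode of the hyperspectral cube rather than on a single unfolding.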
\n \n\n \n \n \n \n \n \n Approximate Recovery of Initial Point-like and Instantaneous Sources from Coarsely Sampled Thermal Fields via Infinite-Dimensional Compressed Sensing.\n \n \n \n \n\n\n \n Flinth, A.; and Hashemi, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1720-1724, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ApproximatePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552939,\n  author = {A. Flinth and A. Hashemi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Approximate Recovery of Initial Point-like and Instantaneous Sources from Coarsely Sampled Thermal Fields via Infinite-Dimensional Compressed Sensing},\n  year = {2018},\n  pages = {1720-1724},\n  abstract = {We propose a method for resolving the positions of the initial source of heat propagation fields. The method relies on the recent theory of compressed sensing off the grid, i.e. TV- minimization. Based on the so-called soft recovery framework, we are able to derive rigorous theoretical guarantees of approximate recovery of the positions. Numerical experiments show a satisfactory performance of our method.},\n  keywords = {compressed sensing;heat conduction;initial point-like;instantaneous sources;coarsely sampled thermal fields;heat propagation fields;compressed sensing;TV- minimization;soft recovery framework;approximate recovery;infinite-dimensional compressed sensing;TV;Compressed sensing;Pollution measurement;Sensors;Europe;Signal processing;Heating systems;Infinite-dimensional compressed sensing;Super-resolution;Thermal field;High-coherent dictionary},\n  doi = {10.23919/EUSIPCO.2018.8552939},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437057.pdf},\n}\n\n
\n
\n\n\n
\n We propose a method for resolving the positions of the initial source of heat propagation fields. The method relies on the recent theory of compressed sensing off the grid, i.e., TV-minimization. Based on the so-called soft recovery framework, we are able to derive rigorous theoretical guarantees of approximate recovery of the positions. Numerical experiments show satisfactory performance of our method.\n
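A toy version of the localization task can make the setting concrete: a point source diffuses for a fixed time, the field is sampled coarsely, and the off-grid position is recovered by matching against a fine candidate grid. This is a plain least-squares grid search, not the paper's TV-minimization, and all numerical values are hypothetical:

```python
import numpy as np

T = 0.05                       # diffusion time (hypothetical)
x0_true = 0.337                # off-grid source position (hypothetical)

def heat_kernel(x, x0, t=T):
    """1-D Gaussian heat kernel: field at time t from a point source at x0."""
    return np.exp(-(x - x0) ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)

sensors = np.linspace(0.0, 1.0, 6)       # coarse sampling of the thermal field
field = heat_kernel(sensors, x0_true)

grid = np.linspace(0.0, 1.0, 2001)       # fine candidate grid for the position
residuals = [np.sum((heat_kernel(sensors, c) - field) ** 2) for c in grid]
x0_hat = grid[int(np.argmin(residuals))]
```

Even six coarse samples pin down the single source in the noiseless case; the paper's contribution is handling this in an infinite-dimensional, guaranteed way.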
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Roundoff Noise Analysis for Generalized Direct-Form II Structure of 2-D Separable-Denominator Digital Filters.\n \n \n \n \n\n\n \n Hinamoto, T.; Doi, A.; and Lu, W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 450-454, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RoundoffPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552940,\n  author = {T. Hinamoto and A. Doi and W. Lu},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Roundoff Noise Analysis for Generalized Direct-Form II Structure of 2-D Separable-Denominator Digital Filters},\n  year = {2018},\n  pages = {450-454},\n  abstract = {Based on the concept of polynomial operators, generalized direct-form II structure of two-dimensional (2-D) separable-denominator (SD) digital filters is explored. It is shown that 2-D SD digital filters can be modeled by a generalized SIMO direct- form II and a generalized MISO transposed direct-form II that are connected in cascade. Then an expression for the roundoff noise gain in the resulting structure is derived and investigated. Moreover, the roundoff noise gain is compared with that deduced in a recent study of generalized direct-form II realization of 2-D SD digital filters.},\n  keywords = {digital filters;IIR filters;polynomials;roundoff errors;state-space methods;transfer functions;two-dimensional digital filters;roundoff noise analysis;generalized direct-form II structure;2-D separable-denominator digital filters;2-D SD digital filters;generalized SIMO direct- form II;generalized MISO;roundoff noise gain;direct-form II realization;Mathematical model;Transfer functions;Europe;Two dimensional displays;Digital filters;Quantization (signal)},\n  doi = {10.23919/EUSIPCO.2018.8552940},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433695.pdf},\n}\n\n
\n
\n\n\n
\n Based on the concept of polynomial operators, a generalized direct-form II structure of two-dimensional (2-D) separable-denominator (SD) digital filters is explored. It is shown that 2-D SD digital filters can be modeled by a generalized SIMO direct-form II and a generalized MISO transposed direct-form II connected in cascade. An expression for the roundoff noise gain of the resulting structure is then derived and investigated. Moreover, the roundoff noise gain is compared with that deduced in a recent study of generalized direct-form II realizations of 2-D SD digital filters.\n
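The quantity being analyzed has a simple 1-D analogue: the roundoff noise gain from an injection node equals the energy of the impulse response from that node to the output. A sketch for unit-variance roundoff noise entering at the adder of a (hypothetical) second-order direct-form II section, where it sees the full transfer function H(z) = B(z)/A(z):

```python
import numpy as np

# H(z) = (1 + 0.5 z^-1 + 0.25 z^-2) / (1 - 0.9 z^-1 + 0.81 z^-2)
b = [1.0, 0.5, 0.25]
a = [1.0, -0.9, 0.81]          # stable complex poles at radius 0.9

def impulse_response(b, a, n=2000):
    """Impulse response of B(z)/A(z) by direct recursion."""
    x = np.zeros(n)
    x[0] = 1.0
    h = np.zeros(n)
    for k in range(n):
        acc = sum(b[i] * x[k - i] for i in range(len(b)) if k - i >= 0)
        acc -= sum(a[i] * h[k - i] for i in range(1, len(a)) if k - i >= 0)
        h[k] = acc
    return h

h = impulse_response(b, a)
noise_gain = float(np.sum(h ** 2))   # output variance per unit noise variance
```

The paper generalizes this bookkeeping to the cascaded SIMO/MISO direct-form II structure of 2-D SD filters, where several injection nodes contribute.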
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Linear Convergence of Stochastic Block-Coordinate Fixed Point Algorithms.\n \n \n \n \n\n\n \n Combettes, P. L.; and Pesquet, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 742-746, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"LinearPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552941,\n  author = {P. L. Combettes and J. Pesquet},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Linear Convergence of Stochastic Block-Coordinate Fixed Point Algorithms},\n  year = {2018},\n  pages = {742-746},\n  abstract = {Recent random block-coordinate fixed point algorithms are particularly well suited to large-scale optimization in signal and image processing. These algorithms feature random sweeping rules to select arbitrarily the blocks of variables that are activated over the course of the iterations and they allow for stochastic errors in the evaluation of the operators. The present paper provides new linear convergence results. These convergence rates are compared to those of standard deterministic algorithms both theoretically and experimentally in an image recovery problem.},\n  keywords = {convergence of numerical methods;deterministic algorithms;image denoising;iterative methods;optimisation;stochastic processes;standard deterministic algorithms;stochastic block-coordinate fixed point algorithms;image processing;random sweeping rules;linear convergence;random block-coordinate fixed point algorithms;iterations;signal processing;image recovery problem;Convergence;Signal processing algorithms;Random variables;Signal processing;Standards;Europe;Clustering algorithms;Block-coordinate algorithm;fixed-point algorithm;linear convergence;stochastic algorithm},\n  doi = {10.23919/EUSIPCO.2018.8552941},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436777.pdf},\n}\n\n
\n
\n\n\n
\n Recent random block-coordinate fixed point algorithms are particularly well suited to large-scale optimization in signal and image processing. These algorithms feature random sweeping rules to arbitrarily select the blocks of variables that are activated over the course of the iterations, and they allow for stochastic errors in the evaluation of the operators. The present paper provides new linear convergence results. These convergence rates are compared, both theoretically and experimentally, to those of standard deterministic algorithms in an image recovery problem.\n
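The iteration scheme can be sketched on a small quadratic: the fixed points of the gradient-descent map T coincide with the least-squares solution, and a random sweeping rule updates only one block of coordinates per step. This is a minimal sketch (exact block updates, no stochastic operator errors, hypothetical problem sizes), not the paper's general framework:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 12
A = rng.standard_normal((30, n))
b = rng.standard_normal(30)
x_star = np.linalg.lstsq(A, b, rcond=None)[0]   # the fixed point of T

gamma = 1.0 / np.linalg.norm(A, 2) ** 2         # step size making T averaged
def T(x):
    """Gradient-descent fixed-point map for 0.5 * ||Ax - b||^2."""
    return x - gamma * A.T @ (A @ x - b)

blocks = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
x = np.zeros(n)
for _ in range(3000):
    blk = blocks[rng.integers(3)]   # random sweeping: activate one block
    x[blk] = T(x)[blk]              # update only the activated coordinates

err = np.linalg.norm(x - x_star)
```

On this strongly convex problem the random block iteration converges linearly, which is the regime the paper's rates describe.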
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Non-parametric characterization of gravitational-wave polarizations.\n \n \n \n \n\n\n \n Flamant, J.; Chainais, P.; Chassande-Mottin, E.; Feng, F.; and Bihan, N. L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2658-2662, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Non-parametricPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552942,\n  author = {J. Flamant and P. Chainais and E. Chassande-Mottin and F. Feng and N. L. Bihan},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Non-parametric characterization of gravitational-wave polarizations},\n  year = {2018},\n  pages = {2658-2662},\n  abstract = {Gravitational waves are polarized. Their polarization is essential to characterize the physical and dynamical properties of the source i.e., a coalescing binary of two compact objects such as black holes or neutron stars. Observations with two or more non coaligned detectors like Virgo and LIGO allow to reconstruct the two polarization components usually denoted by h+(t) and h×(t). The amplitude and phase relationship between the two components is related to the source orientation with respect to the observer. Therefore the evolution of the polarization pattern provides evidence for changes in the orientation due to precession or nutation of the binary. Usually, some specific direct dynamical model is exploited to identify the physical parameters of such binaries. Recently, a new framework for the time-frequency analysis of bivariate signals based on a quaternion Fourier transform has been introduced in [1]. It permits to analyze the bivariate signal combining h+(t) and h×(t) by defining its quaternion embedding as well as a set of non-parametric observables, namely Stokes parameters. 
These parameters are remarkably capable of measuring fine properties of the source, in particular by deciphering precession, without close bounds to a specific dynamical model.},\n  keywords = {black holes;gravitational wave detectors;gravitational waves;neutron stars;nonparametric characterization;gravitational-wave polarizations;binary physical parameters;Stokes parameters;nonparametric observables;quaternion Fourier transform;bivariate signal;time-frequency analysis;specific direct dynamical model;polarization pattern;source orientation;phase relationship;polarization components;noncoaligned detectors;neutron stars;black holes;compact objects;coalescing binary;dynamical properties;physical properties;Quaternions;Orbits;Stokes parameters;Observers;Detectors;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8552942},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437781.pdf},\n}\n\n
\n
\n\n\n
\n Gravitational waves are polarized. Their polarization is essential to characterize the physical and dynamical properties of the source, i.e., a coalescing binary of two compact objects such as black holes or neutron stars. Observations with two or more non-coaligned detectors such as Virgo and LIGO allow reconstruction of the two polarization components, usually denoted by h+(t) and h×(t). The amplitude and phase relationship between the two components is related to the orientation of the source with respect to the observer. Therefore, the evolution of the polarization pattern provides evidence for changes in orientation due to precession or nutation of the binary. Usually, a specific direct dynamical model is exploited to identify the physical parameters of such binaries. Recently, a new framework for the time-frequency analysis of bivariate signals based on a quaternion Fourier transform was introduced in [1]. It permits analysis of the bivariate signal combining h+(t) and h×(t) by defining its quaternion embedding as well as a set of non-parametric observables, namely Stokes parameters. These parameters are remarkably capable of measuring fine properties of the source, in particular by deciphering precession, without being closely tied to a specific dynamical model.\n
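Stokes parameters of a bivariate signal can be sketched with ordinary complex analytic signals instead of the paper's quaternion embedding: for a circularly polarized toy waveform (h+ and h× in quadrature, hypothetical frequencies), the normalized S3 approaches ±1 while S1 and S2 vanish.

```python
import numpy as np

fs, f0, n = 1000.0, 30.0, 4000          # hypothetical sampling and tone settings
t = np.arange(n) / fs

# Circularly polarized toy waveform: the two components are in quadrature
h_plus = np.cos(2 * np.pi * f0 * t)
h_cross = np.sin(2 * np.pi * f0 * t)

def analytic(x):
    """Analytic signal via an FFT-based Hilbert transform."""
    X = np.fft.fft(x)
    w = np.zeros(len(x))
    w[0] = 1.0
    w[1:len(x) // 2] = 2.0              # keep positive frequencies, doubled
    w[len(x) // 2] = 1.0
    return np.fft.ifft(X * w)

ap, ac = analytic(h_plus), analytic(h_cross)

# Time-averaged Stokes parameters of the bivariate signal (h_plus, h_cross)
S0 = np.mean(np.abs(ap) ** 2 + np.abs(ac) ** 2)
S1 = np.mean(np.abs(ap) ** 2 - np.abs(ac) ** 2)
S2 = np.mean(2 * np.real(ap * np.conj(ac)))
S3 = np.mean(2 * np.imag(ap * np.conj(ac)))
```

S3/S0 close to 1 flags circular polarization; tracking how this ratio drifts over time is what reveals precession-driven orientation changes.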
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fusion of Community Structures in Multiplex Networks by Label Constraints.\n \n \n \n \n\n\n \n Huang, Y.; Panahi, A.; Krim, H.; and Dai, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 887-891, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FusionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552943,\n  author = {Y. Huang and A. Panahi and H. Krim and L. Dai},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Fusion of Community Structures in Multiplex Networks by Label Constraints},\n  year = {2018},\n  pages = {887-891},\n  abstract = {We develop a Belief Propagation algorithm for community detection problem in multiplex networks, which more accurately represents many real-world systems. Previous works have established that real world multiplex networks exhibit redundant structures/communities, and that community detection performance improves by aggregating (fusing) redundant layers which are generated from the same Stochastic Block Model (SBM). We introduce a probability model for generic multiplex networks, aiming to fuse community structure across layers, without assuming or seeking the same SBM generative model for different layers. Numerical experiment shows that our model finds out consistent communities between layers and yields a significant detectability improvement over the single layer architecture. Our model also achieves a comparable performance to a reference model where we assume consistent communities in prior. 
Finally we compare our method with multilayer modularity optimization in heterogeneous networks, and show that our method detects correct community labels more reliably.},\n  keywords = {belief networks;network theory (graphs);optimisation;probability;stochastic processes;community structure;SBM generative model;single layer architecture;heterogeneous networks;label constraints;community detection problem;real-world systems;probability model;generic multiplex networks;belief propagation algorithm;stochastic block model;real world multiplex networks;Multiplexing;Mathematical model;Belief propagation;Numerical models;Signal processing algorithms;Periodic structures;Inference algorithms;Community detection;Multiplex network;Fusion;Belief propagation;SBM},\n  doi = {10.23919/EUSIPCO.2018.8552943},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437584.pdf},\n}\n\n
\n
\n\n\n
\n We develop a Belief Propagation algorithm for the community detection problem in multiplex networks, which more accurately represent many real-world systems. Previous works have established that real-world multiplex networks exhibit redundant structures/communities, and that community detection performance improves by aggregating (fusing) redundant layers generated from the same Stochastic Block Model (SBM). We introduce a probability model for generic multiplex networks, aiming to fuse community structure across layers without assuming or seeking the same SBM generative model for different layers. Numerical experiments show that our model finds consistent communities between layers and yields a significant detectability improvement over the single-layer architecture. Our model also achieves performance comparable to a reference model in which consistent communities are assumed a priori. Finally, we compare our method with multilayer modularity optimization in heterogeneous networks, and show that our method detects correct community labels more reliably.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Deep Neuroevolution: Training Deep Neural Networks for False Alarm Detection in Intensive Care Units.\n \n \n \n \n\n\n \n Hooman, O. M. J.; Al-Rifaie, M. M.; and Nicolaou, M. A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1157-1161, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552944,\n  author = {O. M. J. Hooman and M. M. Al-Rifaie and M. A. Nicolaou},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep Neuroevolution: Training Deep Neural Networks for False Alarm Detection in Intensive Care Units},\n  year = {2018},\n  pages = {1157-1161},\n  abstract = {We present a neuroevolution based-approach for training neural networks based on genetic algorithms, as applied to the problem of detecting false alarms in Intensive Care Units (ICU) based on physiological data. Typically, optimisation in neural networks is performed via backpropagation (BP) with stochastic gradient-based learning. Nevertheless, recent works (c.f., [1]) have shown promising results in terms of utilising gradient-free, population-based genetic algorithms, suggesting that in certain cases gradient-based optimisation is not the best approach to follow. In this paper, we empirically show that utilising evolutionary and swarm intelligence algorithms can improve the performance of deep neural networks in problems such as the detection of false alarms in ICU. 
In more detail, we present results that improve the state-of-the-art accuracy on the corresponding Physionet challenge, while reducing the number of suppressed true alarms by deploying and adapting Dispersive Flies Optimisation (DFO).},\n  keywords = {backpropagation;genetic algorithms;gradient methods;medical computing;neural nets;stochastic processes;ICU;deep neuroevolution;false alarm detection;physiological data;swarm intelligence algorithms;deep neural networks;intensive care units;genetic algorithms;gradient-based optimisation;evolutionary intelligence algorithms;backpropagation;stochastic gradient-based learning;physionet;dispersive flies optimisation;Optimization;Artificial neural networks;Sociology;Statistics;Electrocardiography;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8552944},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439321.pdf},\n}\n\n
\n
\n\n\n
\n We present a neuroevolution-based approach for training neural networks with genetic algorithms, as applied to the problem of detecting false alarms in Intensive Care Units (ICU) from physiological data. Typically, optimisation in neural networks is performed via backpropagation (BP) with stochastic gradient-based learning. Nevertheless, recent works (cf. [1]) have shown promising results with gradient-free, population-based genetic algorithms, suggesting that in certain cases gradient-based optimisation is not the best approach to follow. In this paper, we empirically show that utilising evolutionary and swarm intelligence algorithms can improve the performance of deep neural networks in problems such as the detection of false alarms in the ICU. In more detail, we present results that improve the state-of-the-art accuracy on the corresponding PhysioNet challenge, while reducing the number of suppressed true alarms by deploying and adapting Dispersive Flies Optimisation (DFO).\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Estimation of Pitch Targets from Speech Signals by Joint Regularized Optimization.\n \n \n \n \n\n\n \n Birkholz, P.; Schmaser, P.; and Xu, Y.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2075-2079, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EstimationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552945,\n  author = {P. Birkholz and P. Schmaser and Y. Xu},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimation of Pitch Targets from Speech Signals by Joint Regularized Optimization},\n  year = {2018},\n  pages = {2075-2079},\n  abstract = {This paper presents a novel method to estimate the pitch target parameters of the target approximation model (TAM). The TAM allows the compact representation of natural pitch contours on a solid theoretical basis and can be used as an intonation model for text-to-speech synthesis. In contrast to previous approaches, the method proposed here estimates the parameters of all targets jointly, uses 5th-order (instead of 3rd-order) linear systems to model the target approximation process, and uses regularization to avoid unnatural pitch targets. The effect of these features on the modeling error and the target parameter distributions are shown. The proposed method has been made available as the open-source software tool TargetOptimizer.},\n  keywords = {approximation theory;optimisation;speech synthesis;joint regularized optimization;target approximation model;TAM;compact representation;intonation model;text-to-speech synthesis;unnatural pitch targets;target parameter distributions;speech signals;natural pitch contours;5th-order linear systems;TargetOptimizer;Estimation;Optimization;Tools;Europe;Signal processing;Predictive models;Software},\n  doi = {10.23919/EUSIPCO.2018.8552945},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433339.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a novel method to estimate the pitch target parameters of the target approximation model (TAM). The TAM allows the compact representation of natural pitch contours on a solid theoretical basis and can be used as an intonation model for text-to-speech synthesis. In contrast to previous approaches, the method proposed here estimates the parameters of all targets jointly, uses 5th-order (instead of 3rd-order) linear systems to model the target approximation process, and uses regularization to avoid unnatural pitch targets. The effect of these features on the modeling error and the target parameter distributions are shown. The proposed method has been made available as the open-source software tool TargetOptimizer.\n
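The target approximation idea can be sketched by forward simulation: an f0 contour starts at an onset value and approaches a linear pitch target with a critically damped 5th-order response, approximated here by a cascade of five identical one-pole stages. All parameter values are hypothetical; the paper's contribution is estimating such targets jointly with regularization, not this simulation:

```python
import numpy as np

fs = 200.0                          # pitch frames per second (hypothetical)
t = np.arange(0, 0.6, 1 / fs)
target = 120.0 + 20.0 * t           # linear pitch target in Hz (hypothetical)

lam = 40.0                          # target approximation rate in 1/s (hypothetical)
alpha = np.exp(-lam / fs)           # one-pole coefficient per stage

# Cascade of 5 identical one-pole stages ~ critically damped 5th-order response
signal = target
for _ in range(5):
    out = np.empty_like(signal)
    state = 100.0                   # onset f0, below the initial target
    for k in range(len(signal)):
        state = alpha * state + (1 - alpha) * signal[k]
        out[k] = state
    signal = out
y = signal                          # modeled f0 contour

final_gap = abs(y[-1] - target[-1])
```

The contour leaves the onset value smoothly and tracks the target ramp with a small steady-state lag, which is the qualitative shape the TAM parameters encode.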
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Lenslet Light Field Panorama Creation: A Sub-Aperture Image Stitching Approach.\n \n \n \n \n\n\n \n Oliveira, A.; Brites, C.; Ascenso, J.; and Pereira, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 236-240, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"LensletPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552948,\n  author = {A. Oliveira and C. Brites and J. Ascenso and F. Pereira},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Lenslet Light Field Panorama Creation: A Sub-Aperture Image Stitching Approach},\n  year = {2018},\n  pages = {236-240},\n  abstract = {Typically, a single light field camera only captures a limited portion of the visual scene and the rendered views offer low spatial resolution since the full sensor resolution also needs to capture the light arriving to the sensor from multiple directions. However, many applications require a large field of view and higher spatial resolution, which cannot be offered by a single lenslet light field image. In this paper, a lenslet light field panorama creation solution is proposed based on the stitching of the sub-aperture images associated to several lenslet light fields. The proposed approach consists in capturing multiple light fields with a single lenslet camera, which is rotated to capture different scene angles; corresponding sub-aperture images associated with the multiple light field images are then stitched while avoiding misalignments. 
The created lenslet light field panorama should preserve the directional light information, thus allowing to change a posteriorithe perspective and the objects in focus on a panorama basis.},\n  keywords = {cameras;image capture;image colour analysis;image reconstruction;image resolution;image segmentation;photography;rendering (computer graphics);rendered views;sensor resolution;visual scene;spatial resolution;lenslet light field image;sub-aperture image stitching;lenslet light field panorama creation;Cameras;Visualization;Feature extraction;Apertures;Two dimensional displays;Arrays;lenslet light field;panorama creation;multi-perspective;stiching},\n  doi = {10.23919/EUSIPCO.2018.8552948},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439274.pdf},\n}\n\n
\n
\n\n\n
\n Typically, a single light field camera captures only a limited portion of the visual scene, and the rendered views offer low spatial resolution, since the full sensor resolution must also capture the light arriving at the sensor from multiple directions. However, many applications require a large field of view and higher spatial resolution, which cannot be offered by a single lenslet light field image. In this paper, a lenslet light field panorama creation solution is proposed based on the stitching of the sub-aperture images associated with several lenslet light fields. The proposed approach consists in capturing multiple light fields with a single lenslet camera, which is rotated to capture different scene angles; corresponding sub-aperture images associated with the multiple light field images are then stitched while avoiding misalignments. The created lenslet light field panorama should preserve the directional light information, thus allowing the perspective and the objects in focus to be changed a posteriori on a panorama basis.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Estimating Faults Modes in Ball Bearing Machinery using a Sparse Reconstruction Framework*.\n \n \n \n\n\n \n Juhlin, M.; Swärd, J.; Pesavento, M.; and Jakobsson, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2330-2334, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552950,\n  author = {M. Juhlin and J. Swärd and M. Pesavento and A. Jakobsson},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimating Faults Modes in Ball Bearing Machinery using a Sparse Reconstruction Framework*},\n  year = {2018},\n  pages = {2330-2334},\n  abstract = {In this work, we present a computationally efficient algorithm for estimating fault modes in ball bearing systems. The presented method generalizes and improves upon earlier developed sparse reconstruction techniques, allowing for detecting multiple fault modes. The measured signal is corrupted with additive and multiplicative noise, yielding a signal that is highly erratic. Fortunately, the damaged ball bearings give rise to strong periodical structures which may be exploited when forming the proposed detector. Numerical simulations illustrate the preferred performance of the proposed method.},\n  keywords = {ball bearings;compressed sensing;fault diagnosis;signal reconstruction;sparse reconstruction techniques;multiple fault modes;ball bearing systems;sparse reconstruction framework;ball bearing machinery;strong periodical structures;damaged ball bearings;multiplicative noise;additive noise;Harmonic analysis;Ball bearings;Frequency modulation;Signal processing algorithms;Europe;Ball bearing systems;sparse reconstruction;convex optimization;ADMM},\n  doi = {10.23919/EUSIPCO.2018.8552950},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n In this work, we present a computationally efficient algorithm for estimating fault modes in ball bearing systems. The presented method generalizes and improves upon earlier sparse reconstruction techniques, allowing multiple fault modes to be detected. The measured signal is corrupted by additive and multiplicative noise, yielding a signal that is highly erratic. Fortunately, the damaged ball bearings give rise to strong periodic structures which may be exploited when forming the proposed detector. Numerical simulations illustrate the preferable performance of the proposed method.\n
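The sparse reconstruction machinery behind such detectors can be sketched with a generic ADMM solver for the LASSO: a few active columns of a dictionary (standing in for fault-mode signatures, with hypothetical sizes and amplitudes) are recovered from a noisy measurement. This is the textbook formulation, not the paper's generalized estimator:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 60, 120
A = rng.standard_normal((m, n)) / np.sqrt(m)   # dictionary, ~unit-norm columns
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [2.0, -1.5, 1.0]         # sparse amplitudes (toy fault modes)
y = A @ x_true + 0.01 * rng.standard_normal(m)

def lasso_admm(A, y, lam=0.05, rho=1.0, iters=300):
    """ADMM for min 0.5 * ||Ax - y||^2 + lam * ||x||_1."""
    n = A.shape[1]
    AtA, Aty = A.T @ A, A.T @ y
    Q = np.linalg.inv(AtA + rho * np.eye(n))   # cached factor for the x-update
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = Q @ (Aty + rho * (z - u))
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z
    return z

x_hat = lasso_admm(A, y)
support = set(np.flatnonzero(np.abs(x_hat) > 0.5))
```

Recovering the support is the detection step; the paper additionally exploits the periodic harmonic structure of bearing faults when building its dictionary.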
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Sequential Spatia-temporal Symbol-level Precoding Enabling Faster-than-Nyquist Signaling for Multi-user MISO Systems.\n \n \n \n\n\n \n Spano, D.; Chatzinotas, S.; and Ottersten, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 827-831, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552952,\n  author = {D. Spano and S. Chatzinotas and B. Ottersten},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sequential Spatia-temporal Symbol-level Precoding Enabling Faster-than-Nyquist Signaling for Multi-user MISO Systems},\n  year = {2018},\n  pages = {827-831},\n  abstract = {This paper addresses the problem of the interference between multiple co-channel transmissions in the downlink of a multi-antenna wireless system. In this context, symbol-level precoding achieves a constructive interference effect which results in SINR gains at the receivers side. Usually the constructive interference is exploited in the spatial dimension (multi-user interference), however in this work we consider a spatio-temporal precoding model which allows to exploit the interference also in the temporal dimension (inter-symbol interference). The proposed method, which optimizes the overs ampled transmit waveforms by minimizing the per-antenna transmit power, allows faster-than-Nyquist signaling over multi-user MISO systems without imposing additional complexity at the user terminals. The optimization is performed in a sequential fashion, by splitting the data streams in blocks and handling the inter-block interference. 
Numerical results are presented to assess the gains of the scheme in terms of effective rate and energy efficiency.},\n  keywords = {antenna arrays;intersymbol interference;MIMO communication;precoding;multiuser MISO systems;multiple co-channel transmissions;multiantenna wireless system;constructive interference effect;spatial dimension;spatio-temporal precoding model;temporal dimension;inter-symbol interference;per-antenna transmit power;faster-than-Nyquist signaling;inter-block interference;sequential spatia-temporal symbol-level precoding;Interference;Precoding;Optimization;Transmitting antennas;MISO communication;Receivers;Minimization},\n  doi = {10.23919/EUSIPCO.2018.8552952},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of interference between multiple co-channel transmissions in the downlink of a multi-antenna wireless system. In this context, symbol-level precoding achieves a constructive interference effect which results in SINR gains at the receiver side. Usually the constructive interference is exploited in the spatial dimension (multi-user interference); in this work, however, we consider a spatio-temporal precoding model which allows the interference to be exploited also in the temporal dimension (inter-symbol interference). The proposed method, which optimizes the oversampled transmit waveforms by minimizing the per-antenna transmit power, allows faster-than-Nyquist signaling over multi-user MISO systems without imposing additional complexity at the user terminals. The optimization is performed in a sequential fashion, by splitting the data streams into blocks and handling the inter-block interference. Numerical results are presented to assess the gains of the scheme in terms of effective rate and energy efficiency.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modulation-Domain Parametric Multichannel Kalman Filtering for Speech Enhancement.\n \n \n \n \n\n\n \n Xue, W.; Moore, A. H.; Brookes, M.; and Naylor, P. A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2509-2513, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Modulation-DomainPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552954,\n  author = {W. Xue and A. H. Moore and M. Brookes and P. A. Naylor},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Modulation-Domain Parametric Multichannel Kalman Filtering for Speech Enhancement},\n  year = {2018},\n  pages = {2509-2513},\n  abstract = {The goal of speech enhancement is to reduce the noise signal while keeping the speech signal undistorted. Recently we developed the multichannel Kalman filtering (MKF) for speech enhancement, in which the temporal evolution of the speech signal and the spatial correlation between multichannel observations are jointly exploited to estimate the clean signal. In this paper, we extend the previous work to derive a parametric MKF (PMKF), which incorporates a controlling factor to achieve the trade-off between the speech distortion and noise reduction. The controlling factor weights between the speech distortion and noise reduction related terms in the cost function of PMKF, and based on the minimum mean squared error (MMSE) criterion, the optimal PMKF gain is derived. We analyse the performance of the proposed PMKF and show the differences with the speech distortion weighted multichannel Wiener filter (SDW-MWF). 
We conduct experiments in different noisy conditions to evaluate the impact of the controlling factor on the noise reduction performance, and the results demonstrate the effectiveness of the proposed method.},\n  keywords = {Kalman filters;least mean squares methods;mean square error methods;speech enhancement;Wiener filters;factor weights;speech signal;noise reduction performance;speech distortion weighted multichannel Wiener filter;PMKF;parametric MKF;clean signal;multichannel observations;noise signal;speech enhancement;modulation-domain parametric multichannel Kalman filtering;Distortion;Noise reduction;Speech enhancement;Noise measurement;Estimation;Cost function;Speech enhancement;Microphone arrays;Kalman filtering;Modulation domain},\n  doi = {10.23919/EUSIPCO.2018.8552954},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438011.pdf},\n}\n\n
\n
\n\n\n
\n The goal of speech enhancement is to reduce the noise signal while keeping the speech signal undistorted. Recently we developed the multichannel Kalman filtering (MKF) for speech enhancement, in which the temporal evolution of the speech signal and the spatial correlation between multichannel observations are jointly exploited to estimate the clean signal. In this paper, we extend the previous work to derive a parametric MKF (PMKF), which incorporates a controlling factor to achieve the trade-off between the speech distortion and noise reduction. The controlling factor weights between the speech distortion and noise reduction related terms in the cost function of PMKF, and based on the minimum mean squared error (MMSE) criterion, the optimal PMKF gain is derived. We analyse the performance of the proposed PMKF and show the differences with the speech distortion weighted multichannel Wiener filter (SDW-MWF). We conduct experiments in different noisy conditions to evaluate the impact of the controlling factor on the noise reduction performance, and the results demonstrate the effectiveness of the proposed method.\n
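The distortion/noise-reduction trade-off this abstract describes can be illustrated with the classical parametric Wiener gain, which uses the same weighting idea; this is a hedged sketch, not the paper's PMKF derivation, and the values of `xi` and `mu` are illustrative only:

```python
# Hedged sketch of the distortion/noise-reduction trade-off (not the
# paper's PMKF): in the classical parametric Wiener filter a control
# factor mu trades speech distortion against noise suppression via
#   G = xi / (xi + mu),  xi = a-priori speech-to-noise power ratio.
def parametric_wiener_gain(xi, mu):
    return xi / (xi + mu)

xi = 2.0                               # illustrative a-priori SNR (3 dB)
for mu in (0.5, 1.0, 2.0):             # larger mu -> stronger suppression
    print(mu, round(parametric_wiener_gain(xi, mu), 3))
```

Larger `mu` lowers the gain (more noise suppression, more speech distortion); `mu = 1` recovers the ordinary Wiener gain.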
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Combined Sparse Regularization for Nonlinear Adaptive Filters.\n \n \n \n \n\n\n \n Comminiello, D.; Scarpiniti, M.; Scardapane, S.; Azpicueta-Ruiz, L. A.; and Uncini, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 336-340, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CombinedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552955,\n  author = {D. Comminiello and M. Scarpiniti and S. Scardapane and L. A. Azpicueta-Ruiz and A. Uncini},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Combined Sparse Regularization for Nonlinear Adaptive Filters},\n  year = {2018},\n  pages = {336-340},\n  abstract = {Nonlinear adaptive filters often show some sparse behavior due to the fact that not all the coefficients are equally useful for the modeling of any nonlinearity. Recently, a class of proportionate algorithms has been proposed for nonlinear filters to leverage sparsity of their coefficients. However, the choice of the norm penalty of the cost function may be not always appropriate depending on the problem. In this paper, we introduce an adaptive combined scheme based on a block-based approach involving two nonlinear filters with different regularization that allows to achieve always superior performance than individual rules. The proposed method is assessed in nonlinear system identification problems, showing its effectiveness in taking advantage of the online combined regularization.},\n  keywords = {adaptive filters;least mean squares methods;nonlinear filters;norm penalty;cost function;online combined regularization;nonlinear system identification problems;adaptive combined scheme;sparse behavior;nonlinear adaptive filters;sparse regularization;Adaptation models;Adaptive filters;Lips;Europe;Cost function;Indexes;Sparse Regularization;Functional Links;Linear-in-the-Parameters Nonlinear Filters;Sparse Adaptive Filters;Adaptive Combination of Filters},\n  doi = {10.23919/EUSIPCO.2018.8552955},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439439.pdf},\n}\n\n
\n
\n\n\n
\n Nonlinear adaptive filters often show some sparse behavior due to the fact that not all the coefficients are equally useful for the modeling of any nonlinearity. Recently, a class of proportionate algorithms has been proposed for nonlinear filters to leverage sparsity of their coefficients. However, the choice of the norm penalty of the cost function may be not always appropriate depending on the problem. In this paper, we introduce an adaptive combined scheme based on a block-based approach involving two nonlinear filters with different regularization that allows to achieve always superior performance than individual rules. The proposed method is assessed in nonlinear system identification problems, showing its effectiveness in taking advantage of the online combined regularization.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low Cost Setup for High Resolution Multiview Panorama Recording and Registration.\n \n \n \n \n\n\n \n Ueberheide, M.; Muehlhausen, M.; and Magnor, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 231-235, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"LowPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552956,\n  author = {M. Ueberheide and M. Muehlhausen and M. Magnor},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Low Cost Setup for High Resolution Multiview Panorama Recording and Registration},\n  year = {2018},\n  pages = {231-235},\n  abstract = {Affordable consumer panorama solutions are still limited to low resolution and synchronization for multiple cameras has to be done externally. Professional equipment that provides high resolution recording and synchronization is not widely available yet. In this paper we present a low cost setup to record multiple high resolution panorama videos at the same time. We perform an external synchronization. To allow further computer vision algorithms to process these recordings the videos are adjusted and aligned automatically by extracting exact camera positions.},\n  keywords = {cameras;computer vision;image resolution;video signal processing;high resolution multiview panorama recording;professional equipment;high resolution recording;computer vision algorithms;consumer panorama solutions;high resolution panorama videos;synchronization;cameras;Cameras;Synchronization;Videos;Calibration;Europe;Batteries;panorama capture;alignment;calibration},\n  doi = {10.23919/EUSIPCO.2018.8552956},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436682.pdf},\n}\n\n
\n
\n\n\n
\n Affordable consumer panorama solutions are still limited to low resolution and synchronization for multiple cameras has to be done externally. Professional equipment that provides high resolution recording and synchronization is not widely available yet. In this paper we present a low cost setup to record multiple high resolution panorama videos at the same time. We perform an external synchronization. To allow further computer vision algorithms to process these recordings the videos are adjusted and aligned automatically by extracting exact camera positions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Beam Shape Calibration for Multi-Beam Radio Astronomical Phased Arrays.\n \n \n \n \n\n\n \n Wijnholds, S. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2663-2667, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BeamPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552957,\n  author = {S. J. Wijnholds},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Beam Shape Calibration for Multi-Beam Radio Astronomical Phased Arrays},\n  year = {2018},\n  pages = {2663-2667},\n  abstract = {The ability of phased array systems to form multiple beams simultaneously holds great promise to revolutionise radio astronomy by enlarging the instantaneous sky coverage. Currently, measurements obtained by different beams are largely treated as separate observations. In this paper, a new approach to beam shape calibration, i.e., the calibration of direction dependent instrumental gains, is presented, in which the fact that these observations are done simultaneously is adequately exploited. It follows from imposing a novel spatial constraint on the direction dependent instrumental gain model. It may provide better calibration performance and may even allow beam shape calibration in scenarios in which the observations with the individual beams provide insufficient information to do so when they are treated as separate observations.},\n  keywords = {calibration;radioastronomical techniques;radioastronomy;beam shape calibration;phased array systems;multiple beams;direction dependent instrumental gain model;calibration performance;multibeam radio astronomical phased arrays;Calibration;Shape;Instruments;Apertures;Phased arrays;Compounds;Arrays},\n  doi = {10.23919/EUSIPCO.2018.8552957},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436687.pdf},\n}\n\n
\n
\n\n\n
\n The ability of phased array systems to form multiple beams simultaneously holds great promise to revolutionise radio astronomy by enlarging the instantaneous sky coverage. Currently, measurements obtained by different beams are largely treated as separate observations. In this paper, a new approach to beam shape calibration, i.e., the calibration of direction dependent instrumental gains, is presented, in which the fact that these observations are done simultaneously is adequately exploited. It follows from imposing a novel spatial constraint on the direction dependent instrumental gain model. It may provide better calibration performance and may even allow beam shape calibration in scenarios in which the observations with the individual beams provide insufficient information to do so when they are treated as separate observations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Active Learning for One-Class Classification Using Two One-Class Classifiers.\n \n \n \n \n\n\n \n Schlachter, P.; and Yang, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1197-1201, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ActivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552958,\n  author = {P. Schlachter and B. Yang},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Active Learning for One-Class Classification Using Two One-Class Classifiers},\n  year = {2018},\n  pages = {1197-1201},\n  abstract = {This paper introduces a novel, generic active learning method for one-class classification. Active learning methods play an important role to reduce the efforts of manual labeling in the field of machine learning. Although many active learning approaches have been proposed during the last years, most of them are restricted on binary or multi-class problems. One-class classifiers use samples from only one class, the so-called target class, during training and hence require special active learning strategies. The few strategies proposed for one-class classification either suffer from their limitation on specific one-class classifiers or their performance depends on particular assumptions about datasets like imbalance. Our proposed method bases on using two one-class classifiers, one for the desired target class and one for the so-called outlier class. It allows to invent new query strategies, to use binary query strategies and to define simple stopping criteria. Based on the new method, two query strategies are proposed. The provided experiments compare the proposed approach with known strategies on various datasets and show improved results in almost all situations.},\n  keywords = {learning (artificial intelligence);pattern classification;query processing;outlier class;query strategies;one-class classification;machine learning;two one-class classifiers;generic active learning;Training;Uncertainty;Learning systems;Labeling;Signal processing;Space exploration;Europe},\n  doi = {10.23919/EUSIPCO.2018.8552958},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438937.pdf},\n}\n\n
\n
\n\n\n
\n This paper introduces a novel, generic active learning method for one-class classification. Active learning methods play an important role to reduce the efforts of manual labeling in the field of machine learning. Although many active learning approaches have been proposed during the last years, most of them are restricted on binary or multi-class problems. One-class classifiers use samples from only one class, the so-called target class, during training and hence require special active learning strategies. The few strategies proposed for one-class classification either suffer from their limitation on specific one-class classifiers or their performance depends on particular assumptions about datasets like imbalance. Our proposed method bases on using two one-class classifiers, one for the desired target class and one for the so-called outlier class. It allows to invent new query strategies, to use binary query strategies and to define simple stopping criteria. Based on the new method, two query strategies are proposed. The provided experiments compare the proposed approach with known strategies on various datasets and show improved results in almost all situations.\n
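A hedged illustration of the two-classifier idea described above (not the paper's actual algorithm or query strategies): with one score from a model of the target class and one from a model of the known outliers, a simple binary query strategy asks for a label on the unlabeled point whose two scores are closest. The prototype-distance "classifier" below is a toy stand-in for any real one-class model, and all data are made up:

```python
import numpy as np

# Toy one-class score: negative distance to a class prototype. A real
# system would use an actual one-class classifier here; this stand-in
# only illustrates the query-selection step.
def one_class_score(X, prototype):
    return -np.linalg.norm(X - prototype, axis=1)

target_proto = np.array([0.0, 0.0])    # hypothetical target-class centre
outlier_proto = np.array([4.0, 4.0])   # hypothetical outlier-class centre
X_unlabeled = np.array([[0.5, 0.5], [2.0, 2.0], [3.5, 3.5]])

s_target = one_class_score(X_unlabeled, target_proto)
s_outlier = one_class_score(X_unlabeled, outlier_proto)
# query the most ambiguous point: the one the two models disagree on least
query_idx = int(np.argmin(np.abs(s_target - s_outlier)))
print(query_idx)
```

Here the mid-way point `[2.0, 2.0]` is selected, since both models score it equally.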
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Dynamic Allocation of Processing Resources in Cloud-RAN for a Virtualised 5G Mobile Network.\n \n \n \n \n\n\n \n Zhang, Y.; Barusso, F.; Collins, D.; Ruffini, M.; and DaSilva, L. A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 782-786, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DynamicPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552959,\n  author = {Y. Zhang and F. Barusso and D. Collins and M. Ruffini and L. A. DaSilva},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Dynamic Allocation of Processing Resources in Cloud-RAN for a Virtualised 5G Mobile Network},\n  year = {2018},\n  pages = {782-786},\n  abstract = {One of the main research directions for 5G mobile networks is resource virtualisation and slicing. Towards this goal, the Cloud Radio Access Network (C-RAN) architecture offers mobile operators a flexible and dynamic framework for managing resources and processing data. This paper proposes a dynamic allocation approach for processing resources in a C- RAN supported by the concept of Network Function Virtualisation (NFV). To achieve this objective, we virtualised the Baseband Unit (BBU) resources for Long Term Evolution (LTE) mobile network into a BBU pool supported by Linux Container (LXC) technology. We report on experiments conducted in the Iris testbed with high-definition video streaming by implementing Software-Defined Radio (SDR)-based LTE functionality with the virtualised BBU pool. 
Our results show a significant improvement in the quality of the video transmission with this dynamic allocation approach.},\n  keywords = {cloud computing;Linux;Long Term Evolution;radio access networks;software radio;video streaming;virtualisation;virtualised BBU pool;Software-Defined Radio-based LTE functionality;Linux Container technology;Long Term Evolution mobile network;Baseband Unit resources;Network Function Virtualisation;dynamic allocation approach;flexible framework;mobile operators;C-RAN;Cloud Radio Access Network architecture;slicing;resource virtualisation;main research directions;virtualised 5G mobile Network;Cloud-RAN;processing resources;Resource management;Containers;Dynamic scheduling;Long Term Evolution;Streaming media;Virtual machining;Indexes;C-RAN;NFV;container;5G;testbed},\n  doi = {10.23919/EUSIPCO.2018.8552959},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435631.pdf},\n}\n\n
\n
\n\n\n
\n One of the main research directions for 5G mobile networks is resource virtualisation and slicing. Towards this goal, the Cloud Radio Access Network (C-RAN) architecture offers mobile operators a flexible and dynamic framework for managing resources and processing data. This paper proposes a dynamic allocation approach for processing resources in a C- RAN supported by the concept of Network Function Virtualisation (NFV). To achieve this objective, we virtualised the Baseband Unit (BBU) resources for Long Term Evolution (LTE) mobile network into a BBU pool supported by Linux Container (LXC) technology. We report on experiments conducted in the Iris testbed with high-definition video streaming by implementing Software-Defined Radio (SDR)-based LTE functionality with the virtualised BBU pool. Our results show a significant improvement in the quality of the video transmission with this dynamic allocation approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effect of Training and Test Datasets on Image Restoration and Super-Resolution by Deep Learning.\n \n \n \n \n\n\n \n Kirmemis, O.; and Tekalp, A. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 514-518, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EffectPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552961,\n  author = {O. Kirmemis and A. M. Tekalp},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Effect of Training and Test Datasets on Image Restoration and Super-Resolution by Deep Learning},\n  year = {2018},\n  pages = {514-518},\n  abstract = {Many papers have recently been published on image restoration and single-image super-resolution (SISR) using different deep neural network architectures, training methodology, and datasets. The standard approach for performance evaluation in these papers is to provide a single “average” mean-square error (MSE) and/or structural similarity index (SSIM) value over a test dataset. Since deep learning is data-driven, performance of the proposed methods depends on the size of the training and test sets as well as the variety and complexity of images in them. Furthermore, the performance varies across different images within the same test set. Hence, comparison of different architectures and training methods using a single average performance measure is difficult, especially when they are not using the same training and test sets. We propose new measures to characterize the variety and complexity of images in the training and test sets, and show that our proposed dataset complexity measures correlate well with the mean PSNR and SSIM values obtained on different test data sets. 
Hence, better characterization of performance of different methods is possible if the mean and variance of the MSE or SSIM over the test set as well as the size, resolution and complexity measures of the training and test sets are specified.},\n  keywords = {image denoising;image resolution;image restoration;learning (artificial intelligence);mean square error methods;neural net architecture;training methods;image restoration;deep learning;single-image super-resolution;performance evaluation;test datasets;deep neural network architectures;SISR;mean-square error;MSE;structural similarity index;SSIM;PSNR;Training;Complexity theory;Image restoration;Image resolution;Frequency measurement;Signal resolution;Image restoration;super-resolution;convolutional nets;deep learning;complexity of training and test datasets},\n  doi = {10.23919/EUSIPCO.2018.8552961},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437651.pdf},\n}\n\n
\n
\n\n\n
\n Many papers have recently been published on image restoration and single-image super-resolution (SISR) using different deep neural network architectures, training methodology, and datasets. The standard approach for performance evaluation in these papers is to provide a single “average” mean-square error (MSE) and/or structural similarity index (SSIM) value over a test dataset. Since deep learning is data-driven, performance of the proposed methods depends on the size of the training and test sets as well as the variety and complexity of images in them. Furthermore, the performance varies across different images within the same test set. Hence, comparison of different architectures and training methods using a single average performance measure is difficult, especially when they are not using the same training and test sets. We propose new measures to characterize the variety and complexity of images in the training and test sets, and show that our proposed dataset complexity measures correlate well with the mean PSNR and SSIM values obtained on different test data sets. Hence, better characterization of performance of different methods is possible if the mean and variance of the MSE or SSIM over the test set as well as the size, resolution and complexity measures of the training and test sets are specified.\n
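The reporting recommendation above (mean and variance of a per-image metric, rather than a single average) can be sketched as follows; the images, noise level, and metric values here are synthetic, for illustration only:

```python
import numpy as np

# Synthetic sketch of the reporting recommendation: compute a per-image
# metric over a test set and report its mean AND variance, not just a
# single average. The test images and distortion are made up.
rng = np.random.default_rng(0)
ref = rng.random((5, 32, 32))                   # 5 hypothetical test images
out = np.clip(ref + 0.01 * rng.standard_normal(ref.shape), 0.0, 1.0)

mse = ((ref - out) ** 2).mean(axis=(1, 2))      # per-image MSE
psnr = 10.0 * np.log10(1.0 / mse)               # per-image PSNR, peak = 1.0
print(f"PSNR: mean={psnr.mean():.1f} dB, var={psnr.var():.2f}")
```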
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Light-fields of Circular Camera Arrays.\n \n \n \n\n\n \n Cserkaszky, A.; Kara, P. A.; Barsi, A.; Martini, M. G.; and Balogh, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 241-245, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8552998,\n  author = {A. Cserkaszky and P. A. Kara and A. Barsi and M. G. Martini and T. Balogh},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Light-fields of Circular Camera Arrays},\n  year = {2018},\n  pages = {241-245},\n  abstract = {The ray structure and sampling properties of different light-field representations inherently determine their use-cases. Currently prevalent linear data structures do not allow for joint processing of light-fields captured from multiple sides of a scene. In this paper, we review and highlight the differences in capturing and reconstruction between light-fields captured with linear and circular camera arrays. We also examine and improve the processing of light-fields captured with circular camera arrays with a focus on their use in reconstructing dense light-fields, by proposing a new resampling technique for circular light-fields. The proposed circular epipolar light-field structure creates a simple sinusoidal relation between the objects of the scene and their curves in the epipolar image, opening the way of efficient reconstruction of circular light-fields.},\n  keywords = {cameras;data structures;image capture;image reconstruction;image representation;image sampling;rendering (computer graphics);linear data structures;linear camera arrays;dense light-fields reconstruction;resampling technique;epipolar image;circular epipolar light-field structure;light-field representations;sampling properties;ray structure;circular camera arrays;Cameras;Arrays;Image reconstruction;Array signal processing;Europe;Two dimensional displays;light-field;capture configuration;camera array},\n  doi = {10.23919/EUSIPCO.2018.8552998},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The ray structure and sampling properties of different light-field representations inherently determine their use-cases. Currently prevalent linear data structures do not allow for joint processing of light-fields captured from multiple sides of a scene. In this paper, we review and highlight the differences in capturing and reconstruction between light-fields captured with linear and circular camera arrays. We also examine and improve the processing of light-fields captured with circular camera arrays with a focus on their use in reconstructing dense light-fields, by proposing a new resampling technique for circular light-fields. The proposed circular epipolar light-field structure creates a simple sinusoidal relation between the objects of the scene and their curves in the epipolar image, opening the way of efficient reconstruction of circular light-fields.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Characterization of Mental States through Node Connectivity between Brain Signals.\n \n \n \n \n\n\n \n Cattai, T.; Colonnese, S.; Corsi, M.; Bassett, D. S.; Scarano, G.; and De Vico Fallani, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1377-1381, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CharacterizationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553000,\n  author = {T. Cattai and S. Colonnese and M. Corsi and D. S. Bassett and G. Scarano and F. {De Vico Fallani}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Characterization of Mental States through Node Connectivity between Brain Signals},\n  year = {2018},\n  pages = {1377-1381},\n  abstract = {Discriminating mental states from brain signals is crucial for many applications in cognitive and clinical neuroscience. Most of the studies relied on the feature extraction from the activity of single brain areas, thus neglecting the potential contribution of their functional coupling, or connectivity. Here, we consider spectral coherence and imaginary coherence to infer brain connectivity networks from electroencephalographic (EEG) signals recorded during motor imagery and resting states in a group of healthy subjects. By using a graph theoretic approach, we then extract the weighted node degree from each network and evaluate its ability to discriminate the two mental states as a function of the number of available observations. The obtained results show that the features extracted from spectral coherence networks outperform those obtained from imaginary coherence in terms of significant difference, neurophysiological interpretation and reliability with fewer observations. 
Taken together, these findings suggest that graph algebraic descriptors of brain connectivity networks can be further explored to classify mental states.},\n  keywords = {cognition;electroencephalography;feature extraction;graph theory;medical signal processing;neurophysiology;signal classification;mental states;node connectivity;brain signals;feature extraction;single brain areas;imaginary coherence;brain connectivity networks;electroencephalographic signals;motor imagery;resting states;weighted node degree;spectral coherence networks;graph theoretic approach;neurophysiological interpretation;graph algebraic descriptors;Coherence;Electroencephalography;Electrodes;Silicon;Signal processing;Europe;EEG;Spectral Coherence;Imaginary Coherence;Weighted Node degree;Motor imagery},\n  doi = {10.23919/EUSIPCO.2018.8553000},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437746.pdf},\n}\n\n
\n
\n\n\n
\n Discriminating mental states from brain signals is crucial for many applications in cognitive and clinical neuroscience. Most of the studies relied on the feature extraction from the activity of single brain areas, thus neglecting the potential contribution of their functional coupling, or connectivity. Here, we consider spectral coherence and imaginary coherence to infer brain connectivity networks from electroencephalographic (EEG) signals recorded during motor imagery and resting states in a group of healthy subjects. By using a graph theoretic approach, we then extract the weighted node degree from each network and evaluate its ability to discriminate the two mental states as a function of the number of available observations. The obtained results show that the features extracted from spectral coherence networks outperform those obtained from imaginary coherence in terms of significant difference, neurophysiological interpretation and reliability with fewer observations. Taken together, these findings suggest that graph algebraic descriptors of brain connectivity networks can be further explored to classify mental states.\n
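The graph-theoretic step described above reduces to a simple operation once a connectivity matrix has been estimated; this is an illustrative sketch (not the paper's code), with made-up coherence values:

```python
import numpy as np

# Illustrative sketch: given a coherence-based connectivity matrix W
# (symmetric, zero diagonal, one entry per electrode pair), the
# weighted node degree used as a feature is the sum of edge weights
# at each node, i.e. a row sum of W.
W = np.array([
    [0.0, 0.8, 0.1],
    [0.8, 0.0, 0.4],
    [0.1, 0.4, 0.0],
])  # hypothetical spectral-coherence values for 3 electrodes

weighted_degree = W.sum(axis=1)   # one feature per node (electrode)
print(weighted_degree)
```

Each entry summarizes how strongly one electrode couples to the rest of the network, giving a low-dimensional feature vector for classifying mental states.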
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient ML-Estimator for Blind Reverberation Time Estimation.\n \n \n \n \n\n\n \n Löllmann, H. W.; Brendel, A.; and Kellermann, W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2195-2199, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553001,\n  author = {H. W. Löllmann and A. Brendel and W. Kellermann},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient ML-Estimator for Blind Reverberation Time Estimation},\n  year = {2018},\n  pages = {2195-2199},\n  abstract = {A new maximum likelihood (ML) estimator for the blind estimation of the reverberation time (RT) is derived. In contrast to previously proposed ML-based reverberation time estimators, the RT estimate is obtained by a simple closed-form expression, which leads to significant computational savings. Moreover, it is shown that the new estimator is unbiased and reaches the Cramér-Rao lower bound. The proposed RT estimator achieves a similar estimation accuracy but involves a significantly lower computational complexity compared to an ML-based RT estimator that scored among the best at the ACE Challenge.},\n  keywords = {maximum likelihood estimation;reverberation;efficient ML-estimator;blind reverberation time estimation;maximum likelihood estimator;ML-based reverberation time estimators;simple closed-form expression;RT estimator;computational savings;estimation accuracy;Cramer-Rao lower bound;Maximum likelihood estimation;Signal processing;Reverberation;Europe;Closed-form solutions;Indexes},\n  doi = {10.23919/EUSIPCO.2018.8553001},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437986.pdf},\n}\n\n
\n
\n\n\n
\n A new maximum likelihood (ML) estimator for the blind estimation of the reverberation time (RT) is derived. In contrast to previously proposed ML-based reverberation time estimators, the RT estimate is obtained by a simple closed-form expression, which leads to significant computational savings. Moreover, it is shown that the new estimator is unbiased and reaches the Cramér-Rao lower bound. The proposed RT estimator achieves a similar estimation accuracy but involves a significantly lower computational complexity compared to an ML-based RT estimator that scored among the best at the ACE Challenge.\n
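For context, reverberation time (RT60) is classically estimated non-blindly from a measured room impulse response via Schroeder backward integration. The sketch below shows that classical baseline, not the paper's blind ML estimator; the impulse response is synthetic with a known RT60 of 0.5 s:

```python
import numpy as np

# Classical (non-blind) Schroeder method: backward-integrate the energy
# of a measured room impulse response and fit a line to the decay curve
# in dB; RT60 is the time for a 60 dB drop at the fitted slope.
def rt60_schroeder(h, fs, fit_range=(-5.0, -25.0)):
    energy = h ** 2
    edc = np.cumsum(energy[::-1])[::-1]          # energy decay curve
    edc_db = 10.0 * np.log10(edc / edc[0])
    hi, lo = fit_range                            # fit between -5 and -25 dB
    idx = np.where((edc_db <= hi) & (edc_db >= lo))[0]
    slope, _ = np.polyfit(idx / fs, edc_db[idx], 1)   # dB per second
    return -60.0 / slope

# synthetic exponentially decaying impulse response with RT60 = 0.5 s
fs = 8000
t = np.arange(int(0.5 * fs)) / fs
h = np.exp(-3.0 * np.log(10) * t / 0.5)          # 60 dB amplitude decay in 0.5 s
print(round(rt60_schroeder(h, fs), 2))
```

A blind estimator, as in the paper, must recover the same quantity from reverberant speech alone, without access to the impulse response.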
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Performance Evaluation of No-Reference Image Quality Metrics for Visible Wavelength Iris Biometric Images.\n \n \n \n\n\n \n Liu, X.; Charrier, C.; Pedersen, M.; and Bours, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1437-1441, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553002,\n  author = {X. Liu and C. Charrier and M. Pedersen and P. Bours},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Performance Evaluation of No-Reference Image Quality Metrics for Visible Wavelength Iris Biometric Images},\n  year = {2018},\n  pages = {1437-1441},\n  abstract = {Image quality assessment plays an important role in iris recognition systems because the system performance is affected by low quality iris images. With the development of electronic color imaging, there are more and more researches about visible wavelength (VW) iris recognition. Compared to the near infrared iris images, using VW iris images acquired under unconstrained imaging conditions is a more challenging task for the iris recognition system. However, the number of quality assessment methods for VW iris images is limited. Therefore, it is interested to investigate whether existing no-reference image quality metrics (IQMs) which are designed for natural images can assess the quality of VW iris images. In this paper, we evaluate the performance of 15 selected no-reference IQMs on VW iris biometrics. 
The experimental results show that several IQMs can assess iris sample quality according to the system performance.},\n  keywords = {biometrics (access control);feature extraction;image colour analysis;iris recognition;performance evaluation;visible wavelength iris biometric images;image quality assessment;iris recognition system;system performance;low quality;electronic color imaging;VW;infrared iris images;unconstrained imaging conditions;quality assessment methods;no-reference image quality metrics;natural images;iris sample quality;Iris recognition;Image quality;Biomedical imaging;Databases;Measurement;Distortion;Quality assessment;biometric;image quality assessment;visible wavelength iris;performance evaluation;image based attributes;multi-modality},\n  doi = {10.23919/EUSIPCO.2018.8553002},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Image quality assessment plays an important role in iris recognition systems because the system performance is affected by low quality iris images. With the development of electronic color imaging, there is more and more research on visible wavelength (VW) iris recognition. Compared to near-infrared iris images, using VW iris images acquired under unconstrained imaging conditions is a more challenging task for the iris recognition system. However, the number of quality assessment methods for VW iris images is limited. Therefore, it is interesting to investigate whether existing no-reference image quality metrics (IQMs) designed for natural images can assess the quality of VW iris images. In this paper, we evaluate the performance of 15 selected no-reference IQMs on VW iris biometrics. The experimental results show that several IQMs can assess iris sample quality according to the system performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improving EEG Source Localization Through Spatio-Temporal Sparse Bayesian Learning.\n \n \n \n \n\n\n \n Hashemi, A.; and Haufe, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1935-1939, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553004,\n  author = {A. Hashemi and S. Haufe},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Improving EEG Source Localization Through Spatio-Temporal Sparse Bayesian Learning},\n  year = {2018},\n  pages = {1935-1939},\n  abstract = {Sparse Bayesian Learning (SBL) approaches to the EEG inverse problem such as Champagne have been shown to outperform traditional ℓ1-norm based methods in terms of reconstructing sparse source configurations. Current approaches are however sensitive to strong noise contributions and assume independent samples, whereas neurophysiological time series are strongly auto-correlated. Here we present extensions, backed by compressive sensing theory, to the Champagne algorithm that improve the reconstruction performance in low-SNR settings as well as in the presence of correlated measurements. Our numerical simulations using a realistic EEG forward model confirm the efficacy of our approaches.},\n  keywords = {Bayes methods;electroencephalography;inverse problems;learning (artificial intelligence);medical signal processing;neurophysiology;signal reconstruction;spatiotemporal phenomena;time series;EEG source localization;spatio-temporal sparse Bayesian Learning;SBL;EEG inverse problem;sparse source configurations;current approaches;strong noise contributions;independent samples;neurophysiological time series;compressive sensing theory;Champagne algorithm;reconstruction performance;low-SNR settings;correlated measurements;numerical simulations;realistic EEG forward model;Electroencephalography;Signal processing algorithms;Brain;Bayes methods;Lead;Upper bound;Inverse problems},\n  doi = {10.23919/EUSIPCO.2018.8553004},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436981.pdf},\n}\n\n
\n
\n\n\n
\n Sparse Bayesian Learning (SBL) approaches to the EEG inverse problem such as Champagne have been shown to outperform traditional ℓ1-norm based methods in terms of reconstructing sparse source configurations. Current approaches are however sensitive to strong noise contributions and assume independent samples, whereas neurophysiological time series are strongly auto-correlated. Here we present extensions, backed by compressive sensing theory, to the Champagne algorithm that improve the reconstruction performance in low-SNR settings as well as in the presence of correlated measurements. Our numerical simulations using a realistic EEG forward model confirm the efficacy of our approaches.\n
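As background for the SBL machinery the abstract builds on, the textbook EM form of sparse Bayesian learning for a linear forward model can be sketched (a hypothetical toy "leadfield" stands in for a real EEG forward model; this is the basic scheme that Champagne-type algorithms refine, not the paper's spatio-temporal extension):

```python
import numpy as np

def sbl_em(L, y, sigma2=1e-3, n_iter=300):
    """Textbook EM sparse Bayesian learning for y = L x + noise.
    Each source i carries a prior variance gamma_i; EM drives the
    gammas of inactive sources toward zero, yielding sparse estimates."""
    m, n = L.shape
    gamma = np.ones(n)
    for _ in range(n_iter):
        G = L * gamma                           # L @ diag(gamma)
        Sigma_y = sigma2 * np.eye(m) + G @ L.T  # marginal covariance of y
        K = np.linalg.solve(Sigma_y, G).T       # = diag(gamma) L^T Sigma_y^{-1}
        mu = K @ y                              # posterior mean of x
        Sigma_diag = gamma - np.sum(K * G.T, axis=1)  # posterior variances
        gamma = mu ** 2 + Sigma_diag            # EM update of the gammas
    return gamma, mu

# toy "leadfield": 10 sensors, 24 candidate sources, 2 truly active
rng = np.random.default_rng(4)
L = rng.standard_normal((10, 24))
L /= np.linalg.norm(L, axis=0)                  # unit-norm columns
x = np.zeros(24)
x[[2, 11]] = [2.0, -3.0]
gamma, mu = sbl_em(L, L @ x)
```

After the iterations, the largest prior variances concentrate on the two active sources, which is the sparsity mechanism that makes SBL competitive with l1-based inverse solvers.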
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Texture Classification Using Fractal Dimension Improved by Local Binary Patterns.\n \n \n \n \n\n\n \n Backes, A. R.; and de Mesquita Sá, J. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1312-1316, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"TexturePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553005,\n  author = {A. R. Backes and J. J. {de Mesquita Sá}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Texture Classification Using Fractal Dimension Improved by Local Binary Patterns},\n  year = {2018},\n  pages = {1312-1316},\n  abstract = {This paper presents a texture analysis method that combines Bouligand-Minkowski fractal dimension and local binary patterns (LBP) method. The LBP approach is used to obtain “pattern images” from an original input image in order to provide new information sources to be exploited by the Bouligand-Minkowski fractal dimension. Two hybrid approaches were proposed and their results are: “FD(Original image + LBP maps)” (97.12% and 63.80%) and “FD(Original image + LBP maps + STD)” (98.20% and 70.80%) for Brodatz and UIUC image databases, respectively. These results demonstrate that the proposed hybrid method provides a high discriminative feature vector for texture classification.},\n  keywords = {fractals;image classification;image texture;vectors;visual databases;LBP maps + STD;fractal dimension;UIUC image databases;Brodatz image databases;high discriminative feature vector;original input image;pattern images;LBP approach;local binary patterns method;Bouligand-Minkowski fractal dimension;texture analysis method;texture classification;hybrid method;Fractals;Europe;Signal processing;Histograms;Image databases;Image resolution},\n  doi = {10.23919/EUSIPCO.2018.8553005},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570432808.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a texture analysis method that combines Bouligand-Minkowski fractal dimension and local binary patterns (LBP) method. The LBP approach is used to obtain “pattern images” from an original input image in order to provide new information sources to be exploited by the Bouligand-Minkowski fractal dimension. Two hybrid approaches were proposed and their results are: “FD(Original image + LBP maps)” (97.12% and 63.80%) and “FD(Original image + LBP maps + STD)” (98.20% and 70.80%) for Brodatz and UIUC image databases, respectively. These results demonstrate that the proposed hybrid method provides a high discriminative feature vector for texture classification.\n
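The LBP "pattern images" mentioned above come from the standard 3x3 local binary pattern operator; a minimal sketch of the basic 8-neighbour variant (the paper's exact LBP configuration may differ):

```python
import numpy as np

def lbp_map(img):
    """Basic 3x3 local binary pattern: threshold each interior pixel's
    8 neighbours against the centre value and pack the bits into a code."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                          # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy : img.shape[0] - 1 + dy,
                 1 + dx : img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.uint8) << bit
    return code

tex = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]])
```

For the single interior pixel above (centre value 5), the neighbours 6, 9, 8, 7 exceed the centre, setting bits 3-6 and giving the code 0b1111000 = 120. These LBP maps are then fed to the Bouligand-Minkowski fractal dimension in the hybrid descriptors described above.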
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Graph Manifold Clustering based Band Selection for Hyperspectral Face Recognition.\n \n \n \n \n\n\n \n Bhattacharya, S.; Das, S.; and Routray, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1990-1994, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"GraphPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553006,\n  author = {S. Bhattacharya and S. Das and A. Routray},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Graph Manifold Clustering based Band Selection for Hyperspectral Face Recognition},\n  year = {2018},\n  pages = {1990-1994},\n  abstract = {Efficient band selection reduces the computational load associated with the processing of hyperspectral data. In this paper we propose a novel graph manifold approach based band selection framework for hyperspectral face recognition. In this work, we extract facial features by local binary pattern, which is a popular facial feature descriptor. In the next stage, we cluster the data points using graph manifold ranking method and select representative bands from each cluster. We also propose a band similarity index (BSI) for quantifying the consistency of band selection algorithms. BSI facilitates faster and efficient matching of HSI faces provided the selected bands capture inter and intra-subject variation. We compare the efficiency of the proposed framework with state of the art methods on two real hyperspectral face datasets.},\n  keywords = {face recognition;feature extraction;graph theory;pattern clustering;data points;graph manifold ranking method;band similarity index;band selection algorithms;HSI faces;hyperspectral face datasets;graph manifold clustering based band selection;hyperspectral face recognition;hyperspectral data;facial features;local binary pattern;Hyperspectral imaging;Face recognition;Face;Manifolds;Indexes;Signal processing algorithms;Hyperspectral Face Recognition;Band Selection;Graph Manifold Clustering;Band Similarity Index;Manifold Ranking},\n  doi = {10.23919/EUSIPCO.2018.8553006},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437938.pdf},\n}\n\n
\n
\n\n\n
\n Efficient band selection reduces the computational load associated with the processing of hyperspectral data. In this paper we propose a novel band selection framework for hyperspectral face recognition based on a graph manifold approach. In this work, we extract facial features with the local binary pattern, a popular facial feature descriptor. In the next stage, we cluster the data points using a graph manifold ranking method and select representative bands from each cluster. We also propose a band similarity index (BSI) for quantifying the consistency of band selection algorithms. BSI facilitates faster and more efficient matching of HSI faces, provided the selected bands capture inter- and intra-subject variation. We compare the efficiency of the proposed framework with state-of-the-art methods on two real hyperspectral face datasets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Performance analysis of the covariance-whitening and the covariance-subtraction methods for estimating the relative transfer function.\n \n \n \n \n\n\n \n Markovich-Golan, S.; Gannot, S.; and Kellermann, W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2499-2503, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PerformancePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553007,\n  author = {S. Markovich-Golan and S. Gannot and W. Kellermann},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Performance analysis of the covariance-whitening and the covariance-subtraction methods for estimating the relative transfer function},\n  year = {2018},\n  pages = {2499-2503},\n  abstract = {Estimation of the relative transfer functions (RTFs) vector of a desired speech source is a fundamental problem in the design of data-dependent spatial filters. We present two common estimation methods, namely the covariance-whitening (CW) and the covariance-subtraction (CS) methods. The CW method has been shown in prior work to outperform the CS method. However, thus far its performance has not been analyzed. In this paper, we analyze the performance of the CW and CS methods and show that in the cases of spatially white noise and of uniform powers of desired speech source and coherent interference over all microphones, the CW method is superior. The derivations are validated by comparing them to their empirical counterparts in Monte Carlo experiments. In fact, the CW method outperforms the CS method in all tested scenarios, although there may be rare scenarios for which this is not the case.},\n  keywords = {Monte Carlo methods;spatial filters;speech enhancement;transfer functions;white noise;Wiener filters;performance analysis;covariance-whitening;covariance-subtraction methods;relative transfer functions vector;data-dependent spatial filters;common estimation methods;CW method;CS method;speech source;Monte Carlo experiments;Covariance matrices;Microphones;Eigenvalues and eigenfunctions;Estimation error;Europe;Signal processing;Transfer functions;spatial filter;beamformer;RTF},\n  doi = {10.23919/EUSIPCO.2018.8553007},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437654.pdf},\n}\n\n
\n
\n\n\n
\n Estimation of the relative transfer functions (RTFs) vector of a desired speech source is a fundamental problem in the design of data-dependent spatial filters. We present two common estimation methods, namely the covariance-whitening (CW) and the covariance-subtraction (CS) methods. The CW method has been shown in prior work to outperform the CS method. However, thus far its performance has not been analyzed. In this paper, we analyze the performance of the CW and CS methods and show that in the cases of spatially white noise and of uniform powers of desired speech source and coherent interference over all microphones, the CW method is superior. The derivations are validated by comparing them to their empirical counterparts in Monte Carlo experiments. In fact, the CW method outperforms the CS method in all tested scenarios, although there may be rare scenarios for which this is not the case.\n
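The covariance-whitening estimate analysed above is commonly implemented per frequency bin as follows: whiten the noisy covariance with a square root of the noise covariance, take the principal eigenvector, de-whiten, and normalise by a reference microphone. A minimal sketch with a toy rank-one scenario (the RTF vector and SNR below are illustrative, not from the paper):

```python
import numpy as np

def rtf_cw(Ryy, Rvv, ref=0):
    """Covariance-whitening RTF estimate for one frequency bin."""
    L = np.linalg.cholesky(Rvv)                  # Rvv = L L^H
    Linv = np.linalg.inv(L)
    Rw = Linv @ Ryy @ Linv.conj().T              # whitened noisy covariance
    _, V = np.linalg.eigh(Rw)
    phi = V[:, -1]                               # principal eigenvector
    g = L @ phi                                  # de-whiten
    return g / g[ref]                            # normalise by reference mic

# toy bin: rank-one desired source plus spatially white noise
a = np.array([1.0, 0.8 + 0.3j, 0.5 - 0.2j])     # "true" RTF (hypothetical)
Rvv = np.eye(3)
Ryy = 4.0 * np.outer(a, a.conj()) + Rvv
```

With exact covariances the estimate recovers the RTF up to numerical precision; the paper's analysis concerns what happens when `Ryy` and `Rvv` are replaced by sample estimates.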
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Structured sparsity regularization for gravitational-wave polarization reconstruction.\n \n \n \n\n\n \n Feng, F.; Chassande-Mottin, E.; Bacon, P.; and Fraysse, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1750-1754, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553009,\n  author = {F. Feng and E. Chassande-Mottin and P. Bacon and A. Fraysse},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Structured sparsity regularization for gravitational-wave polarization reconstruction},\n  year = {2018},\n  pages = {1750-1754},\n  abstract = {Gravitational-wave (GW) observations with a network of more than two advanced detectors open the possibility of reconstructing the two polarizations predicted by General Relativity. We propose to address this problem using sparsity promoting regularizations. We consider a variety of techniques, including “structured sparsity”, which allows us to explicitly model the intrinsic clustering effect occurring in the time-frequency representation of GW transient signals. The proposed methods are evaluated with simulated GW signals and real-world noise. Numerical simulations show the advantages of the proposed approaches.},\n  keywords = {gravitational waves;numerical analysis;Numerical simulations;simulated GW signals;GW transient signals;time-frequency representation;intrinsic clustering effect;sparsity promoting regularizations;General Relativity;polarizations;gravitational-wave observations;gravitational-wave polarization reconstruction;structured sparsity regularization;Detectors;Time-frequency analysis;Signal to noise ratio;Signal processing algorithms;Europe;Transient analysis},\n  doi = {10.23919/EUSIPCO.2018.8553009},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Gravitational-wave (GW) observations with a network of more than two advanced detectors open the possibility of reconstructing the two polarizations predicted by General Relativity. We propose to address this problem using sparsity promoting regularizations. We consider a variety of techniques, including “structured sparsity”, which allows us to explicitly model the intrinsic clustering effect occurring in the time-frequency representation of GW transient signals. The proposed methods are evaluated with simulated GW signals and real-world noise. Numerical simulations show the advantages of the proposed approaches.\n
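Structured-sparsity penalties of the kind mentioned above are typically enforced through a group (l2,1) proximal operator that shrinks whole clusters of time-frequency coefficients jointly rather than one coefficient at a time. A minimal row-wise sketch (the paper's actual grouping over TF maps may differ):

```python
import numpy as np

def prox_group_l21(X, lam):
    """Proximal operator of lam * sum of row-wise l2 norms: each row
    (a 'group' of coefficients) is jointly shrunk toward zero, so
    coefficients vanish in clusters rather than individually."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return X * scale

X = np.array([[3.0, 4.0],    # strong group, norm 5: kept and shrunk
              [0.3, 0.4]])   # weak group, norm 0.5: zeroed entirely
Y = prox_group_l21(X, lam=1.0)
```

The strong row survives with its norm reduced by `lam`, while the weak row is set exactly to zero, which is the clustering behaviour exploited for GW transients.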
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Autonomous Person Detection and Tracking Framework Using Unmanned Aerial Vehicles (UAVs).\n \n \n \n \n\n\n \n Fradi, H.; Bracco, L.; Canino, F.; and Dugelay, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1047-1051, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AutonomousPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553010,\n  author = {H. Fradi and L. Bracco and F. Canino and J. Dugelay},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Autonomous Person Detection and Tracking Framework Using Unmanned Aerial Vehicles (UAVs)},\n  year = {2018},\n  pages = {1047-1051},\n  abstract = {While person tracking has made significant progress over the last few years, most of the existing approaches are of limited success in real-time applications using moving sensors. In particular, we emphasize in this paper the need for a visual tracker that enables autonomous navigation functionality to drones (UAVs), mainly to follow a specific target. To achieve this goal, a color-based detection framework is proposed. The approach includes as well the execution of control commands, which are essential to switch from the detection to automatically follow the detected target. Our proposed approach is evaluated on videos recorded by drones. The obtained results demonstrate the effectiveness of the proposed approach to accurately follow a target in real-time, and despite different challenges such as lighting changes, speed, and occlusions.},\n  keywords = {autonomous aerial vehicles;computer vision;image colour analysis;object detection;object tracking;path planning;color-based detection framework;control commands;drones;autonomous person detection;tracking framework;unmanned aerial vehicles;person tracking;real-time applications;visual tracker;autonomous navigation functionality;UAV;Drones;Image color analysis;Color;Target tracking;Real-time systems;Visualization;Cameras;Detection;Tracking;Moving Sensors;Color;Drone Commands;Real-time},\n  doi = {10.23919/EUSIPCO.2018.8553010},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438250.pdf},\n}\n\n
\n
\n\n\n
\n While person tracking has made significant progress over the last few years, most existing approaches are of limited success in real-time applications using moving sensors. In particular, we emphasize in this paper the need for a visual tracker that enables autonomous navigation functionality for drones (UAVs), mainly to follow a specific target. To achieve this goal, a color-based detection framework is proposed. The approach also includes the execution of control commands, which are essential to switch from detection to automatically following the detected target. Our proposed approach is evaluated on videos recorded by drones. The obtained results demonstrate the effectiveness of the proposed approach in accurately following a target in real time, despite challenges such as lighting changes, speed, and occlusions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Dictionary Learning for Photometric Redshift Estimation.\n \n \n \n \n\n\n \n Frontera-Pons, J.; Sureau, F.; Moraes, B.; Bobin, J.; Abdalla, F. B.; and Starck, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1740-1744, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DictionaryPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553011,\n  author = {J. Frontera-Pons and F. Sureau and B. Moraes and J. Bobin and F. B. Abdalla and J. Starck},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Dictionary Learning for Photometric Redshift Estimation},\n  year = {2018},\n  pages = {1740-1744},\n  abstract = {Photometric redshift estimation and the assessment of the distance to an astronomic object plays a key role in modern cosmology. We present in this article a new method for photometric redshift estimation that relies on sparse linear representations. The proposed algorithm is based on a sparse decomposition for rest-frame spectra in a learned dictionary. Additionally, it provides both an estimate for the redshift together with the full resolution spectra from the observed photometry for a given galaxy. This technique has been evaluated on realistic simulated photometric measurements.},\n  keywords = {astronomical photometry;astronomical spectra;astronomical techniques;cosmology;red shift;photometric redshift estimation;learned dictionary;realistic simulated photometric measurements;Dictionary learning;astronomic object;modern cosmology;sparse linear representations;Estimation;Dictionaries;Signal processing algorithms;Machine learning;Extraterrestrial measurements;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553011},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437898.pdf},\n}\n\n
\n
\n\n\n
\n Photometric redshift estimation, i.e. the assessment of the distance to an astronomical object, plays a key role in modern cosmology. We present in this article a new method for photometric redshift estimation that relies on sparse linear representations. The proposed algorithm is based on a sparse decomposition of rest-frame spectra in a learned dictionary. Additionally, it provides both an estimate of the redshift and the full-resolution spectrum from the observed photometry for a given galaxy. This technique has been evaluated on realistic simulated photometric measurements.\n
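The sparse decomposition step can be illustrated with a generic l1 solver such as ISTA against a fixed dictionary (random unit-norm atoms here merely stand in for the learned rest-frame spectral dictionary; this is not the paper's pipeline):

```python
import numpy as np

def ista(D, y, lam=0.01, n_iter=1000):
    """Sparse coding by ISTA: minimise 0.5*||y - D x||^2 + lam*||x||_1
    via gradient steps followed by soft-thresholding."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - step * D.T @ (D @ x - y)         # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x

rng = np.random.default_rng(2)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)                   # unit-norm atoms
x_true = np.zeros(50)
x_true[[3, 17]] = [1.5, -2.0]                    # 2-sparse ground truth
x_hat = ista(D, D @ x_true)
```

With an incoherent dictionary and a sufficiently sparse signal, the recovered coefficients concentrate on the true support, which is the property the redshift estimator exploits.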
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Building a Tensor Framework for the Analysis and Classification of Steady-State Visual Evoked Potentials in Children.\n \n \n \n \n\n\n \n Kinney-Lang, E.; Ebied, A.; and Escudero, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 296-300, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BuildingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553012,\n  author = {E. Kinney-Lang and A. Ebied and J. Escudero},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Building a Tensor Framework for the Analysis and Classification of Steady-State Visual Evoked Potentials in Children},\n  year = {2018},\n  pages = {296-300},\n  abstract = {Steady-state visual evoked potentials (SSVEP) are one of several underlying signals used in various electroencephalography (EEG) based applications, including brain-computer interface (BCI) technology. Through oscillating visual stimulus at distinct frequencies, an SSVEP can be detected by EEG at occipital electrodes on the scalp, with distinct visual stimuli representing distinct choices. Rapid, accurate detection and classification of these signals is crucial for real-time analysis in SSVEP-based applications. However, signal analysis and interpretation of SSVEP events may be hindered in children due to the significant variability in electrophysiological signals throughout development. Recently, multi-way tensors have been shown capable of exploiting higher-order interactions present in the naturally multi-dimensional EEG data. Using tensors as tools to identify latent structures between varying maturational signals thus may provide a potential solution for rapid classification of SSVEP signals in children at different developmental stages. The presented methodology builds upon previous tensor-based SSVEP analysis and extends it for the first time to developing paediatric populations. Results from a binary SSVEP classification task of n = 40 children age 8-11 are reported to be significantly greater than chance, at 67-74% accuracy across multiple training and testing blocks. 
The findings support that tensor decomposition could provide flexible advantages capable of accommodating developmental differences across children and lay groundwork for future tensor analysis in SSVEP-based applications, like BCIs.},\n  keywords = {brain-computer interfaces;electroencephalography;medical signal detection;medical signal processing;neurophysiology;paediatrics;signal classification;tensors;visual evoked potentials;real-time analysis;tensor analysis;brain-computer interface technology;occipital electrodes;scalp;multidimensional EEG data;developmental stages;tensor-based SSVEP analysis;paediatric populations;oscillating visual stimulus;electroencephalography based applications;steady-state visual evoked potentials;tensor framework;tensor decomposition;binary SSVEP classification task;SSVEP signals;rapid classification;maturational signals;electrophysiological signals;SSVEP events;signal analysis;SSVEP-based applications;accurate detection;Tensile stress;Electroencephalography;Visualization;Task analysis;Testing;Mathematical model;Signal processing;multi-way analysis;tensor analysis;SSVEP;child development;brain-computer interface},\n  doi = {10.23919/EUSIPCO.2018.8553012},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437088.pdf},\n}\n\n
\n
\n\n\n
\n Steady-state visual evoked potentials (SSVEP) are one of several underlying signals used in various electroencephalography (EEG) based applications, including brain-computer interface (BCI) technology. Through oscillating visual stimulus at distinct frequencies, an SSVEP can be detected by EEG at occipital electrodes on the scalp, with distinct visual stimuli representing distinct choices. Rapid, accurate detection and classification of these signals is crucial for real-time analysis in SSVEP-based applications. However, signal analysis and interpretation of SSVEP events may be hindered in children due to the significant variability in electrophysiological signals throughout development. Recently, multi-way tensors have been shown capable of exploiting higher-order interactions present in the naturally multi-dimensional EEG data. Using tensors as tools to identify latent structures between varying maturational signals thus may provide a potential solution for rapid classification of SSVEP signals in children at different developmental stages. The presented methodology builds upon previous tensor-based SSVEP analysis and extends it for the first time to developing paediatric populations. Results from a binary SSVEP classification task of n = 40 children age 8-11 are reported to be significantly greater than chance, at 67-74% accuracy across multiple training and testing blocks. The findings support that tensor decomposition could provide flexible advantages capable of accommodating developmental differences across children and lay groundwork for future tensor analysis in SSVEP-based applications, like BCIs.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Independent Low-Rank Tensor Analysis for Audio Source Separation.\n \n \n \n \n\n\n \n Yoshii, K.; Kitamura, K.; Bando, Y.; Nakamura, E.; and Kawahara, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1657-1661, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"IndependentPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553013,\n  author = {K. Yoshii and K. Kitamura and Y. Bando and E. Nakamura and T. Kawahara},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Independent Low-Rank Tensor Analysis for Audio Source Separation},\n  year = {2018},\n  pages = {1657-1661},\n  abstract = {This paper describes a versatile tensor factorization technique called independent low-rank tensor analysis (ILRTA) and its application to single-channel audio source separation. In general, audio source separation has been conducted in the short-time Fourier transform (STFT) domain under an unrealistic but conventional assumption of the independence of time-frequency (TF) bins. Nonnegative matrix factorization (NMF) is a typical technique of single-channel source separation based on the low-rankness of source spectrograms. In a multichannel setting, independent component analysis (ICA) and its multivariate extension called independent vector analysis (IVA) have often been used for blind source separation based on the independence of source spectrograms. Integrating NMF and IVA, independent low-rank matrix analysis (ILRMA) was recently proposed. To deal with the covariance of TF bins, in this paper we propose ILRTA as a new extension of NMF. Both ILRMA and ILRTA aim to find independent and low-rank sources. A key difference is that while ILRMA estimates demixing filters that decorrelate the channels for multichannel source separation, ILRTA finds optimal transforms that decorrelate the time frames and frequency bins of a STFT representation for single-channel source separation in a way that the bin-wise independence assumed by NMF holds true as much as possible. 
We report evaluation results of ILRTA and discuss extension of ILRTA to multichannel source separation.},\n  keywords = {audio signal processing;blind source separation;Fourier transforms;independent component analysis;matrix decomposition;source separation;tensors;NMF;low-rank matrix analysis;ILRMA;ILRTA;low-rank sources;multichannel source separation;single-channel source separation;bin-wise independence;low-rank tensor analysis;single-channel audio source separation;time-frequency bins;nonnegative matrix factorization;source spectrograms;independent component analysis;independent vector analysis;blind source separation;tensor factorization technique;short-time Fourier transform;Covariance matrices;Spectrogram;Source separation;Tensile stress;Time-frequency analysis;Transforms;Decorrelation},\n  doi = {10.23919/EUSIPCO.2018.8553013},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436717.pdf},\n}\n\n
\n
\n\n\n
\n This paper describes a versatile tensor factorization technique called independent low-rank tensor analysis (ILRTA) and its application to single-channel audio source separation. In general, audio source separation has been conducted in the short-time Fourier transform (STFT) domain under an unrealistic but conventional assumption of the independence of time-frequency (TF) bins. Nonnegative matrix factorization (NMF) is a typical technique of single-channel source separation based on the low-rankness of source spectrograms. In a multichannel setting, independent component analysis (ICA) and its multivariate extension called independent vector analysis (IVA) have often been used for blind source separation based on the independence of source spectrograms. Integrating NMF and IVA, independent low-rank matrix analysis (ILRMA) was recently proposed. To deal with the covariance of TF bins, in this paper we propose ILRTA as a new extension of NMF. Both ILRMA and ILRTA aim to find independent and low-rank sources. A key difference is that while ILRMA estimates demixing filters that decorrelate the channels for multichannel source separation, ILRTA finds optimal transforms that decorrelate the time frames and frequency bins of a STFT representation for single-channel source separation in a way that the bin-wise independence assumed by NMF holds true as much as possible. We report evaluation results of ILRTA and discuss extension of ILRTA to multichannel source separation.\n
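The NMF building block that ILRMA and ILRTA extend can be sketched with the classic Lee-Seung multiplicative updates (Euclidean cost here for brevity; audio models such as those in the abstract more often use the Itakura-Saito divergence):

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Plain NMF with Euclidean multiplicative updates: V ~ W @ H with
    elementwise-nonnegative factors, i.e. a low-rank spectrogram model."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 0.1
    H = rng.random((rank, V.shape[1])) + 0.1
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update bases
    return W, H

# exactly rank-3 nonnegative "spectrogram", so a close fit is attainable
rng = np.random.default_rng(5)
V = rng.random((16, 3)) @ rng.random((3, 32))
W, H = nmf(V, rank=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The multiplicative form keeps both factors nonnegative by construction; ILRMA/ILRTA couple this low-rank source model with decorrelating transforms rather than using it in isolation.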
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Beamforming and Blind Source Separation Have a Complementary Effect in Reducing Tonic Cranial Muscle Contamination of Scalp Measurements.\n \n \n \n \n\n\n \n Janani, A. S.; Grummett, T. S.; Bakhshayesh, H.; Willoughby, J. O.; and Pope, K. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 86-90, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BeamformingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553014,\n  author = {A. S. Janani and T. S. Grummett and H. Bakhshayesh and J. O. Willoughby and K. J. Pope},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Beamforming and Blind Source Separation Have a Complementary Effect in Reducing Tonic Cranial Muscle Contamination of Scalp Measurements},\n  year = {2018},\n  pages = {86-90},\n  abstract = {Scalp electroencephalograms (EEG) are susceptible to cranial and cervical muscle contamination from frequencies as low as 20 hertz, even in relaxed conditions. Reliably recording cognitive activity, which is in this range, is impossible without removing or reducing the effect of muscle contamination. Our unique database of paralysed conscious subjects enabled us to test the effect of combining beamforming and blind source separation in reducing tonic muscle contamination of scalp electrical recordings. Using the beamforming technique, muscle sources are separated automatically based on their location, while using blind source separation, muscle components are separated based on their spectral gradient. Our results show that applying the beamforming technique on data pruned by a blind source separation technique (or vice versa) can reduce tonic muscle contamination significantly more than applying either of them separately, especially at peripheral locations. 
Hence, these approaches complement each other in reducing muscle contamination of EEG.},\n  keywords = {array signal processing;blind source separation;cognition;electroencephalography;medical signal processing;muscle;neurophysiology;tonic muscle contamination;scalp electrical recordings;beamforming technique;muscle sources;muscle components;blind source separation technique;complementary effect;tonic cranial muscle contamination;scalp measurements;scalp electroencephalograms;cervical muscle contamination;EEG;peripheral locations;spectral gradient;paralysed conscious subjects;cognitive activity;frequency 20.0 Hz;Muscles;Electroencephalography;Contamination;Array signal processing;Scalp;Electromyography;Task analysis;beamforming;blind source separation;muscle contamination;electroencephalograph;neurophysiological response},\n  doi = {10.23919/EUSIPCO.2018.8553014},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435121.pdf},\n}\n\n
\n
\n\n\n
\n Scalp electroencephalograms (EEG) are susceptible to cranial and cervical muscle contamination from frequencies as low as 20 hertz, even in relaxed conditions. Reliably recording cognitive activity, which is in this range, is impossible without removing or reducing the effect of muscle contamination. Our unique database of paralysed conscious subjects enabled us to test the effect of combining beamforming and blind source separation in reducing tonic muscle contamination of scalp electrical recordings. Using the beamforming technique, muscle sources are separated automatically based on their location, while using blind source separation, muscle components are separated based on their spectral gradient. Our results show that applying the beamforming technique on data pruned by a blind source separation technique (or vice versa) can reduce tonic muscle contamination significantly more than applying either of them separately, especially at peripheral locations. Hence, these approaches complement each other in reducing muscle contamination of EEG.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Ear Acoustic Biometrics Using Inaudible Signals and Its Application to Continuous User Authentication.\n \n \n \n \n\n\n \n Mahto, S.; Arakawa, T.; and Koshinak, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1407-1411, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EarPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553015,\n  author = {S. Mahto and T. Arakawa and T. Koshinak},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Ear Acoustic Biometrics Using Inaudible Signals and Its Application to Continuous User Authentication},\n  year = {2018},\n  pages = {1407-1411},\n  abstract = {This paper presents an improved version of a previously proposed ear-acoustic biometric system for personal authentication. Even though the previous system provided a fast, accurate, and easy means of authentication, it employed noticeably audible probe signals to extract ear acoustic features, signals which might interrupt user activities. To overcome this problem, this paper presents silent user authentication by employing inaudible signals in place of audible signals for capturing ear acoustic features. A comparative study using a number of audible and inaudible signals demonstrates that inaudible signals provide accurate authentication under the condition that the relative position of the earphone device against the ear canal is constant, which is a requirement for continuous user authentication. On the other hand, audible signals offer better accuracy when the earphone position changes, which is often the case in initial user authentication. 
This suggests the idea of a hybrid system that employs both audible and inaudible signals for, respectively, accurate initial authentication and user-friendly continuous authentication.},\n  keywords = {audio signal processing;authorisation;ear;earphones;feature extraction;ear acoustic biometrics;inaudible signals;continuous user authentication;ear-acoustic biometric system;personal authentication;noticeably audible probe signals;ear acoustic-features;silent user authentication;audible signals;ear canal;initial user authentication;Ear;Authentication;Acoustics;Headphones;Irrigation;Probes;Biometrics (access control)},\n  doi = {10.23919/EUSIPCO.2018.8553015},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437448.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents an improved version of a previously proposed ear-acoustic biometric system for personal authentication. Even though the previous system provided a fast, accurate, and easy means of authentication, it employed noticeably audible probe signals to extract ear acoustic features, signals which might interrupt user activities. To overcome this problem, this paper presents silent user authentication by employing inaudible signals in place of audible signals for capturing ear acoustic features. A comparative study using a number of audible and inaudible signals demonstrates that inaudible signals provide accurate authentication under the condition that the relative position of the earphone device against the ear canal is constant, which is a requirement for continuous user authentication. On the other hand, audible signals offer better accuracy when the earphone position changes, which is often the case in initial user authentication. This suggests the idea of a hybrid system that employs both audible and inaudible signals for, respectively, accurate initial authentication and user-friendly continuous authentication.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Estimating the Topology of Neural Networks from Distributed Observations.\n \n \n \n \n\n\n \n Alexandru, R.; Malhotra, P.; Reynolds, S.; and Dragotti, P. L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 420-424, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EstimatingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553016,\n  author = {R. Alexandru and P. Malhotra and S. Reynolds and P. L. Dragotti},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimating the Topology of Neural Networks from Distributed Observations},\n  year = {2018},\n  pages = {420-424},\n  abstract = {We address the problem of estimating the effective connectivity of the brain network, using the input stimulus model proposed by Izhikevich in [1], which accurately reproduces the behaviour of spiking and bursting biological neurons, whilst ensuring computational simplicity. We first analyse the temporal dynamics of neural networks, showing that the spike propagation within the brain can be modelled as a diffusion process. This helps prove the suitability of the NetRate algorithm proposed by Rodriguez in [2] to infer the structure of biological neural networks. Finally, we present simulation results using synthetic data to verify the performance of the topology estimation algorithm.},\n  keywords = {brain;learning (artificial intelligence);neural nets;neurophysiology;probability;topology;distributed observations;bursting biological neurons;diffusion process;NetRate algorithm;topology estimation algorithm;biological neural networks;spike propagation;temporal dynamics;input stimulus model;brain network;Neurons;Biological neural networks;Inference algorithms;Signal processing algorithms;Biological system modeling;Stability analysis;Mathematical model;Neural networks;network topology inference;stability analysis of spike propagation;Izhikevich neuron model;Brian simulator;NetRate algorithm},\n  doi = {10.23919/EUSIPCO.2018.8553016},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436793.pdf},\n}\n\n
\n
\n\n\n
\n We address the problem of estimating the effective connectivity of the brain network, using the input stimulus model proposed by Izhikevich in [1], which accurately reproduces the behaviour of spiking and bursting biological neurons, whilst ensuring computational simplicity. We first analyse the temporal dynamics of neural networks, showing that the spike propagation within the brain can be modelled as a diffusion process. This helps prove the suitability of the NetRate algorithm proposed by Rodriguez in [2] to infer the structure of biological neural networks. Finally, we present simulation results using synthetic data to verify the performance of the topology estimation algorithm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Fast Algorithms for the Schur-Type Nonlinear Parametrization of Higher-Order Stochastic Processes.\n \n \n \n\n\n \n Wielgus, A.; and Zarzycki, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 326-330, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553017,\n  author = {A. Wielgus and J. Zarzycki},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Fast Algorithms for the Schur-Type Nonlinear Parametrization of Higher-Order Stochastic Processes},\n  year = {2018},\n  pages = {326-330},\n  abstract = {We propose a class of fast algorithms, efficiently performing nonlinear Schur parametrization of higher-order and non-Gaussian stochastic processes, following from consideration of (weak) higher-order stationarity of the underlying signals and resulting in essential nonlinear complexity reduction, allowing for their practical implementations.},\n  keywords = {Gaussian processes;higher order statistics;signal processing;Schur-type nonlinear parametrization;nonlinear complexity reduction;higher-order nonGaussian stochastic processes;signal higher-order stationarity;Signal processing algorithms;Stochastic processes;Complexity theory;Covariance matrices;Signal processing;Europe;Estimation;Nonlinear Schur parametrization;nonstationary and stationary higher-order stochastic processes},\n  doi = {10.23919/EUSIPCO.2018.8553017},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n We propose a class of fast algorithms, efficiently performing nonlinear Schur parametrization of higher-order and non-Gaussian stochastic processes, following from consideration of (weak) higher-order stationarity of the underlying signals and resulting in essential nonlinear complexity reduction, allowing for their practical implementations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Towards Robust Evaluation of Face Morphing Detection.\n \n \n \n \n\n\n \n Spreeuwers, L.; Schils, M.; and Veldhuis, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1027-1031, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"TowardsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553018,\n  author = {L. Spreeuwers and M. Schils and R. Veldhuis},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Towards Robust Evaluation of Face Morphing Detection},\n  year = {2018},\n  pages = {1027-1031},\n  abstract = {Automated face recognition is increasingly used as a reliable means to establish the identity of persons for various purposes, ranging from automated passport checks at the border to transferring money and unlocking mobile phones. Face morphing is a technique to blend facial images of two or more subjects such that the result resembles both subjects. Face morphing attacks pose a serious risk for any face recognition system. Without automated morphing detection, state-of-the-art face recognition systems are extremely vulnerable to morphing attacks. Morphing detection methods published in the literature often only work for a few types of morphs or on a single dataset with morphed photographs. We create face morphing databases with varying characteristics and show that, for an LBP/SVM based morphing detection method that performs on par with the state of the art (around 2% EER), the performance collapses, with a much higher EER, when it is tested across databases with different characteristics. 
In addition we show that simple image manipulations like adding noise or rescaling can be used to obscure morphing artifacts and deteriorate the morphing detection performance.},\n  keywords = {face recognition;feature extraction;support vector machines;automated morphing detection;morphing detection methods;morphed photographs;face morphing databases;morphing detection performance;face morphing detection;automated face recognition;automated passport checks;unlocking mobile phones;facial images;face morphing attacks;face recognition system;Face;Databases;Face recognition;Training;Testing;Support vector machines;Feature extraction},\n  doi = {10.23919/EUSIPCO.2018.8553018},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439319.pdf},\n}\n\n
\n
\n\n\n
\n Automated face recognition is increasingly used as a reliable means to establish the identity of persons for various purposes, ranging from automated passport checks at the border to transferring money and unlocking mobile phones. Face morphing is a technique to blend facial images of two or more subjects such that the result resembles both subjects. Face morphing attacks pose a serious risk for any face recognition system. Without automated morphing detection, state-of-the-art face recognition systems are extremely vulnerable to morphing attacks. Morphing detection methods published in the literature often only work for a few types of morphs or on a single dataset with morphed photographs. We create face morphing databases with varying characteristics and show that, for an LBP/SVM based morphing detection method that performs on par with the state of the art (around 2% EER), the performance collapses, with a much higher EER, when it is tested across databases with different characteristics. In addition, we show that simple image manipulations like adding noise or rescaling can be used to obscure morphing artifacts and deteriorate the morphing detection performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Separation of Anthropogenic Noise and Extremely Low Frequency Natural Magnetic Field Using Statistical Features.\n \n \n \n \n\n\n \n Rodríguez-Camacho, J.; Blanco-Navarro, D.; Gómez-Lepera, J. F.; Fornieles-Callejón, J.; and Carrión, M. C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2405-2409, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SeparationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553019,\n  author = {J. Rodríguez-Camacho and D. Blanco-Navarro and J. F. Gómez-Lepera and J. Fornieles-Callejón and M. C. Carrión},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Separation of Anthropogenic Noise and Extremely Low Frequency Natural Magnetic Field Using Statistical Features},\n  year = {2018},\n  pages = {2405-2409},\n  abstract = {This contribution is aimed at separating the anthropogenic noise from the natural magnetic field in the measurements recorded in an Extremely Low Frequency band station located in Sierra Nevada, Spain, to experimentally study Schumann resonances. First, we use a scheme based on Independent Component Analysis that provides unsatisfactory results. In order to achieve a better separation, we develop a new method by exploiting the information that statistical moments give us, under the assumption that the statistical distributions of the anthropogenic noise and of the natural magnetic field are different. This method consists of finding the rotation of the two original directions of the magnetometers (North-South and East-West) that maximizes or, equivalently, minimizes the value of a certain statistical parameter. 
Our purpose is that this rotation is equal to the rotation that makes the anthropogenic noise completely disappear from one of the outputs, which we will show always exists.},\n  keywords = {independent component analysis;magnetic noise;noise;seismology;statistical distributions;Sierra Nevada;Spain;Extremely Low Frequency band station;statistical features;Extremely Low Frequency natural magnetic field;statistical parameter;anthropogenic noise;statistical distributions;statistical moments;Independent Component Analysis;Schumann resonances;Magnetometers;Magnetic recording;Magnetic separation;Frequency measurement;Magnetic resonance;Signal processing;Anthropogenic noise;Independent Component Analysis;Schumann resonances},\n  doi = {10.23919/EUSIPCO.2018.8553019},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437453.pdf},\n}\n\n
\n
\n\n\n
\n This contribution is aimed at separating the anthropogenic noise from the natural magnetic field in the measurements recorded in an Extremely Low Frequency band station located in Sierra Nevada, Spain, to experimentally study Schumann resonances. First, we use a scheme based on Independent Component Analysis that provides unsatisfactory results. In order to achieve a better separation, we develop a new method by exploiting the information that statistical moments give us, under the assumption that the statistical distributions of the anthropogenic noise and of the natural magnetic field are different. This method consists of finding the rotation of the two original directions of the magnetometers (North-South and East-West) that maximizes or, equivalently, minimizes the value of a certain statistical parameter. Our purpose is that this rotation is equal to the rotation that makes the anthropogenic noise completely disappear from one of the outputs, which we will show always exists.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Decentralized Sensor Localization by Decision Fusion of RSSI and Mobility in Indoor Environments.\n \n \n \n \n\n\n \n Alshamaa, D.; Mourad-Chehade, F.; and Honeine, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2300-2304, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DecentralizedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553020,\n  author = {D. Alshamaa and F. Mourad-Chehade and P. Honeine},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Decentralized Sensor Localization by Decision Fusion of RSSI and Mobility in Indoor Environments},\n  year = {2018},\n  pages = {2300-2304},\n  abstract = {Localization of sensors has become an essential issue in wireless networks. This paper presents a decentralized approach to localize sensors in indoor environments. The targeted area is partitioned into several sectors, each of which has a local calculator capable of emitting, receiving, and processing data. Each calculator runs a local localization algorithm, by investigating the belief functions theory for decision fusion of radio fingerprints, to estimate the sensors' zones. The fusion of all the calculators' estimates is combined with a mobility model to yield a final zone decision. The decentralized algorithm is described and evaluated against the state-of-the-art. Experimental results show the effectiveness of the proposed method in terms of localization accuracy, processing time, and robustness.},\n  keywords = {indoor radio;mobile radio;RSSI;sensor placement;wireless sensor networks;decentralized sensor localization;decision fusion;indoor environments;wireless networks;decentralized approach;local localization algorithm;sensors zones;mobility model;final zone decision;decentralized algorithm;localization accuracy;Calculators;Topology;Network topology;Signal processing algorithms;Wireless sensor networks;Wireless fidelity;Signal processing;Decentralized architecture;decision fusion;localization;mobility;RSSI fingerprints},\n  doi = {10.23919/EUSIPCO.2018.8553020},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439115.pdf},\n}\n\n
\n
\n\n\n
\n Localization of sensors has become an essential issue in wireless networks. This paper presents a decentralized approach to localize sensors in indoor environments. The targeted area is partitioned into several sectors, each of which has a local calculator capable of emitting, receiving, and processing data. Each calculator runs a local localization algorithm, by investigating the belief functions theory for decision fusion of radio fingerprints, to estimate the sensors' zones. The fusion of all the calculators' estimates is combined with a mobility model to yield a final zone decision. The decentralized algorithm is described and evaluated against the state-of-the-art. Experimental results show the effectiveness of the proposed method in terms of localization accuracy, processing time, and robustness.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n New Doppler Processing for the Detection of Small and Slowly-Moving Targets in Highly Ambiguous Radar Context.\n \n \n \n \n\n\n \n Aouchiche, L.; Ferro-Famil, L.; and Adnet, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1207-1211, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"NewPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553021,\n  author = {L. Aouchiche and L. Ferro-Famil and C. Adnet},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {New Doppler Processing for the Detection of Small and Slowly-Moving Targets in Highly Ambiguous Radar Context},\n  year = {2018},\n  pages = {1207-1211},\n  abstract = {This paper presents a novel Doppler processing scheme for pulse Doppler radars operating at an intermediate Pulse Repetition Frequency (PRF) and suffering from range and Doppler ambiguities. One of the main drawbacks of the classical Doppler processing approach concerns the suppression of small and slowly-moving targets when rejecting ground clutter returns. In order to address this problem, a two-step Doppler method is proposed in this paper. The first step uses a new iterative algorithm that resolves ambiguities and detects fast (exo-clutter) targets. The detection of slow (endo-clutter) targets is then performed by an adaptive detection scheme that uses a new covariance matrix estimation technique. Pulse trains with different characteristics are then associated for enhanced detection performance.},\n  keywords = {covariance matrices;Doppler radar;iterative methods;radar clutter;radar detection;radar signal processing;Doppler ambiguities;two-step Doppler method;exo-clutter;endo-clutter;adaptive detection scheme;Doppler processing scheme;ground clutter;pulse trains;pulse Doppler radar;intermediate pulse repetition frequency;ambiguous radar context;slowly-moving target detection;small target detection;iterative algorithm;covariance matrix estimation;Doppler effect;Clutter;Covariance matrices;Training data;Signal resolution;Estimation;Doppler radar},\n  doi = {10.23919/EUSIPCO.2018.8553021},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435792.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a novel Doppler processing scheme for pulse Doppler radars operating at an intermediate Pulse Repetition Frequency (PRF) and suffering from range and Doppler ambiguities. One of the main drawbacks of the classical Doppler processing approach concerns the suppression of small and slowly-moving targets when rejecting ground clutter returns. In order to address this problem, a two-step Doppler method is proposed in this paper. The first step uses a new iterative algorithm that resolves ambiguities and detects fast (exo-clutter) targets. The detection of slow (endo-clutter) targets is then performed by an adaptive detection scheme that uses a new covariance matrix estimation technique. Pulse trains with different characteristics are then associated for enhanced detection performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Data-Selective Conjugate Gradient Algorithm.\n \n \n \n \n\n\n \n Diniz, P. S. R.; Mendonca, M. O. K.; Ferreira, J. O.; and Ferreira, T. N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 707-711, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Data-SelectivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553023,\n  author = {P. S. R. Diniz and M. O. K. Mendonca and J. O. Ferreira and T. N. Ferreira},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Data-Selective Conjugate Gradient Algorithm},\n  year = {2018},\n  pages = {707-711},\n  abstract = {The conjugate gradient (CG) adaptive filtering algorithm is an alternative to the more widely used Recursive Least Squares (RLS) and Least Mean Square (LMS) algorithms, where the former requires more computations, and the latter leads to slower convergence. In recent years, some adaptive filtering algorithms have been equipped with a data selection mechanism to classify whether the data currently available consists of an outlier or whether it brings about enough innovation. In both cases the data could be discarded, avoiding extra computation and performance degradation. This paper proposes a data selection strategy for the CG algorithm and verifies its effectiveness in simulations utilizing synthetic and real data.},\n  keywords = {adaptive filters;conjugate gradient methods;least mean squares methods;data-selective conjugate gradient algorithm;Recursive Least Squares;Least Mean Square algorithms;adaptive filtering algorithms;data selection mechanism;Mathematical model;Signal processing algorithms;Classification algorithms;Convergence;Data models;Prediction algorithms;Steady-state},\n  doi = {10.23919/EUSIPCO.2018.8553023},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435906.pdf},\n}\n\n
\n
\n\n\n
\n The conjugate gradient (CG) adaptive filtering algorithm is an alternative to the more widely used Recursive Least Squares (RLS) and Least Mean Square (LMS) algorithms, where the former requires more computations, and the latter leads to slower convergence. In recent years, some adaptive filtering algorithms have been equipped with a data selection mechanism to classify whether the data currently available consists of an outlier or whether it brings about enough innovation. In both cases the data could be discarded, avoiding extra computation and performance degradation. This paper proposes a data selection strategy for the CG algorithm and verifies its effectiveness in simulations utilizing synthetic and real data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Blind Channel Direction Separation Against Pilot Spoofing Attack in Massive MIMO System.\n \n \n \n \n\n\n \n Cao, R.; Wong, T. F.; Gao, H.; Wang, D.; and Lu, Y.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2559-2563, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BlindPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553024,\n  author = {R. Cao and T. F. Wong and H. Gao and D. Wang and Y. Lu},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Blind Channel Direction Separation Against Pilot Spoofing Attack in Massive MIMO System},\n  year = {2018},\n  pages = {2559-2563},\n  abstract = {This paper considers a pilot spoofing attack scenario in a massive MIMO system. A malicious user tries to disturb the channel estimation process by sending interference symbols to the base-station (BS) via the uplink. Another legitimate user counters by sending random symbols. The BS does not possess any partial channel state information (CSI) or the distribution of symbols sent by the malicious user a priori. For such a scenario, this paper aims to separate the channel directions from the legitimate and malicious users to the BS. A blind channel separation algorithm based on estimating the characteristic function of the distribution of the signal space vector is proposed. Simulation results show that the proposed algorithm provides good channel separation performance in a typical massive MIMO system.},\n  keywords = {blind source separation;channel estimation;MIMO communication;radiofrequency interference;telecommunication security;vectors;massive MIMO system;CSI;signal space vector;partial channel state information;base-station;interference symbols;channel estimation process;malicious user;pilot spoofing attack scenario;blind channel direction separation;Channel estimation;Signal processing;MIMO communication;Uplink;Signal processing algorithms;Antennas;Europe;Massive MIMO;pilot spoofing attack;blind channel separation},\n  doi = {10.23919/EUSIPCO.2018.8553024},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436932.pdf},\n}\n\n
\n
\n\n\n
\n This paper considers a pilot spoofing attack scenario in a massive MIMO system. A malicious user tries to disturb the channel estimation process by sending interference symbols to the base-station (BS) via the uplink. Another legitimate user counters by sending random symbols. The BS does not possess any partial channel state information (CSI) or the distribution of symbols sent by the malicious user a priori. For such a scenario, this paper aims to separate the channel directions from the legitimate and malicious users to the BS. A blind channel separation algorithm based on estimating the characteristic function of the distribution of the signal space vector is proposed. Simulation results show that the proposed algorithm provides good channel separation performance in a typical massive MIMO system.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automated Detection of Solar Cell Defects with Deep Learning.\n \n \n \n \n\n\n \n Bartler, A.; Mauch, L.; Yang, B.; Reuter, M.; and Stoicescu, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2035-2039, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AutomatedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553025,\n  author = {A. Bartler and L. Mauch and B. Yang and M. Reuter and L. Stoicescu},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Automated Detection of Solar Cell Defects with Deep Learning},\n  year = {2018},\n  pages = {2035-2039},\n  abstract = {Nowadays, renewable energies play an important role in covering the increasing power demand in accordance with environmental protection. Solar energy, produced by large solar farms, is a fast-growing technology offering an environmentally friendly power supply. However, its efficiency suffers from solar cell defects occurring during the operation life or caused by environmental incidents. These defects can be made visible using electroluminescence (EL) imaging. A manual classification of these EL images is very time- and cost-demanding and prone to subjective inter-examiner variations. For a fully automated defect detection, we introduce a deep learning based classification pipeline operating on the EL images. This includes image preprocessing for distortion correction, segmentation and perspective correction as well as a deep convolutional neural network for solar defect classification, with special emphasis on dealing with a highly imbalanced dataset. The impact of minority oversampling and data augmentation on the system accuracy is investigated. The performance of our proposed classification pipeline is demonstrated by applying it to a real-world dataset.},\n  keywords = {convolution;feature extraction;feedforward neural nets;image classification;image sampling;image segmentation;learning (artificial intelligence);neural nets;optical distortion;power engineering computing;solar cells;deep learning based classification pipeline;image preprocessing;deep convolutional neural network;solar defect classification;solar cell defects;renewable energies;environment protection;solar energy;solar farms;environmental friendly power supply;automated defect detection;power demand;electroluminescence imaging;EL image classification;distortion correction;segmentation;perspective correction;minority oversampling;data augmentation;Computer architecture;Microprocessors;Training;Photovoltaic cells;Pipelines;Transforms;Signal processing;solar cell classification;imbalanced data;deep learning;renewable energies;electroluminescence},\n  doi = {10.23919/EUSIPCO.2018.8553025},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436674.pdf},\n}\n\n
\n
\n\n\n
\n Nowadays, renewable energies play an important role in covering the increasing power demand in accordance with environmental protection. Solar energy, produced by large solar farms, is a fast-growing technology offering an environmentally friendly power supply. However, its efficiency suffers from solar cell defects occurring during the operation life or caused by environmental incidents. These defects can be made visible using electroluminescence (EL) imaging. A manual classification of these EL images is very time- and cost-demanding and prone to subjective inter-examiner variations. For a fully automated defect detection, we introduce a deep learning based classification pipeline operating on the EL images. This includes image preprocessing for distortion correction, segmentation and perspective correction as well as a deep convolutional neural network for solar defect classification, with special emphasis on dealing with a highly imbalanced dataset. The impact of minority oversampling and data augmentation on the system accuracy is investigated. The performance of our proposed classification pipeline is demonstrated by applying it to a real-world dataset.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Semi-supervised Learning for Dynamic Modeling of Brain Signals During Visual and Auditory Tests.\n \n \n \n \n\n\n \n Safont, G.; Salazar, A.; Belda, J.; and Vergara, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1192-1196, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Semi-supervisedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553028,\n  author = {G. Safont and A. Salazar and J. Belda and L. Vergara},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Semi-supervised Learning for Dynamic Modeling of Brain Signals During Visual and Auditory Tests},\n  year = {2018},\n  pages = {1192-1196},\n  abstract = {Semi-supervised learning relaxes the requirement of costly data labeling for data classification. This is particularly useful for monitoring a physiological process that continuously produces data and can be observed for a long time. We propose a new expectation-maximization (EM) procedure, called EM-SICAMM, that implements semi-supervised learning and is based on sequential independent component analysis modeling (SICAMM). This procedure is applied for dynamic modeling of EEG signals measured from epileptic patients during visual and auditory neuropsychological tests. These tests are performed to evaluate the learning and memory cognitive function of the patients. Classification results demonstrate that EM-SICAMM outperforms, in terms of balanced error rate (BER) and kappa index, the following competitive methods: ICAMM, SICAMM, Gaussian mixture model (GMM), and hidden Markov model (HMM).},\n  keywords = {cognition;electroencephalography;Gaussian processes;hearing;hidden Markov models;independent component analysis;learning (artificial intelligence);medical disorders;medical signal processing;mixture models;neurophysiology;signal classification;vision;semisupervised learning;dynamic modeling;brain signals;data classification;expectation-maximization procedure;sequential independent component analysis modeling;EM-SICAMM;visual tests;auditory neuropsychological tests;memory cognitive function;physiological process;EEG signals;epileptic patients;balanced error rate;kappa index;Gaussian mixture model;hidden Markov model;Hidden Markov models;Brain modeling;Electroencephalography;Semisupervised learning;Visualization;Independent component analysis;Signal processing;semi-supervised learning;dynamic modeling;ICA;EEG;neuropsychological tests},\n  doi = {10.23919/EUSIPCO.2018.8553028},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437820.pdf},\n}\n\n
\n
\n\n\n
\n Semi-supervised learning relaxes the requirement of costly data labeling for data classification. This is particularly useful for monitoring a physiological process that continuously produces data and can be observed for a long time. We propose a new expectation-maximization (EM) procedure, called EM-SICAMM, that implements semi-supervised learning and is based on sequential independent component analysis modeling (SICAMM). This procedure is applied for dynamic modeling of EEG signals measured from epileptic patients during visual and auditory neuropsychological tests. These tests are performed to evaluate the learning and memory cognitive function of the patients. Classification results demonstrate that EM-SICAMM outperforms, in terms of balanced error rate (BER) and kappa index, the following competitive methods: ICAMM, SICAMM, Gaussian mixture model (GMM), and hidden Markov model (HMM).\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The role of cloud-computing in the development and application of ADAS.\n \n \n \n \n\n\n \n Olariu, C.; Ortega, J. D.; and Yebes, J. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1037-1041, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553029,\n  author = {C. Olariu and J. D. Ortega and J. J. Yebes},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {The role of cloud-computing in the development and application of ADAS},\n  year = {2018},\n  pages = {1037-1041},\n  abstract = {This work elaborates on the cycles involved in developing ADAS applications that involve resources not permanently available in the vehicles. For example, data is collected from LIDAR, video cameras, precise localization, and user interaction with the ADAS features. These data are consumed by machine learning algorithms hosted locally or in the cloud. This paper investigates the requirements involved in processing camera streams on the fly in the vehicle and the possibility of off-loading the processing load onto the cloud in order to reduce the cost of the in-vehicle hardware. We highlight some representative computer vision applications and assess numerically under which network conditions offloading to the cloud is feasible.},\n  keywords = {cloud computing;computer vision;driver information systems;learning (artificial intelligence);in-vehicle hardware;representative computer vision applications;cloud-computing;ADAS applications;LIDAR;video cameras;user interaction;ADAS features;machine learning algorithms;camera streams;Computer vision;Graphics processing units;Cameras;Vehicles;Task analysis;Three-dimensional displays;ADAS;Cloud-Computing;Cloud Offload;Computer Vision;GPU processing;Image processing;Deep Learning;Driver Assistance;Autonomous vehicles},\n  doi = {10.23919/EUSIPCO.2018.8553029},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437382.pdf},\n}\n\n
\n
\n\n\n
\n This work elaborates on the cycles involved in developing ADAS applications that involve resources not permanently available in the vehicles. For example, data is collected from LIDAR, video cameras, precise localization, and user interaction with the ADAS features. These data are consumed by machine learning algorithms hosted locally or in the cloud. This paper investigates the requirements involved in processing camera streams on the fly in the vehicle and the possibility of off-loading the processing load onto the cloud in order to reduce the cost of the in-vehicle hardware. We highlight some representative computer vision applications and assess numerically under which network conditions offloading to the cloud is feasible.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Channel Estimation Based on the Discrete Cosine Transform Type-III Even.\n \n \n \n \n\n\n \n Domínguez-Jiménez, M. E.; Luengo, D.; and Sansigre-Vidal, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1297-1301, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ChannelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553030,\n  author = {M. E. Domínguez-Jiménez and D. Luengo and G. Sansigre-Vidal},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Channel Estimation Based on the Discrete Cosine Transform Type-III Even},\n  year = {2018},\n  pages = {1297-1301},\n  abstract = {In this work, we address the problem of channel estimation in multicarrier communications. We present a procedure which employs the Type-III even DCT (DCT3e) at both the transmitter and the receiver. By using any symmetric training symbol, we show how to estimate the channel's impulse response without prior knowledge of its exact length. Theoretical results are provided in order to guarantee the validity of the proposed technique, whereas simulations illustrate the good behavior of the proposed estimation algorithm.},\n  keywords = {channel estimation;discrete cosine transforms;OFDM modulation;transient response;symmetric training symbol;DCT3e;Type-III even DCT;multicarrier communications;discrete cosine transform Type-III;channel estimation;Channel estimation;Estimation;Mirrors;Training;Discrete cosine transforms;Receivers;MCM;Channel estimation;DCT},\n  doi = {10.23919/EUSIPCO.2018.8553030},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437942.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we address the problem of channel estimation in multicarrier communications. We present a procedure which employs the Type-III even DCT (DCT3e) at both the transmitter and the receiver. By using any symmetric training symbol, we show how to estimate the channel's impulse response without prior knowledge of its exact length. Theoretical results are provided in order to guarantee the validity of the proposed technique, whereas simulations illustrate the good behavior of the proposed estimation algorithm.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Interference-based Clustering for MIMO D2D Underlay Communications.\n \n \n \n \n\n\n \n Pischella, M.; Ozbek, B.; and Ruyet, D. L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 817-821, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Interference-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553033,\n  author = {M. Pischella and B. Ozbek and D. L. Ruyet},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Interference-based Clustering for MIMO D2D Underlay Communications},\n  year = {2018},\n  pages = {817-821},\n  abstract = {Clustering Device-to-Device (D2D) pairs with cellular transmissions to manage interference is particularly challenging in future fifth-generation networks. D2D pairs should coexist with cellular users in an underlay scenario, taking advantage of frequency and spatial dimensions. We consider a Multiple Input Multiple Output (MIMO) channel where all users (whether cellular or devices) are equipped with N > 1 antennas, and the Base Station (BS) has M ≥ N antennas. Interference between D2D pairs, between D2D transmitters and the BS, and between cellular users and D2D receivers is then managed by determining clusters of D2D pairs and cellular users with very low relative interference levels. Clusters are obtained after graph-coloring on a pairwise interference-leakage based matrix. Then, several Resource Block (RB) allocation algorithms are proposed, with various fairness levels. A final orthogonalization step using Minimum Mean Square Error (MMSE) may be added at the BS in order to further reduce interference. Simulation results show very large D2D data rate improvements, while the cellular data rate degradation due to interference can be controlled.},\n  keywords = {antenna arrays;cellular radio;graph colouring;interference suppression;least mean squares methods;matrix algebra;MIMO communication;mobility management (mobile radio);pattern clustering;radio receivers;radio transmitters;radiofrequency interference;resource allocation;wireless channels;fifth generation networks;multiple input multiple output channel;base station;device-to-device clustering;D2D data rate improvements;cellular data rate degradation;interference management;MIMO channel;D2D transmitters;D2D receiver;graph-coloring;resource block allocation algorithms;RB allocation algorithms;minimum mean square error;MMSE;pairwise interference-leakage based matrix;low relative interference levels;BS;cellular devices;cellular transmissions;MIMO D2D underlay communications;interference-based clustering;Device-to-device communication;Interference;Transmitters;MIMO communication;Receivers;Covariance matrices;Resource management},\n  doi = {10.23919/EUSIPCO.2018.8553033},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570432235.pdf},\n}\n\n
\n
\n\n\n
\n Clustering Device-to-Device (D2D) pairs with cellular transmissions to manage interference is particularly challenging in future fifth-generation networks. D2D pairs should coexist with cellular users in an underlay scenario, taking advantage of frequency and spatial dimensions. We consider a Multiple Input Multiple Output (MIMO) channel where all users (whether cellular or devices) are equipped with N > 1 antennas, and the Base Station (BS) has M ≥ N antennas. Interference between D2D pairs, between D2D transmitters and the BS, and between cellular users and D2D receivers is then managed by determining clusters of D2D pairs and cellular users with very low relative interference levels. Clusters are obtained after graph-coloring on a pairwise interference-leakage based matrix. Then, several Resource Block (RB) allocation algorithms are proposed, with various fairness levels. A final orthogonalization step using Minimum Mean Square Error (MMSE) may be added at the BS in order to further reduce interference. Simulation results show very large D2D data rate improvements, while the cellular data rate degradation due to interference can be controlled.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Classification Asymptotics in the Random Matrix Regime.\n \n \n \n \n\n\n \n Couillet, R.; Liao, Z.; and Mai, X.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1875-1879, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ClassificationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553034,\n  author = {R. Couillet and Z. Liao and X. Mai},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Classification Asymptotics in the Random Matrix Regime},\n  year = {2018},\n  pages = {1875-1879},\n  abstract = {This article discusses the asymptotic performance of classical machine learning classification methods (from discriminant analysis to neural networks) for simultaneously large and numerous Gaussian mixture modelled data. We first provide theoretical bounds on the minimally discriminable class means and covariances under an oracle setting, which are then compared to recent theoretical findings on the performance of machine learning. Non-obvious phenomena are discussed, among which are surprising phase transitions in the optimal performance rates for specific hyperparameter settings.},\n  keywords = {Gaussian processes;learning (artificial intelligence);mixture models;pattern classification;classification asymptotics;random matrix regime;classification methods;discriminant analysis;neural networks;minimally discriminable class means;covariances;oracle setting;machine learning classification methods;Gaussian mixture modelled data;Machine learning;Covariance matrices;Kernel;Support vector machines;Europe;Signal processing;Neural networks;Random matrix theory;classification;kernel methods;neural networks;LDA/QDA},\n  doi = {10.23919/EUSIPCO.2018.8553034},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435395.pdf},\n}\n\n
\n
\n\n\n
\n This article discusses the asymptotic performance of classical machine learning classification methods (from discriminant analysis to neural networks) for simultaneously large and numerous Gaussian mixture modelled data. We first provide theoretical bounds on the minimally discriminable class means and covariances under an oracle setting, which are then compared to recent theoretical findings on the performance of machine learning. Non-obvious phenomena are discussed, among which are surprising phase transitions in the optimal performance rates for specific hyperparameter settings.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Audio Virtualization of Facade Acoustic Insulation by Convex Optimization.\n \n \n \n\n\n \n Lapini, A.; Bartalucci, C.; Borchi, F.; Argenti, F.; and Carfagni, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2185-2189, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553035,\n  author = {A. Lapini and C. Bartalucci and F. Borchi and F. Argenti and M. Carfagni},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Audio Virtualization of Facade Acoustic Insulation by Convex Optimization},\n  year = {2018},\n  pages = {2185-2189},\n  abstract = {In this study, modeling of facade acoustic insulation is addressed. The objective is to predict the acoustic behaviour of a virtual facade on an incoming audio signal, based on measurements made on an actual facade and on Standardized Level Difference (DnT) curves known for both of them. In this way, a fast and concise characterization of acoustic insulation performance from outside noise can be achieved. The problem is cast as an inverse one, in which the acoustic impulse response of the actual facade must be substituted by the virtual one, taking into account, however, the constraints that are derived from the DnT analysis. Experimental results are shown to demonstrate the effectiveness of the proposed procedure.},\n  keywords = {acoustic signal processing;architectural acoustics;audio signal processing;convex programming;noise abatement;transient response;facade acoustic insulation;convex optimization;acoustic behaviour;virtual facade;incoming audio signal;Standardized Level Differences;acoustic insulation performance;acoustic impulse response;audio virtualization;DnT analysis;Acoustics;Convex functions;Insulation;Buildings;Europe;Signal processing;Industrial engineering;Audio virtualization;Acoustic insulation;Standardized Level Differences;Convex optimization},\n  doi = {10.23919/EUSIPCO.2018.8553035},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n In this study, modeling of facade acoustic insulation is addressed. The objective is to predict the acoustic behaviour of a virtual facade on an incoming audio signal, based on measurements made on an actual facade and on Standardized Level Difference (DnT) curves known for both of them. In this way, a fast and concise characterization of acoustic insulation performance from outside noise can be achieved. The problem is cast as an inverse one, in which the acoustic impulse response of the actual facade must be substituted by the virtual one, taking into account, however, the constraints that are derived from the DnT analysis. Experimental results are shown to demonstrate the effectiveness of the proposed procedure.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n End-to-End Real-Time ROI-Based Encryption in HEVC Videos.\n \n \n \n \n\n\n \n Taha, M. A.; Sidaty, N.; Hamidouche, W.; Dforges, O.; Vanne, J.; and Viitanen, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 171-175, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"End-to-EndPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553038,\n  author = {M. A. Taha and N. Sidaty and W. Hamidouche and O. Dforges and J. Vanne and M. Viitanen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {End-to-End Real-Time ROI-Based Encryption in HEVC Videos},\n  year = {2018},\n  pages = {171-175},\n  abstract = {In this paper, we present an end-to-end real-time encryption of Region of Interest (ROI) in HEVC videos. The proposed ROI encryption makes use of the independent tile concept of HEVC that splits the video frame into separable rectangular areas. Tiles are used to extract the ROI from the background and only the tiles forming the ROI are encrypted. The selective encryption is performed for a set of HEVC syntax elements in a format compliant with the HEVC standard. Thus, the bit-stream can be decoded with a standard HEVC decoder where a secret key is only needed for ROI decryption. In Inter coding, tile independence is guaranteed by restricting the motion vectors to use only unencrypted tiles in the reference frames. The proposed solution is validated by integrating the encryption into the open-source Kvazaar HEVC encoder and the decryption into the open-source openHEVC decoder. The results show that this solution performs secure encryption of ROI in real time and with diminutive bitrate and complexity overheads.},\n  keywords = {computational complexity;cryptography;decoding;rate distortion theory;video coding;complexity overheads;Region of Interest encryption;end-to-end real-time ROI-based encryption;secure encryption;open-source openHEVC decoder;open-source Kvazaar HEVC encoder;tiles independency;ROI decryption;standard HEVC decoder;HEVC standard;HEVC syntax elements;selective encryption;video frame;ROI encryption;end-to-end real-time encryption;HEVC videos;Encryption;Videos;Encoding;Generators;Decoding;Syntactics;User identity management;High Efficiency Video Coding (HEVC);tiles;Region of Interest (ROI);selective encryption;quality assessments},\n  doi = {10.23919/EUSIPCO.2018.8553038},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570431554.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present an end-to-end real-time encryption of Region of Interest (ROI) in HEVC videos. The proposed ROI encryption makes use of the independent tile concept of HEVC that splits the video frame into separable rectangular areas. Tiles are used to extract the ROI from the background and only the tiles forming the ROI are encrypted. The selective encryption is performed for a set of HEVC syntax elements in a format compliant with the HEVC standard. Thus, the bit-stream can be decoded with a standard HEVC decoder where a secret key is only needed for ROI decryption. In Inter coding, tile independence is guaranteed by restricting the motion vectors to use only unencrypted tiles in the reference frames. The proposed solution is validated by integrating the encryption into the open-source Kvazaar HEVC encoder and the decryption into the open-source openHEVC decoder. The results show that this solution performs secure encryption of ROI in real time and with diminutive bitrate and complexity overheads.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Online Parametric NMF for Speech Enhancement.\n \n \n \n \n\n\n \n Kavalekalam, M. S.; Nielsen, J. K.; Shi, L.; Christensen, M. G.; and Boldt, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2320-2324, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnlinePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553039,\n  author = {M. S. Kavalekalam and J. K. Nielsen and L. Shi and M. G. Christensen and J. Boldt},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Online Parametric NMF for Speech Enhancement},\n  year = {2018},\n  pages = {2320-2324},\n  abstract = {In this paper, we propose a speech enhancement method based on non-negative matrix factorization (NMF) techniques. NMF techniques allow us to approximate the power spectral density (PSD) of the noisy signal as a weighted linear combination of trained speech and noise basis vectors arranged as the columns of a matrix. In this work, we propose to use basis vectors that are parameterised by autoregressive (AR) coefficients. Parametric representation of the spectral basis is beneficial as it can encompass signal characteristics such as the speech production model. It is observed that the parametric representation of basis vectors is beneficial when performing online speech enhancement in low-delay scenarios.},\n  keywords = {autoregressive processes;matrix decomposition;speech enhancement;vectors;online parametric NMF;speech enhancement method;nonnegative matrix factorization techniques;NMF techniques;power spectral density;noisy signal;weighted linear combination;trained speech;noise basis vectors;autoregressive coefficients;parametric representation;spectral basis;signal characteristics;speech production model;online speech enhancement;AR;Noise measurement;Speech enhancement;Speech coding;Hidden Markov models;Training;Auditory system;Signal processing algorithms;autoregressive modelling;speech enhancement;NMF},\n  doi = {10.23919/EUSIPCO.2018.8553039},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437250.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a speech enhancement method based on non-negative matrix factorization (NMF) techniques. NMF techniques allow us to approximate the power spectral density (PSD) of the noisy signal as a weighted linear combination of trained speech and noise basis vectors arranged as the columns of a matrix. In this work, we propose to use basis vectors that are parameterised by autoregressive (AR) coefficients. Parametric representation of the spectral basis is beneficial as it can encompass signal characteristics such as the speech production model. It is observed that the parametric representation of basis vectors is beneficial when performing online speech enhancement in low-delay scenarios.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automated Tire Footprint Segmentation.\n \n \n \n \n\n\n \n Nava, R.; Fehr, D.; Petry, F.; and Tamisier, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 380-384, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AutomatedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553041,\n  author = {R. Nava and D. Fehr and F. Petry and T. Tamisier},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Automated Tire Footprint Segmentation},\n  year = {2018},\n  pages = {380-384},\n  abstract = {Quantitative image-based analysis is a relatively new way to address challenges in automotive tribology. Its inclusion in tire-ground interaction research may provide innovative ideas for improvements in tire design and manufacturing processes. In this article, we present a novel and robust technique for segmenting the area of contact between the tire and the ground. The segmentation is performed in an unsupervised fashion with Graph cuts. Then, superpixel adjacency is used to improve the boundaries. Finally, a rolling circle filter is applied to the segmentation to generate a mask that covers the area of contact. The procedure is carried out on a sequence of images captured in an automatic test machine. The estimated shape and total area of contact are built by averaging all the masks that have been computed throughout the sequence. Since ground truth is not available, we also propose a comparative method to assess the performance of our proposal.},\n  keywords = {design engineering;graph theory;image filtering;image segmentation;manufacturing processes;mechanical engineering computing;road vehicles;rolling;tribology;tyres;unsupervised learning;automated tire footprint segmentation;quantitative image-based analysis;automotive tribology;tire design;manufacturing processes;unsupervised fashion;Graph cuts;rolling circle filter;automatic test machine;tire-ground interaction;shape estimation;Tires;Image segmentation;Shape;Lighting;Proposals;Euclidean distance;Europe;Automotive tribology;Graph cuts;Ray feature error;Rolling circle filter;Superpixels;Tire footprint;Unsupervised segmentation},\n  doi = {10.23919/EUSIPCO.2018.8553041},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438310.pdf},\n}\n\n
\n
\n\n\n
\n Quantitative image-based analysis is a relatively new way to address challenges in automotive tribology. Its inclusion in tire-ground interaction research may provide innovative ideas for improvements in tire design and manufacturing processes. In this article we present a novel and robust technique for segmenting the area of contact between the tire and the ground. The segmentation is performed in an unsupervised fashion with Graph cuts. Then, superpixel adjacency is used to improve the boundaries. Finally, a rolling circle filter is applied to the segmentation to generate a mask that covers the area of contact. The procedure is carried out on a sequence of images captured in an automatic test machine. The estimated shape and total area of contact are built by averaging all the masks that have been computed throughout the sequence. Since a ground-truth is not available, we also propose a comparative method to assess the performance of our proposal.\n
\n\n\n
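The final contact estimate above is built by averaging per-frame segmentation masks. A minimal sketch of that averaging step only (the tiny hand-made masks, the 0.5 consensus threshold and the function names are illustrative assumptions, not the authors' code):

```python
def average_masks(masks):
    """Pixel-wise mean of equally sized binary masks."""
    h, w = len(masks[0]), len(masks[0][0])
    return [[sum(m[i][j] for m in masks) / len(masks) for j in range(w)]
            for i in range(h)]

def consensus_mask(mean, tau=0.5):
    """Threshold the averaged mask into the final contact mask."""
    return [[1 if v >= tau else 0 for v in row] for row in mean]

# three per-frame masks of a 2x3 'footprint' (hand-made toy data)
frames = [[[0, 1, 1], [0, 1, 1]],
          [[0, 1, 1], [1, 1, 0]],
          [[0, 1, 1], [0, 1, 1]]]
mask = consensus_mask(average_masks(frames))
area = sum(map(sum, mask))
print(mask, area)  # [[0, 1, 1], [0, 1, 1]] 4
```

Averaging before thresholding suppresses pixels that are segmented in only a minority of frames, which is why the spurious pixel in the second frame drops out.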
\n\n\n
\n \n\n \n \n \n \n \n \n An Enhanced Interleaving Frame Loss Concealment Method for Voice Over IP Network Services.\n \n \n \n \n\n\n \n Gueham, T.; and Merazka, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1302-1306, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553042,\n  author = {T. Gueham and F. Merazka},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An Enhanced Interleaving Frame Loss Concealment Method for Voice Over IP Network Services},\n  year = {2018},\n  pages = {1302-1306},\n  abstract = {This paper focuses on AMR WB G.722.2 speech codec, and discusses the unused bandwidth resources of the senders by using a Word16(16 bit) to encode the sent frames. A packet loss concealment (PLC) method for G.722.2 speech codec is proposed in order to overcome this problem and increases the efficiency of this codec by improving the quality of decoded speech under burst frame loss conditions over frame-switched networks. Objective and subjective experimental results confirm that our proposed algorithm could achieve better speech quality. Our proposed method achieves a PESQ value higher than 2 at 20% frame erasure rate and ensure the compatibility between our modified decoder and the non-modified G.722.2 coder.},\n  keywords = {codecs;decoding;Internet telephony;IP networks;speech codecs;speech coding;voice communication;enhanced interleaving frame loss concealment method;IP network services;AMR WB G.722.2 speech codec;unused bandwidth resources;Word16(16 bit);packet loss concealment method;decoded speech;burst frame loss conditions;frame-switched networks;objective results;subjective experimental results;speech quality;frame erasure rate;word length 16.0 bit;Codecs;Speech coding;Forward error correction;Signal processing algorithms;Receivers;Loss measurement;packet loss concealment;VoIP;AMR WB G.722.2;PESQ;EMBSD;MUSHRA},\n  doi = {10.23919/EUSIPCO.2018.8553042},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439427.pdf},\n}\n\n
\n
\n\n\n
\n This paper focuses on the AMR-WB G.722.2 speech codec and discusses the bandwidth resources left unused by senders that employ a Word16 (16-bit) representation to encode the sent frames. A packet loss concealment (PLC) method for the G.722.2 speech codec is proposed in order to overcome this problem and to increase the efficiency of this codec by improving the quality of decoded speech under burst frame loss conditions over frame-switched networks. Objective and subjective experimental results confirm that our proposed algorithm achieves better speech quality. Our method attains a PESQ value higher than 2 at a 20% frame erasure rate and ensures compatibility between our modified decoder and the unmodified G.722.2 coder.\n
\n\n\n
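Interleaving-based concealment works by reordering frames before transmission so that a burst loss in the network maps to isolated, easily concealed losses at the decoder. A generic block-interleaver sketch of that idea (not the authors' enhanced G.722.2 scheme, which the abstract does not fully specify):

```python
def interleave(frames, depth):
    """Send order: read the frame sequence column-wise in blocks of `depth`."""
    return [frames[i] for c in range(depth)
            for i in range(c, len(frames), depth)]

def deinterleave(sent, depth):
    """Invert interleave(); length must be a multiple of depth."""
    rows = len(sent) // depth
    out = [None] * len(sent)
    k = 0
    for c in range(depth):
        for r in range(rows):
            out[r * depth + c] = sent[k]  # sent[k] was frames[c + r*depth]
            k += 1
    return out

frames = list(range(12))
sent = interleave(frames, 3)
lost = set(sent[4:7])                 # a burst of three consecutive packets
recovered = deinterleave(sent, 3)
gaps = [i for i, f in enumerate(recovered) if f in lost]
print(gaps)  # [1, 4, 7] - the burst becomes isolated single-frame losses
```

The cost of this spreading is extra latency of one interleaver block, which is the classic trade-off such schemes try to soften.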
\n\n\n
\n \n\n \n \n \n \n \n \n On Partial Response Signaling for MIMO Equalization on Multi-Gbit/s Electrical Interconnects.\n \n \n \n \n\n\n \n Jacobs, L.; Bailleul, J.; Manfredi, P.; Guenach, M.; Ginste, D. V.; and Moeneclaey, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 917-921, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553044,\n  author = {L. Jacobs and J. Bailleul and P. Manfredi and M. Guenach and D. V. Ginste and M. Moeneclaey},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {On Partial Response Signaling for MIMO Equalization on Multi-Gbit/s Electrical Interconnects},\n  year = {2018},\n  pages = {917-921},\n  abstract = {Because of its ability to deal with intersymbol interference (ISI) and crosstalk (XT) over mutually coupled electrical interconnects, multiple-input multiple-output (MIMO) decision feedback equalization (DFE) has proven to be a promising low-cost solution for achieving multi-Gbit/s wireline communication on- and off-chip. However, not only does the channel become very sensitive to manufacturing tolerances at very high symbol rates, the latency in the feedback loop becomes prohibitively large as well. Whereas the former issue has been addressed by adopting a stochastic MIMO approach where (part of) the equalization filters depend on the channel statistics rather than on the actual channel, we tackle in this paper the latency issue by setting to zero the first N taps of the feedback filters. 
Moreover, we show that precoded partial response (PR) signaling can improve the performance of the resulting scheme, although the achieved gain is smaller than in the case of single-input single-output (SISO) equalization.},\n  keywords = {decision feedback equalisers;intersymbol interference;MIMO communication;precoding;feedback filters;precoded partial response signaling;single-input single-output equalization;MIMO equalization;intersymbol interference;mutually coupled electrical interconnects;multiple-input multiple-output decision feedback equalization;SISO equalization;channel statistics;equalization filters;stochastic MIMO approach;wireline communication;MIMO communication;Decision feedback equalizers;Conductors;Europe;Manufacturing;Precoding},\n  doi = {10.23919/EUSIPCO.2018.8553044},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439033.pdf},\n}\n\n
\n
\n\n\n
\n Because of its ability to deal with intersymbol interference (ISI) and crosstalk (XT) over mutually coupled electrical interconnects, multiple-input multiple-output (MIMO) decision feedback equalization (DFE) has proven to be a promising low-cost solution for achieving multi-Gbit/s wireline communication on- and off-chip. However, not only does the channel become very sensitive to manufacturing tolerances at very high symbol rates, the latency in the feedback loop becomes prohibitively large as well. Whereas the former issue has been addressed by adopting a stochastic MIMO approach where (part of) the equalization filters depend on the channel statistics rather than on the actual channel, we tackle in this paper the latency issue by setting to zero the first N taps of the feedback filters. Moreover, we show that precoded partial response (PR) signaling can improve the performance of the resulting scheme, although the achieved gain is smaller than in the case of single-input single-output (SISO) equalization.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Requirement Analysis for Privacy Preserving Biometrics in View of Universal Human Rights and Data Protection Regulation.\n \n \n \n \n\n\n \n Whiskerd, N.; Dittmann, J.; and Vielhauer, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 548-552, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553045,\n  author = {N. Whiskerd and J. Dittmann and C. Vielhauer},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Requirement Analysis for Privacy Preserving Biometrics in View of Universal Human Rights and Data Protection Regulation},\n  year = {2018},\n  pages = {548-552},\n  abstract = {Data Protection (DP) and Universal Human Rights are extremely relevant to biometrics, where inherently private data is used for authentication purposes. In this context this paper stresses that there are significant challenges beyond biometric authentication. For example, it has been shown in the existing literature that medical information of a skin disease from a fingerprint, symptoms of diabetes on the retina, or diseases affecting one's walk can be extracted from biometric recordings. We address the derived privacy challenges in biometrics by a careful review of relevant aspects of the universal human rights from UN documents and the EU General Data Protection Regulation (GDPR) with a first identification and enumeration of relevant attributes. From the derived privacy sensitive attributes and respective requirements, de-identification approaches to protection of soft biometrics in face and fingerprints are explored. 
In consideration of these techniques, there is the question of what constitutes legal and moral biometric signal processing presently in the state-of-the-art, as well as motivation for further work towards fulfilling the criteria.},\n  keywords = {biometrics (access control);data protection;legislation;security of data;requirement analysis;privacy preserving biometrics;universal human rights;biometric authentication;EU General Data Protection Regulation;moral biometric signal;private data;privacy sensitive attributes;GDPR;soft biometrics protection;de-identification approaches;Face;Fingerprint recognition;Privacy;Signal processing;Bioinformatics;Data protection;privacy;soft biometrics;data protection;human rights;GDPR},\n  doi = {10.23919/EUSIPCO.2018.8553045},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437894.pdf},\n}\n\n
\n
\n\n\n
\n Data Protection (DP) and Universal Human Rights are extremely relevant to biometrics, where inherently private data is used for authentication purposes. In this context, this paper stresses that there are significant challenges beyond biometric authentication. For example, it has been shown in the existing literature that medical information such as a skin disease from a fingerprint, symptoms of diabetes on the retina, or diseases affecting one's gait can be extracted from biometric recordings. We address the derived privacy challenges in biometrics by a careful review of relevant aspects of the universal human rights from UN documents and the EU General Data Protection Regulation (GDPR), with an initial identification and enumeration of relevant attributes. From the derived privacy-sensitive attributes and respective requirements, de-identification approaches to the protection of soft biometrics in faces and fingerprints are explored. In consideration of these techniques, there is the question of what constitutes legal and moral biometric signal processing in the present state of the art, as well as motivation for further work towards fulfilling the criteria.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n High Frequency Noise Detection and Handling in ECG Signals.\n \n \n \n \n\n\n \n Le, K.; Eftestøl, T.; Engan, K.; Ørn, S.; and Kleiven, Ø.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 46-50, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"HighPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553046,\n  author = {K. Le and T. EftestØl and K. Engan and S. Ørn and Ø. Kleiven},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {High Frequency Noise Detection and Handling in ECG Signals},\n  year = {2018},\n  pages = {46-50},\n  abstract = {After acquisition of new clinical electrocardiogram (ECG) signals the first step is often to preprocess and have a signal quality assessment to uncover noise. There might be restriction on the signal length and other issue that impose limitation where it is not possible to discard the whole signal if noise is present. Thus there is a great need to retain as much noise free regions as possible. A noise detection method is evaluated on a manually annotated subset (2146 leads) of a data base of 12-lead ECG recordings from 1006 bicycle race participants. The aim is to apply the noise detector on the unlabelled part of the data set before any further analysis is conducted. The proposed noise detector can be divided into 3 parts: 1) Select a high frequency signal as a base signal. 2) Apply a thresholding strategy on the base signal. 3) Use a noise detection strategy. In this work receiver operating characteristic (ROC) curve and area under the curve (AUC) will be used to assess a high frequency noise detector designed for ECG signals. Even though ROC analysis is widely used to assess prediction models, it has its own limitation. However, it is a good starting point to assess discriminatory ability. To generate the ROC curve the performance evaluation is based on sample-level. That is, each sample has a label whether it is noise or not. The threshold strategy and the chosen threshold will be the varying factor to generate ROC curves. The best model has an average AUC of 0.862, which shows a good detector to discriminate noise. 
This threshold strategy will be used for noise detection on the unlabelled part of the data set.},\n  keywords = {electrocardiography;medical signal processing;sensitivity analysis;high frequency noise detection;ECG signals;clinical electrocardiogram;signal quality assessment;noise free regions;noise detection method;12-lead ECG recordings;high frequency signal;high frequency noise detector;ROC analysis;receiver operating characteristic curve;signal thresholding strategy;Electrocardiography;Detectors;Noise reduction;Europe;Signal processing;Standards},\n  doi = {10.23919/EUSIPCO.2018.8553046},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437403.pdf},\n}\n\n
\n
\n\n\n
\n After acquisition of new clinical electrocardiogram (ECG) signals, the first step is often to preprocess them and assess signal quality to uncover noise. There may be restrictions on the signal length, and other issues impose limitations that make it impossible to discard the whole signal when noise is present. Thus there is a great need to retain as many noise-free regions as possible. A noise detection method is evaluated on a manually annotated subset (2146 leads) of a database of 12-lead ECG recordings from 1006 bicycle race participants. The aim is to apply the noise detector to the unlabelled part of the data set before any further analysis is conducted. The proposed noise detector can be divided into 3 parts: 1) Select a high frequency signal as a base signal. 2) Apply a thresholding strategy on the base signal. 3) Use a noise detection strategy. In this work the receiver operating characteristic (ROC) curve and the area under the curve (AUC) are used to assess a high frequency noise detector designed for ECG signals. Even though ROC analysis is widely used to assess prediction models, it has its own limitations; however, it is a good starting point for assessing discriminatory ability. The performance evaluation used to generate the ROC curve is carried out at the sample level, i.e. each sample is labelled as noise or not. The thresholding strategy and the chosen threshold are the varying factors used to generate the ROC curves. The best model has an average AUC of 0.862, indicating a detector that discriminates noise well. This thresholding strategy will be used for noise detection on the unlabelled part of the data set.\n
\n\n\n
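The three-part detector described above can be sketched as follows; the absolute first difference standing in for the "high frequency base signal" and the fixed threshold are illustrative assumptions, not the paper's actual choices:

```python
def high_freq_base(x):
    """Part 1: crude high-pass 'base signal' via absolute first difference."""
    return [abs(x[i] - x[i - 1]) for i in range(1, len(x))]

def detect_noise(x, thr):
    """Parts 2-3: threshold the base signal into sample-level noise labels."""
    return [1 if b > thr else 0 for b in high_freq_base(x)]

clean = [0.0, 0.1, 0.2, 0.1, 0.0]
noisy = clean[:3] + [2.0, -2.0] + clean[3:]  # inject a high-frequency burst
labels = detect_noise(noisy, thr=0.5)
print(labels)  # [0, 0, 1, 1, 1, 0] - only the burst region is flagged
```

Sweeping `thr` over a range and comparing the labels against annotations at each setting is exactly what traces out the sample-level ROC curve the abstract evaluates.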
\n\n\n
\n \n\n \n \n \n \n \n \n A Dynamic Model of Synthetic Resting-State Brain Hemodynamics.\n \n \n \n \n\n\n \n Ghorbani Afkhami, R.; Low, K.; Walker, F.; and Johnson, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 96-100, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553049,\n  author = {R. {Ghorbani Afkhami} and K. Low and F. Walker and S. Johnson},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Dynamic Model of Synthetic Resting-State Brain Hemodynamics},\n  year = {2018},\n  pages = {96-100},\n  abstract = {Near infrared spectroscopy (NIRS) is an emerging field of brain study. From an engineering perspective, the absence of a ground truth signal or a model for producing synthetic data has hindered understanding of the underlying elements of this signal and validating of existing algorithms. In this paper, a dynamic model of artificial NIRS signal is proposed. The model incorporates arterial pulsations, its possible frequency drifts, Mayer waves, respiratory waves and other very low frequency components. Parameter selection and model fitting has been carried out using measurements from a NIRS database. To be general in the process of parameter selection, our dataset included 4 NIRS devices and 256 channels for each subject, covering all the scalp and therefore providing realistic measures of the varying parameters. 
Results are compared with the real data in time and frequency domains, both showing high level of resemblance.},\n  keywords = {brain;haemodynamics;infrared spectra;neurophysiology;pneumodynamics;NIRS devices;time domains;near infrared spectroscopy;frequency drifts;synthetic resting-state brain hemodynamics;NIRS database;model fitting;low frequency components;respiratory waves;Mayer waves;arterial pulsations;artificial NIRS signal;synthetic data;ground truth signal;dynamic model;frequency domains;parameter selection;Brain modeling;Heart rate;Signal processing;Bandwidth;Frequency-domain analysis;Standards;Electrocardiography;near infrared spectroscopy;synthetic signal;brain hemodynamics},\n  doi = {10.23919/EUSIPCO.2018.8553049},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435948.pdf},\n}\n\n
\n
\n\n\n
\n Near infrared spectroscopy (NIRS) is an emerging field of brain study. From an engineering perspective, the absence of a ground truth signal or a model for producing synthetic data has hindered both understanding of the underlying elements of this signal and validation of existing algorithms. In this paper, a dynamic model of an artificial NIRS signal is proposed. The model incorporates arterial pulsations and their possible frequency drifts, Mayer waves, respiratory waves and other very low frequency components. Parameter selection and model fitting have been carried out using measurements from a NIRS database. To keep the parameter selection general, our dataset included 4 NIRS devices and 256 channels for each subject, covering the whole scalp and therefore providing realistic measures of the varying parameters. Results are compared with the real data in the time and frequency domains, both showing a high level of resemblance.\n
\n\n\n
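In the spirit of the model described above, a toy resting-state signal can be composed as a sum of an arterial pulsation, a respiratory wave and a Mayer wave. The sample rate, frequencies and amplitudes below are typical illustrative values, not the fitted parameters from the paper (which also models frequency drift and other very-low-frequency terms):

```python
import math

FS = 10.0  # sample rate in Hz (assumed)

def synthetic_nirs(n, fs=FS):
    """Sum of sinusoidal components typical of resting-state hemodynamics."""
    comps = [(1.0, 1.1),   # (amplitude, Hz): arterial pulsation
             (0.4, 0.25),  # respiratory wave
             (0.3, 0.1)]   # Mayer wave
    return [sum(a * math.sin(2 * math.pi * f * k / fs) for a, f in comps)
            for k in range(n)]

sig = synthetic_nirs(100)
print(len(sig))  # 100 samples, bounded by the summed amplitudes (1.7)
```

All component frequencies sit well below the 5 Hz Nyquist limit of the assumed 10 Hz sampling, so the sketch is at least self-consistent.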
\n\n\n
\n \n\n \n \n \n \n \n \n A Colour Hit-or-Miss Transform Based on a Rank Ordered Distance Measure.\n \n \n \n \n\n\n \n Macfarlane, F.; Murray, P.; Marshall, S.; Perret, B.; Evans, A.; and White, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 588-592, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553050,\n  author = {F. Macfarlane and P. Murray and S. Marshall and B. Perret and A. Evans and H. White},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Colour Hit-or-Miss Transform Based on a Rank Ordered Distance Measure},\n  year = {2018},\n  pages = {588-592},\n  abstract = {The Hit-or-Miss Transform (HMT) is a powerful morphological operation that can be utilised in many digital image analysis problems. Its original binary definition and its extension to grey-level images have seen it applied to various template matching and object detection tasks. However, further extending the transform to incorporate colour or multivariate images is problematic since there is no general or intuitive way of ordering data which allows the formal definition of morphological operations in the traditional manner. In this paper, instead of following the usual strategy for Mathematical Morphology, based on the definition of a total order in the colour space, we propose a transform that relies on a colour or multivariate distance measure. As with the traditional HMT operator, our proposed transform uses two structuring elements (SE) - one for the foreground and one for the background - and retains the idea that a good fitting is obtained when the foreground SE is a close match to the image and the background SE matches the image complement. This allows for both flat and non-flat structuring elements to be used in object detection. 
Furthermore, the use of ranking operations on the computed distances allows the operator to be robust to noise and partial occlusion of objects.},\n  keywords = {image colour analysis;image matching;mathematical morphology;object detection;transforms;Mathematical Morphology;multivariate distance measure;object detection;rank ordered distance measure;grey-level images;template matching;digital image analysis;colour hit-or-miss transform;HMT operator;Image color analysis;Transforms;Morphology;Object detection;Signal processing algorithms;Shape;Europe;Image processing;Mathematical morphology;Hit-or-Miss Transform;Template matching;Object detection},\n  doi = {10.23919/EUSIPCO.2018.8553050},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435168.pdf},\n}\n\n
\n
\n\n\n
\n The Hit-or-Miss Transform (HMT) is a powerful morphological operation that can be utilised in many digital image analysis problems. Its original binary definition and its extension to grey-level images have seen it applied to various template matching and object detection tasks. However, further extending the transform to incorporate colour or multivariate images is problematic since there is no general or intuitive way of ordering data which allows the formal definition of morphological operations in the traditional manner. In this paper, instead of following the usual strategy for Mathematical Morphology, based on the definition of a total order in the colour space, we propose a transform that relies on a colour or multivariate distance measure. As with the traditional HMT operator, our proposed transform uses two structuring elements (SE) - one for the foreground and one for the background - and retains the idea that a good fitting is obtained when the foreground SE is a close match to the image and the background SE matches the image complement. This allows for both flat and non-flat structuring elements to be used in object detection. Furthermore, the use of ranking operations on the computed distances allows the operator to be robust to noise and partial occlusion of objects.\n
\n\n\n
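The robustness claim above rests on replacing a worst-case fit with a rank-ordered statistic of the per-pixel distances. A scalar sketch of that idea (scalar absolute difference stands in for the paper's colour/multivariate distance, and all names are illustrative):

```python
def rank_distance(values, r):
    """r-th order statistic of a list of distances."""
    return sorted(values)[r]

def fit_score(window, se, r=1):
    """Per-pixel distances between an image window and an SE template,
    summarised by a rank-ordered statistic that ignores the r worst pixels."""
    dists = [abs(a - b) for a, b in zip(window, se)]
    return rank_distance(dists, len(dists) - 1 - r)

se = [10, 10, 10, 10]
clean = [10, 11, 10, 9]
occluded = [10, 11, 200, 9]   # one corrupted pixel
print(fit_score(clean, se), fit_score(occluded, se))  # 1 1 - same score
```

Because the single corrupted pixel is discarded by the ranking, the occluded window scores as well as the clean one, which is precisely the noise and partial-occlusion robustness the abstract describes.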
\n\n\n
\n \n\n \n \n \n \n \n \n Using Acoustic Parameters for Intelligibility Prediction of Reverberant Speech.\n \n \n \n \n\n\n \n Alghamdi, A.; Chan, W.; and Fogerty, D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2534-2538, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"UsingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553051,\n  author = {A. Alghamdi and W. Chan and D. Fogerty},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Using Acoustic Parameters for Intelligibility Prediction of Reverberant Speech},\n  year = {2018},\n  pages = {2534-2538},\n  abstract = {This work addresses the problem of predicting the subjective intelligibility of reverberant speech. Using new subjective listening test data, we evaluate the performance of three objective intelligibility measures that can be computed from the room impulse response. The measures are found to correlate well with the word and phoneme recognition rates of reverberant speech. In particular, one of the examined measures more readily relates to spatial dimensions and hence may be more convenient for environment-tracking acoustic interface applications.},\n  keywords = {acoustic signal processing;architectural acoustics;reverberation;speech enhancement;speech intelligibility;speech recognition;transient response;reverberant speech;subjective listening test data;objective intelligibility measures;room impulse response;phoneme recognition rates;environment-tracking acoustic interface applications;acoustic parameters;intelligibility prediction;subjective intelligibility;word recognition rates;Correlation;Reverberation;Indexes;Acoustic measurements;Degradation;Europe;Reverberant speech;intelligibility measures;room acoustics;room impulse response},\n  doi = {10.23919/EUSIPCO.2018.8553051},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439365.pdf},\n}\n\n
\n
\n\n\n
\n This work addresses the problem of predicting the subjective intelligibility of reverberant speech. Using new subjective listening test data, we evaluate the performance of three objective intelligibility measures that can be computed from the room impulse response. The measures are found to correlate well with the word and phoneme recognition rates of reverberant speech. In particular, one of the examined measures more readily relates to spatial dimensions and hence may be more convenient for environment-tracking acoustic interface applications.\n
\n\n\n
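The measures evaluated above are computed from the room impulse response. One standard parameter of this family (not necessarily among the three the paper examines) is the clarity index C50: the ratio, in dB, of the energy arriving in the first 50 ms to the remaining energy. A minimal sketch with a toy exponentially decaying response:

```python
import math

def clarity_c50(rir, fs):
    """Clarity index C50 from a room impulse response sampled at fs Hz."""
    split = int(0.05 * fs)                     # samples in the first 50 ms
    early = sum(h * h for h in rir[:split])    # early energy
    late = sum(h * h for h in rir[split:])     # late reverberant energy
    return 10.0 * math.log10(early / late)

fs = 1000                                        # toy sample rate (assumption)
rir = [math.exp(-0.01 * n) for n in range(200)]  # toy decaying response
c50 = clarity_c50(rir, fs)
print(round(c50, 2))  # positive dB: early energy dominates this short decay
```

Slower decays (longer reverberation) shift energy past the 50 ms boundary and drive C50 down, which is the mechanism that lets such measures track intelligibility loss.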
\n\n\n
\n \n\n \n \n \n \n \n \n A Layer-wise Score Level Ensemble Framework for Acoustic Scene Classification.\n \n \n \n \n\n\n \n Singh, A.; Thakur, A.; Rajan, P.; and Bhavsar, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 837-841, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553052,\n  author = {A. Singh and A. Thakur and P. Rajan and A. Bhavsar},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Layer-wise Score Level Ensemble Framework for Acoustic Scene Classification},\n  year = {2018},\n  pages = {837-841},\n  abstract = {Scene classification based on acoustic information is a challenging task due to various factors such as the nonstationary nature of the environment and multiple overlapping acoustic events. In this paper, we address the acoustic scene classification problem using SoundNet, a deep convolution neural network, pre-trained on raw audio signals. We propose a classification strategy by combining scores from each layer. This is based on the hypothesis that layers of the deep convolutional network learn complementary information and combining this layer-wise information provides better classification than the features extracted from an individual layer. In addition, we also propose a pooling strategy to reduce the dimensionality of features extracted from different layers of SoundNet. Our experiments on DCASE 2016 acoustic scene classification dataset reveals the effectiveness of this layer-wise ensemble approach. The proposed approach provides a relative improvement of approx. 
30.85 % over the classification accuracy provided by the best individual layer of SoundNet.},\n  keywords = {acoustic signal processing;convolution;feature extraction;feedforward neural nets;pattern classification;layer-wise score level ensemble framework;acoustic information;nonstationary nature;multiple overlapping acoustic events;acoustic scene classification problem;deep convolution neural network;raw audio signals;layer-wise information;pooling strategy;feature extraction;acoustic scene classification dataset;Feature extraction;Support vector machines;Acoustics;Convolution;Predictive models;Training;Task analysis},\n  doi = {10.23919/EUSIPCO.2018.8553052},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439421.pdf},\n}\n\n
\n
\n\n\n
\n Scene classification based on acoustic information is a challenging task due to various factors such as the nonstationary nature of the environment and multiple overlapping acoustic events. In this paper, we address the acoustic scene classification problem using SoundNet, a deep convolutional neural network pre-trained on raw audio signals. We propose a classification strategy that combines scores from each layer. This is based on the hypothesis that layers of the deep convolutional network learn complementary information, and that combining this layer-wise information provides better classification than the features extracted from an individual layer. In addition, we also propose a pooling strategy to reduce the dimensionality of features extracted from different layers of SoundNet. Our experiments on the DCASE 2016 acoustic scene classification dataset reveal the effectiveness of this layer-wise ensemble approach. The proposed approach provides a relative improvement of approximately 30.85% over the classification accuracy of the best individual layer of SoundNet.\n
\n\n\n
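The layer-wise fusion described above can be sketched as simple score averaging; the per-layer score vectors below are made up for illustration, and the paper's actual pipeline trains per-layer classifiers on pooled SoundNet features before fusing:

```python
def ensemble_predict(layer_scores):
    """Average per-layer class scores and return (argmax class, fused scores)."""
    n_classes = len(layer_scores[0])
    fused = [sum(s[c] for s in layer_scores) / len(layer_scores)
             for c in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__), fused

# three layers scoring three acoustic-scene classes (made-up numbers)
scores = [[0.5, 0.3, 0.2],
          [0.2, 0.6, 0.2],
          [0.1, 0.7, 0.2]]
label, fused = ensemble_predict(scores)
print(label)  # 1 - the class favoured by two of the three layers wins
```

The point of score-level (rather than feature-level) fusion is that each layer keeps its own classifier, so complementary layers can outvote a single confidently wrong one.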
\n\n\n
\n \n\n \n \n \n \n \n \n Privacy-Preserving Indexing of Iris-Codes with Cancelable Bloom Filter-based Search Structures.\n \n \n \n \n\n\n \n Drozdowski, P.; Garg, S.; Rathgeb, C.; Gomez-Barrero, M.; Chang, D.; and Busch, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2360-2364, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Privacy-PreservingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553053,\n  author = {P. Drozdowski and S. Garg and C. Rathgeb and M. Gomez-Barrero and D. Chang and C. Busch},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Privacy-Preserving Indexing of Iris-Codes with Cancelable Bloom Filter-based Search Structures},\n  year = {2018},\n  pages = {2360-2364},\n  abstract = {Protecting the privacy of the enrolled subjects is an important requirement expected from biometric systems. In recent years, numerous template protection schemes have been proposed, but so far none of them have been shown to be suitable for indexing (workload reduction) in the computationally expensive identification mode. This paper presents, to the best of the authors' knowledge, the first method in the scientific literature for indexing protected iris templates. It is based on applying random permutations to Iris-Code rows, and subsequent indexing using Bloom filters and binary search trees. In a security evaluation, the unlinkability, irreversibility and renewability of the method are demonstrated quantitatively. The biometric performance and workload reduction are assessed in an open-set identification scenario on the IITD and CASIA-Iris-Thousand datasets. 
The method exhibits high biometric performance and reduces the required computational workload to less than 5% of the baseline Iris-Code system.},\n  keywords = {biometrics (access control);data privacy;data structures;feature extraction;iris recognition;random functions;search problems;privacy-preserving indexing;iris-codes;cancelable bloom filter-based search structures;biometric systems;workload reduction;computationally expensive identification mode;random permutations;Iris-Code rows;subsequent indexing;binary search trees;security evaluation;open-set identification scenario;baseline Iris-Code system;template protection schemes;enrolled subject privacy;Indexing;Probes;Europe;Iris recognition;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553053},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437345.pdf},\n}\n\n
\n
\n\n\n
\n Protecting the privacy of the enrolled subjects is an important requirement expected from biometric systems. In recent years, numerous template protection schemes have been proposed, but so far none of them have been shown to be suitable for indexing (workload reduction) in the computationally expensive identification mode. This paper presents, to the best of the authors' knowledge, the first method in the scientific literature for indexing protected iris templates. It is based on applying random permutations to Iris-Code rows, and subsequent indexing using Bloom filters and binary search trees. In a security evaluation, the unlinkability, irreversibility and renewability of the method are demonstrated quantitatively. The biometric performance and workload reduction are assessed in an open-set identification scenario on the IITD and CASIA-Iris-Thousand datasets. The method exhibits high biometric performance and reduces the required computational workload to less than 5% of the baseline Iris-Code system.\n
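The Bloom-filter search structures this entry relies on can be illustrated generically (a minimal sketch of a standard Bloom filter, not the authors' cancelable iris-indexing scheme; the class name and parameters are illustrative): membership is stored in a fixed-size bit array via several hashes, which gives a compact, non-invertible index with no false negatives.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions per item in an m-bit array."""
    def __init__(self, m_bits=256, k_hashes=3):
        self.m, self.k, self.bits = m_bits, k_hashes, 0

    def _positions(self, item):
        # Derive k independent positions by salting a cryptographic hash
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest, "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        # No false negatives; false positives occur with small probability
        return all((self.bits >> p) & 1 for p in self._positions(item))

bf = BloomFilter()
for code in ["template-a", "template-b"]:
    bf.add(code)
```

Queries for inserted items always succeed, while absent items fail with high probability; the paper builds binary search trees over such filters to prune the identification workload.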
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Graph Spectral Domain Shape Representation.\n \n \n \n \n\n\n \n Alwaely, B.; and Abhayaratne, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 598-602, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"GraphPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553054,\n  author = {B. Alwaely and C. Abhayaratne},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Graph Spectral Domain Shape Representation},\n  year = {2018},\n  pages = {598-602},\n  abstract = {One of the major challenges in shape matching is recognising and interpreting the small variations in objects that are distinctly similar in their global structure, as in the well-known ETU10 silhouette dataset and the Tool dataset. The solution lies in modelling these variations with numerous precise details. This paper presents a novel approach based on fitting a shape's local details into adaptive spectral graph domain features. The proposed framework constructs an adaptive graph model on the boundaries of silhouette images based on a threshold, in such a way that small differences are revealed. This is followed by feature extraction in the spectral domain for shape representation. The proposed method shows that interpreting local details improves the accuracy levels by 2% and 7% for the two datasets mentioned above, respectively.},\n  keywords = {feature extraction;graph theory;image classification;image matching;image representation;shape recognition;graph spectral domain shape representation;shape matching;global structure;Tool dataset;fitting shape;adaptive spectral graph domain features;silhouette images;feature extraction;local details;ETU10 silhouette dataset;Shape;Eigenvalues and eigenfunctions;Tools;Two dimensional displays;Machine learning;Europe;Signal processing;Graph spectral analysis;Graph spectral features;Shape matching;Adaptive graph connectivity},\n  doi = {10.23919/EUSIPCO.2018.8553054},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437408.pdf},\n}\n\n
\n
\n\n\n
\n One of the major challenges in shape matching is recognising and interpreting the small variations in objects that are distinctly similar in their global structure, as in the well-known ETU10 silhouette dataset and the Tool dataset. The solution lies in modelling these variations with numerous precise details. This paper presents a novel approach based on fitting a shape's local details into adaptive spectral graph domain features. The proposed framework constructs an adaptive graph model on the boundaries of silhouette images based on a threshold, in such a way that small differences are revealed. This is followed by feature extraction in the spectral domain for shape representation. The proposed method shows that interpreting local details improves the accuracy levels by 2% and 7% for the two datasets mentioned above, respectively.\n
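As a hedged illustration of graph spectral shape features in general (not the paper's adaptive, threshold-based construction): a closed boundary can be modelled as a cycle graph whose edge weights are distances between consecutive points, and the sorted Laplacian eigenvalues then give a descriptor that does not depend on where the boundary traversal starts.

```python
import numpy as np

def boundary_spectrum(points):
    """Laplacian eigenvalues of the cycle graph over a closed boundary.

    Edge weights are Euclidean distances between consecutive points,
    so the spectrum encodes the boundary's local geometry.
    """
    n = len(points)
    W = np.zeros((n, n))
    for i in range(n):
        j = (i + 1) % n                       # close the cycle
        w = np.linalg.norm(points[i] - points[j])
        W[i, j] = W[j, i] = w
    L = np.diag(W.sum(axis=1)) - W            # combinatorial graph Laplacian
    return np.sort(np.linalg.eigvalsh(L))

# A toy "shape": 32 points sampled on an ellipse
t = np.linspace(0, 2 * np.pi, 32, endpoint=False)
pts = np.stack([2 * np.cos(t), np.sin(t)], axis=1)
spec = boundary_spectrum(pts)
```

Because a cyclic shift of the boundary points only relabels the vertices of the same weighted cycle, the spectrum is unchanged, which is the kind of invariance that makes spectral descriptors attractive for matching.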
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Enhanced Chirp Modulated Golay Code for Ultrasound Diverging Wave Compounding.\n \n \n \n \n\n\n \n Benane, Y. M.; Bujoreanu, D.; Cachard, C.; Nicolas, B.; and Basset, O.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 81-85, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553055,\n  author = {Y. M. Benane and D. Bujoreanu and C. Cachard and B. Nicolas and O. Basset},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An Enhanced Chirp Modulated Golay Code for Ultrasound Diverging Wave Compounding},\n  year = {2018},\n  pages = {81-85},\n  abstract = {In ultrasound imaging, a straightforward way of increasing axial resolution is by shortening the transmitted signals. However, short excitations provide a low echo signal-to-noise ratio (eSNR), which results in low image quality. An alternative solution would be to increase the excitation signal's duration (by using binary Golay codes or chirps), and rely on off-line matched filtering techniques in order to compress the received echoes. Resolution Enhancement Compression (REC) is a coding technique that provides better axial resolution than the conventional pulsing technique while increasing the eSNR. It consists in designing a pre-enhanced chirp that compensates for the most attenuated frequency bands of the ultrasound probe. The objective of this study is to combine orthogonal binary codes with REC pre-enhanced chirps in order to further boost the eSNR provided by REC while keeping its good performance in axial resolution. The method is applied to diverging wave compounding. Pairs of diverging waves are transmitted/received/reconstructed simultaneously thanks to the orthogonality property of the Golay codes. The results show that the proposed method is able to obtain a better image quality than conventional pulse imaging. The axial resolution and the bandwidth were improved by 38%/15% in simulation/experiment, for an excitation signal designed to provide a 39% boost. The contrast-to-noise ratio and the eSNR were improved by 3.5 dB and 18.7 dB, respectively. Acquisition results suggest that the combination of binary codes and modulated enhanced chirps can be implemented in ultrasonic imaging systems.},\n  keywords = {binary codes;biomedical ultrasonics;chirp modulation;Golay codes;image coding;image filtering;image reconstruction;image resolution;matched filters;medical image processing;enhanced chirp modulated Golay code;ultrasound diverging wave compounding;axial resolution;transmitted signals;low image quality;excitation signal;binary Golay codes;received echoes;Resolution Enhancement Compression;coding technique;eSNR;ultrasound probe;orthogonal binary codes;ultrasonic imaging system;low echo signal-noise ratio;off-line match filtering techniques;attenuated frequency bands;REC preenhanced chirps;orthogonality property;contrast-noise ratio;Chirp;Probes;Signal resolution;Ultrasonic imaging;Image resolution;Imaging;Image coding;Diverging wave;pulse compression;chirp modulation;resolution enhancement},\n  doi = {10.23919/EUSIPCO.2018.8553055},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438060.pdf},\n}\n\n
\n
\n\n\n
\n In ultrasound imaging, a straightforward way of increasing axial resolution is by shortening the transmitted signals. However, short excitations provide a low echo signal-to-noise ratio (eSNR), which results in low image quality. An alternative solution would be to increase the excitation signal's duration (by using binary Golay codes or chirps), and rely on off-line matched filtering techniques in order to compress the received echoes. Resolution Enhancement Compression (REC) is a coding technique that provides better axial resolution than the conventional pulsing technique while increasing the eSNR. It consists in designing a pre-enhanced chirp that compensates for the most attenuated frequency bands of the ultrasound probe. The objective of this study is to combine orthogonal binary codes with REC pre-enhanced chirps in order to further boost the eSNR provided by REC while keeping its good performance in axial resolution. The method is applied to diverging wave compounding. Pairs of diverging waves are transmitted/received/reconstructed simultaneously thanks to the orthogonality property of the Golay codes. The results show that the proposed method is able to obtain a better image quality than conventional pulse imaging. The axial resolution and the bandwidth were improved by 38%/15% in simulation/experiment, for an excitation signal designed to provide a 39% boost. The contrast-to-noise ratio and the eSNR were improved by 3.5 dB and 18.7 dB, respectively. Acquisition results suggest that the combination of binary codes and modulated enhanced chirps can be implemented in ultrasonic imaging systems.\n
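The pulse-compression property exploited here is that the two autocorrelations of a binary Golay complementary pair sum to a perfect delta, so no range sidelobes survive after matched filtering. A minimal sketch of the standard recursive construction (chirp modulation and REC pre-enhancement, which are the paper's contributions, are omitted):

```python
import numpy as np

def golay_pair(n_doublings):
    """Build a binary Golay complementary pair of length 2**n_doublings
    via the classic concatenation recursion (a,b) -> ([a b], [a -b])."""
    a = np.array([1.0])
    b = np.array([1.0])
    for _ in range(n_doublings):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(3)                                  # length-8 codes
# Sum of the two autocorrelations: 2N at zero lag, exactly 0 elsewhere
r = np.correlate(a, a, "full") + np.correlate(b, b, "full")
```

This delta-like combined autocorrelation is why the two codes must be transmitted (and compressed) as a pair, which the paper does across successive diverging-wave emissions.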
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n How Much Will Tiny IoT Nodes Profit from Massive Base Station Arrays?.\n \n \n \n \n\n\n \n Becirovic, E.; Björnson, E.; and Larsson, E. G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 832-836, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"HowPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553057,\n  author = {E. Becirovic and E. Björnson and E. G. Larsson},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {How Much Will Tiny IoT Nodes Profit from Massive Base Station Arrays?},\n  year = {2018},\n  pages = {832-836},\n  abstract = {In this paper we study the benefits that Internet-of-Things (IoT) devices will have from connecting to a massive multiple-input-multiple-output (MIMO) base station. In particular, we study how many users could be simultaneously spatially multiplexed and how much the range can be increased by deploying massive base station arrays. We also investigate how the devices can scale down their uplink power, with retained rates, as the number of antennas grows. We consider the uplink and utilize upper and lower bounds on known achievable rate expressions to study the effects of the massive arrays. We conduct a case study where we use simulations in the settings of existing IoT systems to draw realistic conclusions. We find that the gains which ultra-narrowband systems get from utilizing massive MIMO are limited by the bandwidth, and therefore those systems will not be able to spatially multiplex any significant number of users. We also conclude that the power scaling is highly dependent on the nominal signal-to-noise ratio (SNR) in the single-antenna case.},\n  keywords = {antenna arrays;Internet of Things;MIMO communication;space division multiplexing;wireless channels;massive MIMO;tiny IoT nodes;MIMO base station;ultra narrowband systems;signal-to-noise ratio;single-antenna;massive multiple-input-multiple-output base station;Internet-of-Things devices;massive base station arrays;Base stations;Coherence;Bandwidth;Uplink;MIMO communication;Antennas;Signal to noise ratio},\n  doi = {10.23919/EUSIPCO.2018.8553057},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436750.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we study the benefits that Internet-of-Things (IoT) devices will have from connecting to a massive multiple-input-multiple-output (MIMO) base station. In particular, we study how many users could be simultaneously spatially multiplexed and how much the range can be increased by deploying massive base station arrays. We also investigate how the devices can scale down their uplink power, with retained rates, as the number of antennas grows. We consider the uplink and utilize upper and lower bounds on known achievable rate expressions to study the effects of the massive arrays. We conduct a case study where we use simulations in the settings of existing IoT systems to draw realistic conclusions. We find that the gains which ultra-narrowband systems get from utilizing massive MIMO are limited by the bandwidth, and therefore those systems will not be able to spatially multiplex any significant number of users. We also conclude that the power scaling is highly dependent on the nominal signal-to-noise ratio (SNR) in the single-antenna case.\n
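The power-scaling question can be illustrated with a toy numeric sketch (a simplified model with perfect channel state information and an average array gain of M under maximum-ratio combining; this is not the paper's bounds, which also cover imperfect CSI where only a 1/√M scaling is possible): because the effective SNR grows linearly in the antenna count M, a device can cut its transmit power as 1/M and keep the same rate.

```python
import math

def mrc_rate(m_antennas, power, beta=1.0, noise=1.0):
    """Toy achievable uplink rate (bits/s/Hz) with MRC and perfect CSI:
    effective SNR ~ M * p * beta / noise (average array gain M).
    beta is the large-scale channel gain; all values are illustrative."""
    return math.log2(1 + m_antennas * power * beta / noise)

p0 = 1.0
base = mrc_rate(1, p0)                      # single-antenna reference rate
# Scale power down as 1/M: the rate stays at the single-antenna value,
# while keeping the power fixed would instead let the rate grow with M.
scaled = [mrc_rate(m, p0 / m) for m in (10, 100, 1000)]
```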
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Calibration of Radio Interferometers Using Block LDU Decomposition.\n \n \n \n \n\n\n \n Sardarabadi, A. M.; van der Veen , A.; and Koopmans, L. V. E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2688-2692, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553058,\n  author = {A. M. Sardarabadi and A. {van der Veen} and L. V. E. Koopmans},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Calibration of Radio Interferometers Using Block LDU Decomposition},\n  year = {2018},\n  pages = {2688-2692},\n  abstract = {Having an accurate calibration method is crucial for any scientific research done by a radio telescope. Next-generation radio telescopes such as the Square Kilometre Array (SKA) will have a large number of receivers, which will produce exabytes of data per day. In this paper we propose new direction-dependent and direction-independent calibration algorithms that, while requiring much less storage during calibration, converge very fast. The calibration problem can be formulated as a non-linear least squares optimization problem. We show that combining a block-LDU decomposition with Gauss-Newton iterations produces systems of equations with convergent matrices. This allows a significant reduction in complexity per iteration and very fast converging algorithms. We also discuss extensions to direction-dependent calibration. The proposed algorithms are evaluated using simulations.},\n  keywords = {calibration;iterative methods;least squares approximations;matrix algebra;Newton method;optimisation;radiotelescopes;accurate calibration method;scientific research;radio telescope;generation radio;Square Kilometre Array;independent calibration algorithms;nonlinear least square optimization problem;block-LDU decomposition;Gauss-Newton iterations;convergent matrices;direction-dependent calibration;radio interferometers;block LDU decomposition;Calibration;Covariance matrices;Receivers;Mathematical model;Data models;Signal processing algorithms;Radio astronomy;Calibration;Radio Astronomy;Non-Linear Optimization;Covariance Matching},\n  doi = {10.23919/EUSIPCO.2018.8553058},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438736.pdf},\n}\n\n
\n
\n\n\n
\n Having an accurate calibration method is crucial for any scientific research done by a radio telescope. Next-generation radio telescopes such as the Square Kilometre Array (SKA) will have a large number of receivers, which will produce exabytes of data per day. In this paper we propose new direction-dependent and direction-independent calibration algorithms that, while requiring much less storage during calibration, converge very fast. The calibration problem can be formulated as a non-linear least squares optimization problem. We show that combining a block-LDU decomposition with Gauss-Newton iterations produces systems of equations with convergent matrices. This allows a significant reduction in complexity per iteration and very fast converging algorithms. We also discuss extensions to direction-dependent calibration. The proposed algorithms are evaluated using simulations.\n
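The block LDU factorization at the heart of the method is standard linear algebra and easy to sketch: for a 2x2 block matrix M = [[A, B], [B^T, C]], L and U are unit block-triangular and D holds A together with the Schur complement S = C - B^T A^-1 B, so working with M reduces to working with the smaller blocks A and S (a generic numerical check with random symmetric blocks, not the calibration algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); A = A @ A.T + 3 * np.eye(3)  # invertible block
B = rng.standard_normal((3, 2))
C = rng.standard_normal((2, 2)); C = C @ C.T + 3 * np.eye(2)
M = np.block([[A, B], [B.T, C]])

Ainv = np.linalg.inv(A)
S = C - B.T @ Ainv @ B                       # Schur complement of A in M
L = np.block([[np.eye(3), np.zeros((3, 2))], [B.T @ Ainv, np.eye(2)]])
D = np.block([[A, np.zeros((3, 2))], [np.zeros((2, 3)), S]])
U = np.block([[np.eye(3), Ainv @ B], [np.zeros((2, 3)), np.eye(2)]])
# L @ D @ U reconstructs M; solving systems in M now only needs
# solves with the (much smaller) diagonal blocks A and S.
```

Since L and U are unit triangular, det(M) = det(A)·det(S), and inverting M costs two small inversions instead of one large one, which is where the per-iteration complexity reduction comes from.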
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Beat Tracking using Recurrent Neural Network: A Transfer Learning Approach.\n \n \n \n \n\n\n \n Fiocchi, D.; Buccoli, M.; Zanoni, M.; Antonacci, F.; and Sarti, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1915-1919, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BeatPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553059,\n  author = {D. Fiocchi and M. Buccoli and M. Zanoni and F. Antonacci and A. Sarti},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Beat Tracking using Recurrent Neural Network: A Transfer Learning Approach},\n  year = {2018},\n  pages = {1915-1919},\n  abstract = {Deep learning networks have been successfully applied to solve a large number of tasks. The effectiveness of deep learning networks is limited by the amount and the variety of data used for training. For this reason, deep-learning networks can be applied in scenarios where a huge amount of data is available. In music information retrieval, this is the case for popular genres due to the wider availability of annotated music pieces. Finding sufficient and useful data is instead a hard task for less widespread genres such as traditional and folk music. To address this issue, Transfer Learning has been proposed, i.e., to train a network using a large available dataset and then transfer the learned knowledge (the hierarchical representation) to another task. In this work, we propose an approach to apply transfer learning for beat tracking. We use a deep BLSTM-based RNN as the starting network trained on popular music, and we transfer it to track beats of Greek folk music. In order to evaluate the effectiveness of our approach, we collect a dataset of Greek folk music, and we manually annotate the pieces.},\n  keywords = {audio signal processing;information retrieval;learning (artificial intelligence);music;recurrent neural nets;deep learning networks;beat tracking;Greek folk music;recurrent neural network;transfer learning approach;Task analysis;Training;Recurrent neural networks;Logic gates;Feature extraction;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553059},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437897.pdf},\n}\n\n
\n
\n\n\n
\n Deep learning networks have been successfully applied to solve a large number of tasks. The effectiveness of deep learning networks is limited by the amount and the variety of data used for training. For this reason, deep-learning networks can be applied in scenarios where a huge amount of data is available. In music information retrieval, this is the case for popular genres due to the wider availability of annotated music pieces. Finding sufficient and useful data is instead a hard task for less widespread genres such as traditional and folk music. To address this issue, Transfer Learning has been proposed, i.e., to train a network using a large available dataset and then transfer the learned knowledge (the hierarchical representation) to another task. In this work, we propose an approach to apply transfer learning for beat tracking. We use a deep BLSTM-based RNN as the starting network trained on popular music, and we transfer it to track beats of Greek folk music. In order to evaluate the effectiveness of our approach, we collect a dataset of Greek folk music, and we manually annotate the pieces.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Analog-to-Feature (A2F) Conversion for Audio-Event Classification.\n \n \n \n \n\n\n \n Liu, X.; Gönültaş, E.; and Studer, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2275-2279, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Analog-to-FeaturePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553060,\n  author = {X. Liu and E. Gönültaş and C. Studer},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Analog-to-Feature (A2F) Conversion for Audio-Event Classification},\n  year = {2018},\n  pages = {2275-2279},\n  abstract = {Always-on sensors continuously monitor the environment for certain events. Such sensors are often integrated on battery-powered devices, e.g., home automation devices or virtual assistants, which require power-efficient classification pipelines. However, conventional classification pipelines that digitize the analog signals at Nyquist rate followed by digital feature extraction and classification are wasteful in the sense that the “feature rate” is generally much smaller than the Nyquist rate. In this paper, we propose a novel classification pipeline called analog-to-feature (A2F) conversion that directly acquires features in the analog domain using non-uniform wavelet sampling (NUWS). Our approach effectively combines Nyquist-rate sampling and digital feature extraction, which has the potential to significantly reduce the power and costs of signal classification. We demonstrate the efficacy of our approach for the detection of audio events and show that NUWS-based A2F conversion is able to outperform existing methods that use compressive sensing.},\n  keywords = {compressed sensing;feature extraction;signal classification;wavelet transforms;analog signals;conventional classification pipelines;power-efficient classification pipelines;home automation devices;audio-event classification;NUWS-based A2F conversion;audio events;signal classification;Nyquist-rate sampling;analog domain;analog-to-feature conversion;Nyquist rate;digital feature extraction;Feature extraction;Dictionaries;Pipelines;Sparse matrices;Task analysis;Artificial neural networks;Sensors},\n  doi = {10.23919/EUSIPCO.2018.8553060},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437508.pdf},\n}\n\n
\n
\n\n\n
\n Always-on sensors continuously monitor the environment for certain events. Such sensors are often integrated on battery-powered devices, e.g., home automation devices or virtual assistants, which require power-efficient classification pipelines. However, conventional classification pipelines that digitize the analog signals at Nyquist rate followed by digital feature extraction and classification are wasteful in the sense that the “feature rate” is generally much smaller than the Nyquist rate. In this paper, we propose a novel classification pipeline called analog-to-feature (A2F) conversion that directly acquires features in the analog domain using non-uniform wavelet sampling (NUWS). Our approach effectively combines Nyquist-rate sampling and digital feature extraction, which has the potential to significantly reduce the power and costs of signal classification. We demonstrate the efficacy of our approach for the detection of audio events and show that NUWS-based A2F conversion is able to outperform existing methods that use compressive sensing.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Transformed Locally Linear Manifold Clustering.\n \n \n \n \n\n\n \n Maggu, J.; Majumdar, A.; and Chouzenoux, E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1057-1061, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"TransformedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553061,\n  author = {J. Maggu and A. Majumdar and E. Chouzenoux},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Transformed Locally Linear Manifold Clustering},\n  year = {2018},\n  pages = {1057-1061},\n  abstract = {Transform learning is a relatively new analysis formulation for learning a basis to represent signals. This work incorporates the simplest subspace clustering formulation - Locally Linear Manifold Clustering, into the transform learning formulation. The core idea is to perform the clustering task in a transformed domain instead of processing directly the raw samples. The transform analysis step and the clustering are not done piecemeal but are performed jointly through the formulation of a coupled minimization problem. Comparison with state-of-the-art deep learning-based clustering methods and popular subspace clustering techniques shows that our formulation improves upon them.},\n  keywords = {learning (artificial intelligence);minimisation;pattern clustering;transformed Locally Linear Manifold Clustering;relatively new analysis formulation;transform learning formulation;coupled minimization problem;subspace clustering formulation;deep learning-based clustering;subspace clustering techniques;Transforms;Feature extraction;Minimization;Matching pursuit algorithms;Signal processing algorithms;Signal processing;Clustering algorithms;subspace clustering;transform learning;alternating optimization},\n  doi = {10.23919/EUSIPCO.2018.8553061},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436181.pdf},\n}\n\n
\n
\n\n\n
\n Transform learning is a relatively new analysis formulation for learning a basis to represent signals. This work incorporates the simplest subspace clustering formulation - Locally Linear Manifold Clustering, into the transform learning formulation. The core idea is to perform the clustering task in a transformed domain instead of processing directly the raw samples. The transform analysis step and the clustering are not done piecemeal but are performed jointly through the formulation of a coupled minimization problem. Comparison with state-of-the-art deep learning-based clustering methods and popular subspace clustering techniques shows that our formulation improves upon them.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A novel formulation of Independence Detection based on the Sample Characteristic Function.\n \n \n \n \n\n\n \n de Cabrera , F.; and Riba, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2608-2612, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553062,\n  author = {F. {de Cabrera} and J. Riba},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A novel formulation of Independence Detection based on the Sample Characteristic Function},\n  year = {2018},\n  pages = {2608-2612},\n  abstract = {A novel independence test for continuous random sequences is proposed in this paper. The test seeks coherence in a particular fixed-dimension feature space obtained by uniformly sampling the sample characteristic function of the data, providing significant computational advantages over kernel methods. This feature space relates uncorrelation and independence, allowing one to analyze second-order statistics as encountered in traditional signal processing. As a result, the possibility of utilizing well known correlation tools arises, motivating the usage of Canonical Correlation Analysis as the main tool for detecting independence. Comparative simulation results are provided using a model based on fading AWGN channels.},\n  keywords = {correlation methods;random sequences;signal detection;signal sampling;statistical analysis;sample characteristic function;continuous random sequences;particular fixed-dimension feature space;independence testing;independence detection formulation;kernel methods;canonical correlation analysis;fading AWGN channels;Correlation;Kernel;Signal processing;Probability density function;Random variables;Europe;Coherence;Independence Detection;Characteristic Function;Mutual Information;ITL;Kernel;HSIC;CCA},\n  doi = {10.23919/EUSIPCO.2018.8553062},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437971.pdf},\n}\n\n
\n
\n\n\n
\n A novel independence test for continuous random sequences is proposed in this paper. The test seeks coherence in a particular fixed-dimension feature space obtained by uniformly sampling the sample characteristic function of the data, providing significant computational advantages over kernel methods. This feature space relates uncorrelation and independence, allowing one to analyze second-order statistics as encountered in traditional signal processing. As a result, the possibility of utilizing well known correlation tools arises, motivating the usage of Canonical Correlation Analysis as the main tool for detecting independence. Comparative simulation results are provided using a model based on fading AWGN channels.\n
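The underlying idea can be sketched with a deliberately simplified statistic (this is not the paper's CCA-based detector; the grid of frequencies and the max-deviation statistic are illustrative assumptions): under independence the joint characteristic function factorizes, φ_XY(t,s) = φ_X(t)·φ_Y(s), so the deviation of the sampled empirical characteristic functions from this product form is a natural dependence measure.

```python
import numpy as np

def cf_dependence(x, y, grid=(0.5, 1.0, 1.5)):
    """Max |phi_xy(t,s) - phi_x(t)*phi_y(s)| over a small frequency grid,
    using empirical (sample) characteristic functions: near 0 iff the
    samples look independent."""
    stat = 0.0
    for t in grid:
        for s in grid:
            joint = np.mean(np.exp(1j * (t * x + s * y)))
            prod = np.mean(np.exp(1j * t * x)) * np.mean(np.exp(1j * s * y))
            stat = max(stat, abs(joint - prod))
    return stat

rng = np.random.default_rng(1)
x = rng.standard_normal(20000)
y_indep = rng.standard_normal(20000)          # independent of x
y_dep = x + 0.1 * rng.standard_normal(20000)  # strongly dependent on x
```

The paper instead maps the sampled characteristic function into a feature space and runs Canonical Correlation Analysis on it, but the factorization property checked here is what both approaches exploit.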
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Performance Bounds for Change Point and Time Delay Estimation.\n \n \n \n \n\n\n \n Ren, C.; Renaux, A.; and Azarian, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 281-285, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PerformancePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553063,\n  author = {C. Ren and A. Renaux and S. Azarian},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Performance Bounds for Change Point and Time Delay Estimation},\n  year = {2018},\n  pages = {281-285},\n  abstract = {This paper investigates performance bounds for joint estimation of signal change point and time delay estimation between two receivers. This problem is formulated as an estimation of the beginning of a well known message and the time shift of its arrival on two receivers from a noisy source. This scenario could be viewed as an extension of the classical time delay estimation [1], [2] with an additional change in the transmitted message. A theoretical derivation of the Barankin bound and a simplified version of this bound are proposed in order to predict the Maximum Likelihood Estimator (MLE) behavior. Simulations illustrate the validity of both bounds, but it is pointed out that the MLE seems asymptotically efficient for the normalized time shift estimation, contrary to the estimation of the message's starting time.},\n  keywords = {delay estimation;maximum likelihood estimation;signal processing;time delay estimation;maximum likelihood estimator behavior;signal change point estimation;Barankin bound;MLE behavior;message starting time estimation;normalized time shift estimation contrary;Receivers;Delay effects;Synchronization;Maximum likelihood estimation;Probability density function;Signal processing;Performance bounds;Change point estimation;Time delay estimation},\n  doi = {10.23919/EUSIPCO.2018.8553063},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436776.pdf},\n}\n\n
\n
\n\n\n
\n This paper investigates performance bounds for joint estimation of signal change point and time delay estimation between two receivers. This problem is formulated as an estimation of the beginning of a well known message and the time shift of its arrival on two receivers from a noisy source. This scenario could be viewed as an extension of the classical time delay estimation [1], [2] with an additional change in the transmitted message. A theoretical derivation of the Barankin bound and a simplified version of this bound are proposed in order to predict the Maximum Likelihood Estimator (MLE) behavior. Simulations illustrate the validity of both bounds, but it is pointed out that the MLE seems asymptotically efficient for the normalized time shift estimation, contrary to the estimation of the message's starting time.\n
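The classical time delay estimation that this bound extends can be sketched in a few lines (a generic cross-correlation estimator on synthetic data, not the paper's change-point model; the signal length, delay, and noise level are illustrative): the lag that maximizes the cross-correlation between the two receivers' signals estimates the delay.

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_delay = 512, 37
msg = rng.standard_normal(n)                  # known transmitted message

rx1 = msg + 0.1 * rng.standard_normal(n)
# np.roll gives a circularly delayed copy (a toy stand-in for a true delay)
rx2 = np.roll(msg, true_delay) + 0.1 * rng.standard_normal(n)

# Cross-correlate and take the lag of the peak as the delay estimate;
# index n-1 of the 'full' output corresponds to zero lag.
xcorr = np.correlate(rx2, rx1, mode="full")
est_delay = int(np.argmax(xcorr)) - (n - 1)
```

At high SNR this estimator attains the Cramér-Rao regime; the Barankin-type bounds studied in the paper are needed precisely to predict the threshold behavior where peak ambiguities make it break down.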
\n\n\n
\n\n\n
A Novel Method for Sampling Bandlimited Graph Signals.
Olivier Tzamarias, D. E.; Akyazi, P.; and Frossard, P.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 126-130, Sep. 2018.
@InProceedings{8553064,
  author = {D. E. {Olivier Tzamarias} and P. Akyazi and P. Frossard},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {A Novel Method for Sampling Bandlimited Graph Signals},
  year = {2018},
  pages = {126-130},
  abstract = {In this paper we propose a novel vertex based sampling method for k-bandlimited signals lying on arbitrary graphs, that has a reasonable computational complexity and results in low reconstruction error. Our goal is to find the smallest set of vertices that can guarantee a perfect reconstruction of any k-bandlimited signal on any connected graph. We propose to iteratively search for the vertices that yield the minimum reconstruction error, by minimizing the maximum eigenvalue of the error covariance matrix using a linear solver. We compare the performance of our method with state-of-the-art sampling strategies and random sampling on graphs. Experimental results show that our method successfully computes the smallest sample sets on arbitrary graphs without any parameter tuning. It provides a small reconstruction error, and is robust to noise.},
  keywords = {bandlimited signals;computational complexity;covariance matrices;graph theory;iterative methods;signal reconstruction;signal sampling;k-bandlimited signals;arbitrary graphs;low reconstruction error;connected graph;minimum reconstruction error;error covariance matrix;computational complexity;bandlimited graph signal sampling;Signal processing algorithms;Signal processing;Computational complexity;Eigenvalues and eigenfunctions;Bandwidth;Image reconstruction;Europe;Graph signal processing;sampling;spectral graph theory},
  doi = {10.23919/EUSIPCO.2018.8553064},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437079.pdf},
}
Steered Mixture-of-Experts Approximation of Spherical Image Data.
Verhack, R.; Madhu, N.; Van Wallendael, G.; Lambert, P.; and Sikora, T.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 256-260, Sep. 2018.
@InProceedings{8553065,
  author = {R. Verhack and N. Madhu and G. {Van Wallendael} and P. Lambert and T. Sikora},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Steered Mixture-of-Experts Approximation of Spherical Image Data},
  year = {2018},
  pages = {256-260},
  abstract = {Steered Mixture-of-Experts (SMoE) is a novel framework for approximating multidimensional image modalities. Our goal is to provide full Six Degrees-of-Freedom capabilities for camera captured content. Previous research concerned only limited translational movement for which the 4D light field representation is sufficient. However, our goal is to arrive at a representation that allows for unlimited translational-rotational freedom, i.e. our goal is to approximate the full 5D plenoptic function. Until now, SMoE was only applied on Euclidean spaces. However, the plenoptic function contains two spherical coordinate dimensions. In this paper, we propose a methodology to extend the SMoE framework to spherical dimensions. Furthermore, we propose a method to reduce the parameter space to the same two dimensional Euclidean space as for planar 2D images by using a projection of the covariance matrices onto tangent spaces perpendicular to the unit sphere. Finally, we propose a novel training technique for spherical dimensions based on these observations. Experiments performed on omnidirectional 360° images show that the introduction of the dimensionality-reduction projection step results in very low quality loss.},
  keywords = {approximation theory;cameras;covariance matrices;image representation;learning (artificial intelligence);matrix algebra;spherical image data;multidimensional image modalities;camera captured content;4D light field representation;plenoptic function;spherical coordinate dimensions;SMoE framework;spherical dimensions;parameter space;dimensional Euclidean space;planar 2D images;tangent spaces;omnidirectional images;dimensionality-reduction projection;degrees-of-freedom capabilities;steered mixture-of-experts approximation;limited translational movement;unlimited translational-rotational freedom;two dimensional Euclidean space;covariance matrices;training technique;Kernel;Three-dimensional displays;Two dimensional displays;Image color analysis;Covariance matrices;Signal processing;Transform coding;Steered Mixture-of-Experts;360 images;image approximation;plenoptic function},
  doi = {10.23919/EUSIPCO.2018.8553065},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438132.pdf},
}
Respiratory Rate Monitoring by Video Processing Using Local Motion Magnification.
Alinovi, D.; Ferrari, G.; Pisani, F.; and Raheli, R.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1780-1784, Sep. 2018.
@InProceedings{8553066,
  author = {D. Alinovi and G. Ferrari and F. Pisani and R. Raheli},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Respiratory Rate Monitoring by Video Processing Using Local Motion Magnification},
  year = {2018},
  pages = {1780-1784},
  abstract = {Breathing monitoring by non-contact video processing has been the subject of recent research. This paper presents an advanced video processing algorithm for reliable Respiratory Rate (RR) monitoring based on the analysis of local video variations and a motion magnification method. This novel algorithm improves over the existing solutions in terms of estimation accuracy and excision of large body movements unrelated with respiration. Applications to adults and infants are presented to demonstrate the performance of the proposed algorithm and compare it with previous work.},
  keywords = {biomedical measurement;image motion analysis;patient monitoring;pneumodynamics;respiratory protection;video signal processing;video processing;reliable respiratory rate monitoring;breathing monitoring;body movements;estimation accuracy;motion magnification method;Monitoring;Signal processing algorithms;Estimation;Streaming media;Data mining;Transforms;Europe},
  doi = {10.23919/EUSIPCO.2018.8553066},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436693.pdf},
}
Reducing the False Alarm Rate for Face Morph Detection by a Morph Pipeline Footprint Detector.
Neubert, T.; Kraetzer, C.; and Dittmann, J.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1002-1006, Sep. 2018.
@InProceedings{8553067,
  author = {T. Neubert and C. Kraetzer and J. Dittmann},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Reducing the False Alarm Rate for Face Morph Detection by a Morph Pipeline Footprint Detector},
  year = {2018},
  pages = {1002-1006},
  abstract = {In this paper, we introduce a novel multi-level process to reduce the false alarm rate (FAR) of existing state-of-the-art face morph detectors, designed to counter the threat that face morphing attacks represent for face image based authentification scenarios. Therefore, we design a novel morph pipeline footprint detector and a novel verification engine to validate the classification results of these existing detectors. The detectors are based on Benford features derived from JPEG DCT coefficients (in the face region of the image and background) and local derivative pattern features. We evaluate the morph pipeline footprint detector with more than 30,000 images and our morph verification engine with false classified authentic images of state-of-the-art approaches. The evaluation shows that our approach is able to reduce the false alarms by 83.67 %.},
  keywords = {discrete cosine transforms;face recognition;feature extraction;image classification;false classified authentic images;false alarm rate;face morph detection;face morphing attacks;face image;face region;morph verification engine;multilevel process;morph pipeline footprint detector;face morph detectors;JPEG DCT coefficients;Face;Pipelines;Detectors;Feature extraction;Signal processing;Transform coding;Discrete cosine transforms;face morphing;image forensics},
  doi = {10.23919/EUSIPCO.2018.8553067},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570431995.pdf},
}
Tracking and Sensor Fusion in Direction of Arrival Estimation Using Optimal Mass Transport.
Elvander, F.; Haasler, I.; Jakobsson, A.; and Karlsson, J.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1617-1621, Sep. 2018.
@InProceedings{8553068,
  author = {F. Elvander and I. Haasler and A. Jakobsson and J. Karlsson},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Tracking and Sensor Fusion in Direction of Arrival Estimation Using Optimal Mass Transport},
  year = {2018},
  pages = {1617-1621},
  abstract = {In this work, we propose new methods for information fusion and tracking in direction of arrival (DOA) estimation by utilizing an optimal mass transport framework. Sensor array measurements in DOA estimation may not be consistent due to misalignments and calibration errors. By using optimal mass transport as a notion of distance for combining the information obtained from all the sensor arrays, we obtain an approach that can prevent aliasing and is robust to array misalignments. For the case of sequential tracking, the proposed method updates the DOA estimate using the new measurements and an optimal mass transport prior. In the case of sensor fusion, information from several, individual, sensor arrays is combined using a barycenter formulation of optimal mass transport.},
  keywords = {array signal processing;calibration;direction-of-arrival estimation;optimisation;sensor fusion;direction of arrival estimation;sequential tracking;barycenter formulation;information fusion;sensor fusion;DOA estimation;sensor array measurements;optimal mass transport framework;Direction-of-arrival estimation;Covariance matrices;Estimation;Sensor fusion;Array signal processing;Target tracking;Optimal mass transport;Spectral estimation;Direction of arrival;Sensor fusion;Target tracking},
  doi = {10.23919/EUSIPCO.2018.8553068},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437140.pdf},
}
A Comparative Study of Orthogonal Moments for Micro-Doppler Classification.
Machhour, S.; Grivel, E.; Legrand, P.; Corretja, V.; and Magnant, C.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 366-370, Sep. 2018.
@InProceedings{8553069,
  author = {S. Machhour and E. Grivel and P. Legrand and V. Corretja and C. Magnant},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {A Comparative Study of Orthogonal Moments for Micro-Doppler Classification},
  year = {2018},
  pages = {366-370},
  abstract = {Micro-Doppler induced by mechanical vibrating or rotating structures in a radar target is possibly useful for its detection, classification and recognition. In a previous work, pseudo-Zernike moments (PZMs) were used as micro-Doppler features for classification. Despite of their promising classification rates, the choice of PZMs is debatable because other types of moments exist. In this paper, our purpose is to compare various kinds of micro-Doppler features such as Zernike moments, PZMs, orthogonal Mellin-Fourier moments, Legendre moments and Krawtchouk moments in order to evaluate which moments are the most relevant in terms of reconstruction ability, computational cost and micro-Doppler classification rate. Advantages and drawbacks of each family of moments are also given. Through the simulations we carried out, when the signal is disturbed by an additive white noise and the signal-to-noise ratio is low, the use of Krawtchouk moments as micro-Doppler features turns out to be the best compromise.},
  keywords = {Doppler radar;feature extraction;Fourier analysis;image classification;image recognition;image reconstruction;method of moments;radar detection;radar imaging;white noise;Zernike polynomials;PZM;mechanical vibrating structures;rotating structures;additive white noise;pseudoZernike moments;microDoppler classification rate;Krawtchouk moments;Legendre moments;orthogonal Mellin-Fourier moments;microDoppler features;Radar;Time-frequency analysis;Jacobian matrices;Image reconstruction;Spectrogram;Europe;Radar;Micro-Doppler;Features;Moments;Classification},
  doi = {10.23919/EUSIPCO.2018.8553069},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436963.pdf},
}
Theoretical Study of Multiscale Permutation Entropy on Finite-Length Fractional Gaussian Noise.
Dávalos, A.; Jabloun, M.; Ravier, P.; and Buttelli, O.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1087-1091, Sep. 2018.
@InProceedings{8553070,
  author = {A. Dávalos and M. Jabloun and P. Ravier and O. Buttelli},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Theoretical Study of Multiscale Permutation Entropy on Finite-Length Fractional Gaussian Noise},
  year = {2018},
  pages = {1087-1091},
  abstract = {Permutation Entropy has been used as a robust and fast approach to calculate complexity of time series. There have been extensive studies on the properties and behavior of Permutation Entropy on known signals. Similarly, Multiscale Permutation Entropy has been used to analyze the structures at different time scales. Nevertheless, the Permutation Entropy is constrained by signal length, a problem which is accentuated with Multiscaling. We have analyzed the fractional Gaussian noise under a Multiscale Permutation Entropy analysis, taking into account the effect of finite-length signals across all scales. We found the Permutation Entropy value of Fractional Gaussian noise to be invariant to time scale. Nonetheless, a finite-length linear approximation for scale dependency is found as a result solely from the finite-length constrains of the method.},
  keywords = {entropy;Gaussian noise;time series;theoretical study;finite-length Fractional Gaussian noise;signal length;Multiscale Permutation Entropy analysis;Permutation Entropy value;time scale;finite-length linear approximation;Entropy;Gaussian noise;Correlation;Time series analysis;Complexity theory;Europe;Multiscale Permutation Entropy;Fractional Brownian Motion;Fractional Gaussian Noise;Finite-Length Time Series},
  doi = {10.23919/EUSIPCO.2018.8553070},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436753.pdf},
}
Modeling the Pairwise Disparities in High Density Camera Arrays.
Tabus, I.; and Astola, P.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 221-225, Sep. 2018.
@InProceedings{8553071,
  author = {I. Tabus and P. Astola},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Modeling the Pairwise Disparities in High Density Camera Arrays},
  year = {2018},
  pages = {221-225},
  abstract = {We discuss in this paper models for the disparity information needed when pairwise warping the angular views in a light field data set formed of N views. In one scenario of light field data compression, first a set of M reference views is encoded and then each of the remaining views is predicted by warping several reference views using disparity information. The necessary disparity information in this case may be as high as M(N-1) pairwise view disparity maps, estimated and transmitted independently for each pair (reference, target). We propose an estimation model which can be used in a flexible way for any selected configuration of references and predicted views. We study the estimation of the global model from the matching information provided by a pairwise matching program. The model may be defined in several ways, by considering the vertical and horizontal matches at various views and by allowing different model parameters for the regions from a segmentation of the scene. The regions based model is shown to perform better than a single region model. The performance of the model in synthesizing the unseen color views at specified locations in the views array is presented for several configurations of the estimation and prediction sets.},
  keywords = {cameras;data compression;image coding;image colour analysis;image matching;image segmentation;single region model;unseen color views;prediction sets;light field data set;light field data compression;matching information;pairwise matching program;vertical matches;horizontal matches;disparity information;high density camera arrays;scene segmentation;pairwise warping;reference view encoding;pairwise view disparity maps;estimation configurations;Estimation;Mathematical model;Cameras;Image color analysis;Optical imaging;Signal processing;Optical distortion},
  doi = {10.23919/EUSIPCO.2018.8553071},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439449.pdf},
}
A Holistic Automatic Method for Grouping Facial Features Based on Their Appearance.
Fuentes-Hurtado, F.; Diego-Mas, J. A.; Naranjo, V.; and Alcañiz, M.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1322-1326, Sep. 2018.
@InProceedings{8553073,
  author = {F. Fuentes-Hurtado and J. A. Diego-Mas and V. Naranjo and M. Alcañiz},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {A Holistic Automatic Method for Grouping Facial Features Based on Their Appearance},
  year = {2018},
  pages = {1322-1326},
  abstract = {Classification or typology systems used to categorize different human body parts exist for many years. Nevertheless, there are very few taxonomies of facial features. A reason for this might be that classifying isolated facial features is difficult for human observers. Previous works reported low inter-observer and intra-observer agreement in the evaluation of facial features. Therefore, this work presents a computer-based procedure to classify facial features based on their global appearance automatically. First, facial features are located, extracted and aligned using a facial landmark detector. Then, images are characterized using the eigenpictures approach. We then perform a clustering of each type of facial feature using as input the weights extracted from the eigenpictures approach. Finally, we validate the obtained clusterings with humans. This procedure deals with the difficulties associated with classifying features using judgments from human observers and facilitates the development of taxonomies of facial features. Taxonomies obtained with this procedure are presented for eyes, noses and mouths.},
  keywords = {face recognition;feature extraction;image classification;object detection;human observers;facial feature;feature taxonomy;feature clustering;feature extraction;eigenpictures approach;facial landmark detector;classification systems;typology systems;holistic automatic method;Facial features;Mouth;Feature extraction;Taxonomy;Nose;Indexes},
  doi = {10.23919/EUSIPCO.2018.8553073},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437321.pdf},
}
Defect Detection from 3D Ultrasonic Measurements Using Matrix-free Sparse Recovery Algorithms.
Semper, S.; Kirchhof, J.; Wagner, C.; Krieg, F.; Römer, F.; Osman, A.; and Del Galdo, G.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1700-1704, Sep. 2018.
@InProceedings{8553074,
  author = {S. Semper and J. Kirchhof and C. Wagner and F. Krieg and F. Römer and A. Osman and G. {Del Galdo}},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Defect Detection from 3D Ultrasonic Measurements Using Matrix-free Sparse Recovery Algorithms},
  year = {2018},
  pages = {1700-1704},
  abstract = {In this paper, we propose an efficient matrix-free algorithm to reconstruct locations and size of flaws in a specimen from volumetric ultrasound data by means of a native 3D Sparse Signal Recovery scheme using Orthogonal Matching Pursuit (OMP). The efficiency of the proposed approach is achieved in two ways. First, we formulate the dictionary matrix as a block multilevel Toeplitz matrix to minimize redundancy and thus memory consumption. Second, we exploit this specific structure in the dictionary to speed up the correlation step in OMP, which is implemented matrix-free. We compare our method to state-of-the-art, namely 3D Synthetic Aperture Focusing Technique, and show that it delivers a visually comparable performance, while it gains the additional freedom to use further methods such as Compressed Sensing.},
  keywords = {iterative methods;matrix multiplication;signal processing;time-frequency analysis;Toeplitz matrices;ultrasonic measurement;compressed sensing;memory consumption;matrix-free sparse recovery algorithms;native 3D sparse signal recovery scheme;3D synthetic aperture focusing technique;orthogonal matching pursuit;block multilevel Toeplitz matrix;dictionary matrix;OMP;volumetric ultrasound data;3D ultrasonic measurements;defect detection;Sparse matrices;Matching pursuit algorithms;Acoustics;Three-dimensional displays;Ultrasonic variables measurement;Signal processing algorithms;Dictionaries},
  doi = {10.23919/EUSIPCO.2018.8553074},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570432313.pdf},
}
\n
\n\n\n
\n In this paper, we propose an efficient matrix-free algorithm to reconstruct the locations and sizes of flaws in a specimen from volumetric ultrasound data by means of a native 3D Sparse Signal Recovery scheme using Orthogonal Matching Pursuit (OMP). The efficiency of the proposed approach is achieved in two ways. First, we formulate the dictionary matrix as a block multilevel Toeplitz matrix to minimize redundancy and thus memory consumption. Second, we exploit this specific structure in the dictionary to speed up the correlation step in OMP, which is implemented matrix-free. We compare our method to the state of the art, namely the 3D Synthetic Aperture Focusing Technique, and show that it delivers visually comparable performance, while gaining the additional freedom to use further methods such as Compressed Sensing.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-stereo Matching for Light Field Camera Arrays.\n \n \n \n \n\n\n \n Rogge, S.; Ceulemans, B.; Bolsée, Q.; and Munteanu, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 251-255, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-stereoPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553075,\n  author = {S. Rogge and B. Ceulemans and Q. Bolsée and A. Munteanu},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-stereo Matching for Light Field Camera Arrays},\n  year = {2018},\n  pages = {251-255},\n  abstract = {Light field cameras capture information about the incoming light from multiple directions, going beyond classical capturing of light intensity performed by regular RGB cameras. This enables the computation of more accurate depth maps compared to stereo methods based on conventional cameras. However, the very small angular resolution of light field cameras limits their practical use in 3D applications. In this paper, we introduce for the first time in the literature the use of light field camera arrays, with the aim of improving the depth maps while providing a wide field of view. In this context, a novel algorithm for multi-stereo matching based on light field camera arrays is proposed. The disparity maps for the sub-aperture images are computed based on light field camera pairs using a novel multi-scale and multi-window stereo-matching algorithm. A global energy minimization based on belief propagation is proposed to regularize the results. The resulting depth maps are efficiently fused by means of k-means clustering. The proposed approach demonstrates very promising results for accurate 3D scene reconstruction and free navigation applications.},\n  keywords = {cameras;image matching;image reconstruction;image sensors;stereo image processing;multistereo matching;light field camera arrays;light field cameras capture information;incoming light;light intensity;regular RGB cameras;light field camera pairs;multiwindow stereo-matching algorithm;Cameras;Microsoft Windows;Three-dimensional displays;Signal processing algorithms;Estimation;Belief propagation;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553075},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437851.pdf},\n}\n\n
\n
\n\n\n
\n Light field cameras capture information about the incoming light from multiple directions, going beyond classical capturing of light intensity performed by regular RGB cameras. This enables the computation of more accurate depth maps compared to stereo methods based on conventional cameras. However, the very small angular resolution of light field cameras limits their practical use in 3D applications. In this paper, we introduce for the first time in the literature the use of light field camera arrays, with the aim of improving the depth maps while providing a wide field of view. In this context, a novel algorithm for multi-stereo matching based on light field camera arrays is proposed. The disparity maps for the sub-aperture images are computed based on light field camera pairs using a novel multi-scale and multi-window stereo-matching algorithm. A global energy minimization based on belief propagation is proposed to regularize the results. The resulting depth maps are efficiently fused by means of k-means clustering. The proposed approach demonstrates very promising results for accurate 3D scene reconstruction and free navigation applications.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fusion of EEG and fMRI via Soft Coupled Tensor Decompositions.\n \n \n \n \n\n\n \n Chatzichristos, C.; Davies, M.; Escudero, J.; Kofidis, E.; and Theodoridis, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 56-60, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FusionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553077,\n  author = {C. Chatzichristos and M. Davies and J. Escudero and E. Kofidis and S. Theodoridis},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Fusion of EEG and fMRI via Soft Coupled Tensor Decompositions},\n  year = {2018},\n  pages = {56-60},\n  abstract = {Data fusion refers to the joint analysis of multiple datasets which provide complementary views of the same task. In this paper, the problem of jointly analyzing electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI) data is considered. Analyzing both EEG and fMRI measurements is highly beneficial for studying brain function because these modalities have complementary spatiotemporal resolutions: EEG offers good temporal resolution while fMRI offers good spatial resolution. The fusion methods reported so far ignore the underlying multi-way nature of the data in at least one of the modalities and/or rely on very strong assumptions concerning the relation among the respective data sets. In this paper, these two points are addressed by adopting tensor models for both modalities and by following a soft coupling approach to implement the fused analysis. To cope with the subject variability in EEG, the PARAFAC2 model is adopted. The results obtained are compared against those of Parallel ICA and hard coupling alternatives in both simulated and real data. Our results confirm the superiority of tensorial methods over methods based on ICA. In scenarios that do not meet the assumptions underlying hard coupling, the advantage of soft coupled decompositions is clearly demonstrated.},\n  keywords = {biomedical MRI;electroencephalography;image resolution;medical image processing;sensor fusion;spatiotemporal phenomena;tensors;functional magnetic resonance imaging data;electroencephalography;EEG;fMRI;PARAFAC2 model;fused analysis;soft coupling approach;tensor models;fusion methods;complementary spatiotemporal resolutions;brain function;multiple datasets;joint analysis;data fusion;soft coupled tensor decompositions;Functional magnetic resonance imaging;Electroencephalography;Tensile stress;Couplings;Brain modeling;Spatial resolution},\n  doi = {10.23919/EUSIPCO.2018.8553077},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437725.pdf},\n}\n\n
\n
\n\n\n
\n Data fusion refers to the joint analysis of multiple datasets which provide complementary views of the same task. In this paper, the problem of jointly analyzing electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI) data is considered. Analyzing both EEG and fMRI measurements is highly beneficial for studying brain function because these modalities have complementary spatiotemporal resolutions: EEG offers good temporal resolution while fMRI offers good spatial resolution. The fusion methods reported so far ignore the underlying multi-way nature of the data in at least one of the modalities and/or rely on very strong assumptions concerning the relation among the respective data sets. In this paper, these two points are addressed by adopting tensor models for both modalities and by following a soft coupling approach to implement the fused analysis. To cope with the subject variability in EEG, the PARAFAC2 model is adopted. The results obtained are compared against those of Parallel ICA and hard coupling alternatives in both simulated and real data. Our results confirm the superiority of tensorial methods over methods based on ICA. In scenarios that do not meet the assumptions underlying hard coupling, the advantage of soft coupled decompositions is clearly demonstrated.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparse Time-Frequency Representation of Gravitational-Wave signals in Unions of Wilson Bases.\n \n \n \n \n\n\n \n Bammey, Q.; Bacon, P.; Mottin, E. C.; Fraysse, A.; and Jaffard, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1755-1759, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SparsePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553079,\n  author = {Q. Bammey and P. Bacon and E. C. Mottin and A. Fraysse and S. Jaffard},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse Time-Frequency Representation of Gravitational-Wave signals in Unions of Wilson Bases},\n  year = {2018},\n  pages = {1755-1759},\n  abstract = {We investigate the question of obtaining a reduced time-frequency description of a chirp type signal that can be used as a reference pattern in time-frequency searches. This is particularly relevant for searches of transient gravitational waves from astrophysical sources such as the mergers of neutron stars and/or black holes, the main area of this study. Sparse approximation algorithms that allow constraints on the approximation error do not perform well when the decomposition bases are redundant. This study puts in evidence some of the shortcomings of sparse approximation algorithms when dealing with unions of highly correlated bases, a case that currently lacks a comprehensive mathematical analysis, and proposes solutions to mitigate them. We propose a variation of the matching pursuit algorithm that improves its robustness in the context of gravitational-wave pattern construction. We also compare this algorithm to standard sparse approximation methods.},\n  keywords = {approximation theory;iterative methods;signal processing;time-frequency analysis;sparse time-frequency representation;Wilson bases;reduced time-frequency description;chirp type signal;astrophysical sources;neutron stars;black holes;decomposition bases;matching pursuit algorithm;standard sparse approximation methods;transient gravitational-wave signals;sparse approximation error algorithms;mathematical analysis;gravitational wave pattern construction;Matching pursuit algorithms;Approximation algorithms;Signal processing algorithms;Time-frequency analysis;Sparse representation;Approximation error;Chirp},\n  doi = {10.23919/EUSIPCO.2018.8553079},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438007.pdf},\n}\n\n
\n
\n\n\n
\n We investigate the question of obtaining a reduced time-frequency description of a chirp type signal that can be used as a reference pattern in time-frequency searches. This is particularly relevant for searches of transient gravitational waves from astrophysical sources such as the mergers of neutron stars and/or black holes, the main area of this study. Sparse approximation algorithms that allow constraints on the approximation error do not perform well when the decomposition bases are redundant. This study puts in evidence some of the shortcomings of sparse approximation algorithms when dealing with unions of highly correlated bases, a case that currently lacks a comprehensive mathematical analysis, and proposes solutions to mitigate them. We propose a variation of the matching pursuit algorithm that improves its robustness in the context of gravitational-wave pattern construction. We also compare this algorithm to standard sparse approximation methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rate Distortion Optimized Graph Partitioning for Omnidirectional Image Coding.\n \n \n \n \n\n\n \n Rizkallah, M.; De Simone, F.; Maugey, T.; Guillemot, C.; and Frossard, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 897-901, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RatePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553080,\n  author = {M. Rizkallah and F. {De Simone} and T. Maugey and C. Guillemot and P. Frossard},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Rate Distortion Optimized Graph Partitioning for Omnidirectional Image Coding},\n  year = {2018},\n  pages = {897-901},\n  abstract = {Omnidirectional images are spherical signals captured by cameras with a 360-degree field of view. In order to be compressed using existing encoders, these signals are mapped to the planar domain. A commonly used planar representation is the equirectangular one, which corresponds to a non-uniform sampling pattern on the spherical surface. This particularity is not explored in traditional image compression schemes, which treat the input signal as a classical perspective image. In this work, we build a graph-based coder adapted to the spherical surface. We build a graph directly on the sphere. Then, to have computationally feasible graph transforms, we propose a rate-distortion optimized graph partitioning algorithm to achieve an effective trade-off between the distortion of the reconstructed signals, the smoothness of the signal on each subgraph, and the cost of coding the graph partitioning description. Experimental results demonstrate that our method outperforms JPEG coding of planar equirectangular images.},\n  keywords = {data compression;distortion;graph theory;image coding;rate distortion theory;rate distortion optimized graph partitioning;omnidirectional image coding;omnidirectional images;spherical signals;cameras;nonuniform sampling pattern;spherical surface;input signal;classical perspective image;graph-based coder;computationally feasible graph transforms;rate-distortion optimized graph partitioning algorithm;reconstructed signals;graph partitioning description;JPEG coding;planar equirectangular images;planar representation;Image coding;Transforms;Distortion;Encoding;Laplace equations;Eigenvalues and eigenfunctions;Image edge detection},\n  doi = {10.23919/EUSIPCO.2018.8553080},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437741.pdf},\n}\n\n
\n
\n\n\n
\n Omnidirectional images are spherical signals captured by cameras with a 360-degree field of view. In order to be compressed using existing encoders, these signals are mapped to the planar domain. A commonly used planar representation is the equirectangular one, which corresponds to a non-uniform sampling pattern on the spherical surface. This particularity is not explored in traditional image compression schemes, which treat the input signal as a classical perspective image. In this work, we build a graph-based coder adapted to the spherical surface. We build a graph directly on the sphere. Then, to have computationally feasible graph transforms, we propose a rate-distortion optimized graph partitioning algorithm to achieve an effective trade-off between the distortion of the reconstructed signals, the smoothness of the signal on each subgraph, and the cost of coding the graph partitioning description. Experimental results demonstrate that our method outperforms JPEG coding of planar equirectangular images.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Information Fusion based Quality Enhancement for 3D Stereo Images Using CNN.\n \n \n \n \n\n\n \n Jin, Z.; Luo, H.; Luo, L.; Zou, W.; Li, X.; and Steinbach, E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1447-1451, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"InformationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553082,\n  author = {Z. Jin and H. Luo and L. Luo and W. Zou and X. Li and E. Steinbach},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Information Fusion based Quality Enhancement for 3D Stereo Images Using CNN},\n  year = {2018},\n  pages = {1447-1451},\n  abstract = {Stereo images provide users with a vivid 3D watching experience. Supported by per-view depth maps, 3D stereo images can be used to generate any required intermediate view between the given left and right stereo views. However, 3D stereo images lead to higher transmission and storage cost compared to single view images. Based on the binocular suppression theory, mixed-quality stereo images can alleviate this problem by employing different compression ratios on the two views. This causes noticeable visual artifacts when a high compression ratio is adopted and limits free-viewpoint applications. Hence, the low quality image at the receiver side needs to be enhanced to match the high quality one. To address this problem, in this paper we propose an end-to-end fully Convolutional Neural Network (CNN) for enhancing the low quality images in quality asymmetric stereo images by exploiting inter-view correlation. The proposed network achieves an image quality boost of up to 4.6dB and 3.88dB PSNR gain over ordinary JPEG for QF10 and 20, respectively, and an improvement of up to 2.37dB and 2.05dB over the state-of-the-art CNN-based results for QF10 and 20, respectively.},\n  keywords = {data compression;feedforward neural nets;image coding;image reconstruction;stereo image processing;required intermediate view;stereo views;single view images;mixed-quality stereo images;low quality image;quality asymmetric stereo images;image quality boost;quality enhancement;vivid 3D watching experience;per-view depth maps;PSNR gain;Image coding;Transform coding;Convolution;Feature extraction;Image reconstruction;Hafnium;Three-dimensional displays},\n  doi = {10.23919/EUSIPCO.2018.8553082},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437356.pdf},\n}\n\n
\n
\n\n\n
\n Stereo images provide users with a vivid 3D watching experience. Supported by per-view depth maps, 3D stereo images can be used to generate any required intermediate view between the given left and right stereo views. However, 3D stereo images lead to higher transmission and storage cost compared to single view images. Based on the binocular suppression theory, mixed-quality stereo images can alleviate this problem by employing different compression ratios on the two views. This causes noticeable visual artifacts when a high compression ratio is adopted and limits free-viewpoint applications. Hence, the low quality image at the receiver side needs to be enhanced to match the high quality one. To address this problem, in this paper we propose an end-to-end fully Convolutional Neural Network (CNN) for enhancing the low quality images in quality asymmetric stereo images by exploiting inter-view correlation. The proposed network achieves an image quality boost of up to 4.6dB and 3.88dB PSNR gain over ordinary JPEG for QF10 and 20, respectively, and an improvement of up to 2.37dB and 2.05dB over the state-of-the-art CNN-based results for QF10 and 20, respectively.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Steerable Circular Differential Microphone Arrays.\n \n \n \n \n\n\n \n Lovatello, J.; Bernardini, A.; and Sarti, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 11-15, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SteerablePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553083,\n  author = {J. Lovatello and A. Bernardini and A. Sarti},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Steerable Circular Differential Microphone Arrays},\n  year = {2018},\n  pages = {11-15},\n  abstract = {An efficient continuous beam steering method, applicable to differential microphones of any order, has been recently developed. Given two identical reference beams, pointing in two different directions, the method allows to derive a beam of nearly constant shape continuously steerable between those two directions. In this paper, the steering method is applied to robust Differential Microphone Arrays (DMAs) characterized by uniform circular array geometries. In particular, a generalized filter performing the steering operation is defined. The definition of such a filter enables the derivation of closed-form formulas for computing the white noise gain and the directivity factor of the designed steerable differential beamformers for any frequency of interest. A study on the shape invariance of the steered beams is conducted. Applications of the steering approach to first-, second-and third-order robust circular DMAs are presented.},\n  keywords = {acoustic signal processing;array signal processing;beam steering;filtering theory;microphone arrays;white noise;steering operation;closed-form formulas;directivity factor;shape invariance;steered beams;third-order robust circular DMAs;steerable circular Differential Microphone;efficient continuous beam steering method;identical reference;constant shape;robust Differential Microphone Arrays;uniform circular array geometries;generalized filter;steerable differential beamformers;Geometry;Array signal processing;Microphone arrays;Beam steering;Shape},\n  doi = {10.23919/EUSIPCO.2018.8553083},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436759.pdf},\n}\n\n
\n
\n\n\n
\n An efficient continuous beam steering method, applicable to differential microphones of any order, has been recently developed. Given two identical reference beams pointing in two different directions, the method allows one to derive a beam of nearly constant shape that is continuously steerable between those two directions. In this paper, the steering method is applied to robust Differential Microphone Arrays (DMAs) characterized by uniform circular array geometries. In particular, a generalized filter performing the steering operation is defined. The definition of such a filter enables the derivation of closed-form formulas for computing the white noise gain and the directivity factor of the designed steerable differential beamformers for any frequency of interest. A study on the shape invariance of the steered beams is conducted. Applications of the steering approach to first-, second-, and third-order robust circular DMAs are presented.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Coherence Constrained Alternating Least Squares.\n \n \n \n \n\n\n \n Farias, R. C.; de Morais Goulart , J. H.; and Comon, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 613-617, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CoherencePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553084,\n  author = {R. C. Farias and J. H. {de Morais Goulart} and P. Comon},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Coherence Constrained Alternating Least Squares},\n  year = {2018},\n  pages = {613-617},\n  abstract = {In this paper we present a modification of alternating least squares (ALS) for tensor canonical polyadic approximation that takes into account mutual coherence constraints. The proposed algorithm can be used to ensure well-posedness of the tensor approximation problem during ALS iterates and so is an alternative to existing approaches. We conduct tests with the proposed approach by using it as initialization of unconstrained alternating least squares in difficult cases, when the underlying tensor model factors have nearly collinear columns and the unconstrained approach is prone to a degenerate behavior, failing to converge or converging slowly to an acceptable solution. The results of the tested cases indicate that by using such an initialization the unconstrained approach seems to avoid such a behavior.},\n  keywords = {least squares approximations;matrix decomposition;tensors;unconstrained alternating least squares;collinear columns;coherence constrained alternating least;tensor canonical polyadic approximation;account mutual coherence constraints;tensor approximation problem;ALS;tensor model factors;Tensile stress;Coherence;Approximation algorithms;Europe;Signal processing algorithms;Matrix decomposition;Optimization},\n  doi = {10.23919/EUSIPCO.2018.8553084},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570428093.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we present a modification of alternating least squares (ALS) for tensor canonical polyadic approximation that takes into account mutual coherence constraints. The proposed algorithm can be used to ensure well-posedness of the tensor approximation problem during ALS iterates and so is an alternative to existing approaches. We conduct tests with the proposed approach by using it as initialization of unconstrained alternating least squares in difficult cases, when the underlying tensor model factors have nearly collinear columns and the unconstrained approach is prone to a degenerate behavior, failing to converge or converging slowly to an acceptable solution. The results of the tested cases indicate that by using such an initialization the unconstrained approach seems to avoid such a behavior.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Decision Statistics for Noncoherent Signal Detection in Multi-Element Antenna Arrays.\n \n \n \n \n\n\n \n Bolkhovskaya, O.; and Maltsev, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1262-1266, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DecisionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553085,\n  author = {O. Bolkhovskaya and A. Maltsev},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Decision Statistics for Noncoherent Signal Detection in Multi-Element Antenna Arrays},\n  year = {2018},\n  pages = {1262-1266},\n  abstract = {This article gives a detailed analysis of the characteristics of the exact statistic for the optimal noncoherent signal detection in multi-element antenna arrays. This task is important for reliable initial detection of User Equipment (UE) signals coming from an unknown direction in LTE random access procedure. It is shown that application of the exact statistic is complex because the thresholds of the Neyman-Pearson (NP) criterion depend on the SNR. A detailed comparison of the exact and various approximate decision statistics has been carried out. The results show that the choice of the statistic can make noticeable impact on the probability of missed detection for multi-element antenna array. A new combined decision statistic has been proposed, whose characteristics are close to the exact statistic.},\n  keywords = {antenna arrays;approximation theory;array signal processing;decision theory;probability;random processes;signal detection;statistical analysis;decision statistics;user equipment signals;UE signal;Neyman-Pearson criterion;NP criterion;SNR;probability;LTE random access procedure;optimal noncoherent signal detection;multielement antenna array;Antenna arrays;Signal to noise ratio;Array signal processing;Probability;Impedance matching;noncoherent detection;decision statistics;multielement antenna arrays},\n  doi = {10.23919/EUSIPCO.2018.8553085},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434639.pdf},\n}\n\n
\n
\n\n\n
\n This article gives a detailed analysis of the characteristics of the exact statistic for optimal noncoherent signal detection in multi-element antenna arrays. This task is important for reliable initial detection of User Equipment (UE) signals coming from an unknown direction in the LTE random access procedure. It is shown that application of the exact statistic is complex because the thresholds of the Neyman-Pearson (NP) criterion depend on the SNR. A detailed comparison of the exact and various approximate decision statistics has been carried out. The results show that the choice of the statistic can have a noticeable impact on the probability of missed detection for a multi-element antenna array. A new combined decision statistic has been proposed, whose characteristics are close to those of the exact statistic.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Proactive Computation Caching Policies For 5G-and-Beyond Mobile Edge Cloud Networks.\n \n \n \n \n\n\n \n di Pietro , N.; and Strinati, E. C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 792-796, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ProactivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553086,\n  author = {N. {di Pietro} and E. C. Strinati},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Proactive Computation Caching Policies For 5G-and-Beyond Mobile Edge Cloud Networks},\n  year = {2018},\n  pages = {792-796},\n  abstract = {Computation caching is a novel strategy to improve the performance and the quality of service of mobile edge cloud networks. It consists in storing in local memories situated at the edge of the network, here mobile access points, the already processed results of computations that users offload to the mobile edge cloud. The goal of this technique is to avoid redundant and repetitive processing of the same tasks, thus streamlining the offloading process and improving the exploitation of both users' and network's resources. In this paper, three different computation caching policies are proposed and evaluated. They are based on three main parameters: the popularity of offloadable tasks, the size of their inputs, and the size of their results. Numerical simulations show that good policies need to take into account these three parameters altogether.},\n  keywords = {5G mobile communication;cloud computing;mobile computing;numerical analysis;quality of service;proactive computation caching policies;mobile access points;redundant processing;repetitive processing;offloading process;5G-and-beyond mobile edge cloud networks;quality of service;numerical simulations;Task analysis;Optimized production technology;Measurement;Cloud computing;Cache memory;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553086},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437878.pdf},\n}\n\n
\n
\n\n\n
\n Computation caching is a novel strategy to improve the performance and the quality of service of mobile edge cloud networks. It consists of storing, in local memories at the edge of the network (here, mobile access points), the already-processed results of computations that users offload to the mobile edge cloud. The goal of this technique is to avoid redundant and repetitive processing of the same tasks, thus streamlining the offloading process and improving the exploitation of both the users' and the network's resources. In this paper, three different computation caching policies are proposed and evaluated. They are based on three main parameters: the popularity of offloadable tasks, the size of their inputs, and the size of their results. Numerical simulations show that good policies need to take all three parameters into account.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Face Frontalization for Cross-Pose Facial Expression Recognition.\n \n \n \n \n\n\n \n Engin, D.; Ecabert, C.; Ekenel, H. K.; and Thiran, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1795-1799, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FacePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553087,\n  author = {D. Engin and C. Ecabert and H. K. Ekenel and J. Thiran},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Face Frontalization for Cross-Pose Facial Expression Recognition},\n  year = {2018},\n  pages = {1795-1799},\n  abstract = {In this paper, we have explored the effect of pose normalization for cross-pose facial expression recognition. We have first presented an expression preserving face frontalization method. After face frontalization step, for facial expression representation and classification, we have employed both a traditional approach, by using hand-crafted features, namely local binary patterns, in combination with support vector machine classification and a relatively more recent approach based on convolutional neural networks. To evaluate the impact of face frontalization on facial expression recognition performance, we have conducted cross-pose, subject-independent expression recognition experiments using the BU3DFE database. Experimental results show that pose normalization improves the performance for cross-pose facial expression recognition. Especially, when local binary patterns in combination with support vector machine classifier is used, since this facial expression representation and classification does not handle pose variations, the obtained performance increase is significant. Convolutional neural networks-based approach is found to be more successful handling pose variations, when it is fine-tuned on a dataset that contains face images with varying pose angles. Its performance is further enhanced by benefiting from face frontalization.},\n  keywords = {face recognition;feature extraction;feedforward neural nets;image classification;image representation;learning (artificial intelligence);pose estimation;support vector machines;cross-pose facial expression recognition;pose normalization;expression preserving face;facial expression representation;local binary patterns;face frontalization;facial expression recognition performance;subject-independent expression recognition experiments;BU3DFE database;Face;Face recognition;Databases;Three-dimensional displays;Support vector machines;Feature extraction;Solid modeling;Expression preserving face frontalization;cross-pose facial expression recognition;convolutional neural networks},\n  doi = {10.23919/EUSIPCO.2018.8553087},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437822.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we have explored the effect of pose normalization for cross-pose facial expression recognition. We have first presented an expression preserving face frontalization method. After face frontalization step, for facial expression representation and classification, we have employed both a traditional approach, by using hand-crafted features, namely local binary patterns, in combination with support vector machine classification and a relatively more recent approach based on convolutional neural networks. To evaluate the impact of face frontalization on facial expression recognition performance, we have conducted cross-pose, subject-independent expression recognition experiments using the BU3DFE database. Experimental results show that pose normalization improves the performance for cross-pose facial expression recognition. Especially, when local binary patterns in combination with support vector machine classifier is used, since this facial expression representation and classification does not handle pose variations, the obtained performance increase is significant. Convolutional neural networks-based approach is found to be more successful handling pose variations, when it is fine-tuned on a dataset that contains face images with varying pose angles. Its performance is further enhanced by benefiting from face frontalization.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Extreme Learning Machine for Graph Signal Processing.\n \n \n \n \n\n\n \n Venkitaraman, A.; Chatterjee, S.; and Händel, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 136-140, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ExtremePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553088,\n  author = {A. Venkitaraman and S. Chatterjee and P. Händel},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Extreme Learning Machine for Graph Signal Processing},\n  year = {2018},\n  pages = {136-140},\n  abstract = {In this article, we improve extreme learning machines for regression tasks using a graph signal processing based regularization. We assume that the target signal for prediction or regression is a graph signal. With this assumption, we use the regularization to enforce that the output of an extreme learning machine is smooth over a given graph. Simulation results with real data confirm that such regularization helps significantly when the available training data is limited in size and corrupted by noise.},\n  keywords = {graph theory;learning (artificial intelligence);regression analysis;signal denoising;smoothing methods;extreme learning machine;graph signal processing;regression tasks;Signal processing;Neurons;Training;Europe;Training data;Smoothing methods;Task analysis},\n  doi = {10.23919/EUSIPCO.2018.8553088},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437807.pdf},\n}\n\n
\n
\n\n\n
\n In this article, we improve extreme learning machines for regression tasks using a graph signal processing based regularization. We assume that the target signal for prediction or regression is a graph signal. With this assumption, we use the regularization to enforce that the output of an extreme learning machine is smooth over a given graph. Simulation results with real data confirm that such regularization helps significantly when the available training data is limited in size and corrupted by noise.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Hierarchic Conv Nets Framework for Rare Sound Event Detection.\n \n \n \n\n\n \n Vesperini, F.; Droghini, D.; Principi, E.; Gabrielli, L.; and Squartini, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1497-1501, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553089,\n  author = {F. Vesperini and D. Droghini and E. Principi and L. Gabrielli and S. Squartini},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Hierarchic Conv Nets Framework for Rare Sound Event Detection},\n  year = {2018},\n  pages = {1497-1501},\n  abstract = {In this paper, we propose a system for rare sound event detection using a hierarchical and multi-scaled approach based on Convolutional Neural Networks (CNN). The task consists on detection of event onsets from artificially generated mixtures. Spectral features are extracted from frames of the acoustic signals, then a first event detection stage operates as binary classifier at frame-rate and it proposes to the second stage contiguous blocks of frames which are assumed to contain a sound event. The second stage refines the event detection of the prior network, discarding blocks that contain background sounds wrongly classified by the first stage. Finally, the effective onset time of the active event is obtained. The performance of the algorithm has been assessed with the material provided for the second task of the IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE) 2017. The achieved overall error rate and F-measure, resulting respectively equal to 0.22 and 88.50 % on the evaluation dataset, significantly outperforms the challenge baseline and the system guarantees improved generalization performance with a reduced number of free network parameters w.r.t. other competitive algorithms.},\n  keywords = {acoustic signal processing;audio signal processing;feature extraction;learning (artificial intelligence);neural nets;pattern classification;signal classification;support vector machines;background sounds;active event;hierarchic convNets framework;rare sound event detection;event onsets;event detection stage;stage contiguous blocks;convolutional neural networks;Event detection;Training;Feature extraction;Acoustics;Task analysis;Signal processing algorithms;Discrete wavelet transforms;Convolutional Neural Network;Sound Event Detection;DCASE2017;Linear Prediction;Discrete wavelet Transform},\n  doi = {10.23919/EUSIPCO.2018.8553089},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a system for rare sound event detection using a hierarchical and multi-scaled approach based on Convolutional Neural Networks (CNN). The task consists on detection of event onsets from artificially generated mixtures. Spectral features are extracted from frames of the acoustic signals, then a first event detection stage operates as binary classifier at frame-rate and it proposes to the second stage contiguous blocks of frames which are assumed to contain a sound event. The second stage refines the event detection of the prior network, discarding blocks that contain background sounds wrongly classified by the first stage. Finally, the effective onset time of the active event is obtained. The performance of the algorithm has been assessed with the material provided for the second task of the IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE) 2017. The achieved overall error rate and F-measure, resulting respectively equal to 0.22 and 88.50 % on the evaluation dataset, significantly outperforms the challenge baseline and the system guarantees improved generalization performance with a reduced number of free network parameters w.r.t. other competitive algorithms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Estimation of Bandlimited Signals on Graphs From Single Bit Recordings of Noisy Samples.\n \n \n \n \n\n\n \n Goyal, M.; and Kumar, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 902-906, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EstimationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553090,\n  author = {M. Goyal and A. Kumar},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimation of Bandlimited Signals on Graphs From Single Bit Recordings of Noisy Samples},\n  year = {2018},\n  pages = {902-906},\n  abstract = {Recently, there has been interest in graph signal processing. We consider the problem of a bandlimited graph signal estimation (denoising) from single-bit samples obtained at each graph node. The samples before quantization are affected by zero-mean additive white Gaussian noise of known variance. Using Banach's contraction mapping theorem on complete metric spaces, we develop a recursive algorithm for bandlimited graph signal estimation. For our recursive algorithm, we show that the expected mean-squared error between the graph signal and its estimate is proportional to the bandwidth of the signal and inversely proportional to the size of the graph. We also consider the problem of choosing the nodes to sample based on the properties of graph Laplacian eigenvectors to minimize the mean-squared error of the estimate. Numerical tests with synthetic signals demonstrate the effectiveness of our estimation algorithm for Erdős-Rényi (ER) graphs, Barabási-Albert (BA) graphs, and Minnesota road-network graph.},\n  keywords = {AWGN;bandlimited signals;complex networks;eigenvalues and eigenfunctions;graph theory;mean square error methods;signal processing;signal reconstruction;signal sampling;estimation algorithm;Erdős-Rényi graphs;Barabási-Albert graphs;Minnesota road-network graph;bandlimited signals;single bit recordings;noisy samples;graph signal processing;bandlimited graph signal estimation;single-bit samples;graph node;recursive algorithm;expected mean-squared error;graph Laplacian eigenvectors;synthetic signals;Estimation;AWGN;Europe;Quantization (signal);Measurement;Signal processing algorithms;graph signal processing;quantization;estimation;sampling},\n  doi = {10.23919/EUSIPCO.2018.8553090},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437984.pdf},\n}\n\n
\n
\n\n\n
\n Recently, there has been interest in graph signal processing. We consider the problem of a bandlimited graph signal estimation (denoising) from single-bit samples obtained at each graph node. The samples before quantization are affected by zero-mean additive white Gaussian noise of known variance. Using Banach's contraction mapping theorem on complete metric spaces, we develop a recursive algorithm for bandlimited graph signal estimation. For our recursive algorithm, we show that the expected mean-squared error between the graph signal and its estimate is proportional to the bandwidth of the signal and inversely proportional to the size of the graph. We also consider the problem of choosing the nodes to sample based on the properties of graph Laplacian eigenvectors to minimize the mean-squared error of the estimate. Numerical tests with synthetic signals demonstrate the effectiveness of our estimation algorithm for Erdős-Rényi (ER) graphs, Barabási-Albert (BA) graphs, and Minnesota road-network graph.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Decisions Under Binary Messaging Over Adaptive Networks.\n \n \n \n \n\n\n \n Marano, S.; and Sayed, A. H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 410-414, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DecisionsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553091,\n  author = {S. Marano and A. H. Sayed},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Decisions Under Binary Messaging Over Adaptive Networks},\n  year = {2018},\n  pages = {410-414},\n  abstract = {We consider an adaptive network made of interconnected agents engaged in a binary decision task. It is assumed that the agents cannot deliver full-precision messages to their neighbors, but only binary messages. For this scenario, a modified version of the ATC diffusion rule for the agent state evolution is proposed with improved decision performance under adaptive learning scenarios. An approximate analytical characterization of the agents' state is derived, giving insight into the network behavior at steady-state and enabling numerical computation of the decision performance. Computer experiments show that the analytical characterization is accurate for a wide range of the parameters of interest.},\n  keywords = {decision making;learning (artificial intelligence);multi-agent systems;network theory (graphs);binary messaging;adaptive network;interconnected agents;binary decision task;full-precision messages;binary messages;ATC diffusion rule;agent state evolution;improved decision performance;adaptive learning scenarios;approximate analytical characterization;network behavior;steady-state;enabling numerical computation;Random variables;Adaptive systems;Europe;Steady-state;Quantization (signal);Convolution},\n  doi = {10.23919/EUSIPCO.2018.8553091},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434095.pdf},\n}\n\n
\n
\n\n\n
\n We consider an adaptive network made of interconnected agents engaged in a binary decision task. It is assumed that the agents cannot deliver full-precision messages to their neighbors, but only binary messages. For this scenario, a modified version of the ATC diffusion rule for the agent state evolution is proposed with improved decision performance under adaptive learning scenarios. An approximate analytical characterization of the agents' state is derived, giving insight into the network behavior at steady-state and enabling numerical computation of the decision performance. Computer experiments show that the analytical characterization is accurate for a wide range of the parameters of interest.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-Complexity RLS Algorithms for the Identification of Bilinear Forms.\n \n \n \n \n\n\n \n Elisei-Iliescu, C.; Stanciu, C.; Paleologu, C.; Anghel, C.; Ciochină, S.; and Benesty, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 455-459, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Low-ComplexityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553092,\n  author = {C. Elisei-Iliescu and C. Stanciu and C. Paleologu and C. Anghel and S. Ciochină and J. Benesty},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Low-Complexity RLS Algorithms for the Identification of Bilinear Forms},\n  year = {2018},\n  pages = {455-459},\n  abstract = {The identification of bilinear forms is a challenging problem since its parameter space may be very large and the adaptive filters should be able to cope with this aspect. Recently, the recursive least-squares tailored for bilinear forms (namely RLS-BF) was developed in this context. In order to reduce its computational complexity, two versions based on the dichotomous coordinate descent (DCD) method are proposed in this paper. Simulation results indicate the good performance of these algorithms, with appealing features for practical implementations.},\n  keywords = {adaptive filters;computational complexity;filtering theory;least squares approximations;recursive estimation;low-complexity RLS algorithms;bilinear forms;parameter space;adaptive filters;RLS-BF;computational complexity;recursive least-squares;dichotomous coordinate descent method;DCD method;Mathematical model;Europe;Signal processing algorithms;Computational complexity;Signal processing;Correlation;Adaptive filter;recursive least-squares algorithm;dichotomous coordinate descent;bilinear forms;system identification;multiple-input/single-output system},\n  doi = {10.23919/EUSIPCO.2018.8553092},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436036.pdf},\n}\n\n
\n
\n\n\n
\n The identification of bilinear forms is a challenging problem since its parameter space may be very large and the adaptive filters should be able to cope with this aspect. Recently, the recursive least-squares tailored for bilinear forms (namely RLS-BF) was developed in this context. In order to reduce its computational complexity, two versions based on the dichotomous coordinate descent (DCD) method are proposed in this paper. Simulation results indicate the good performance of these algorithms, with appealing features for practical implementations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Extended Kalman Filter for RTF Estimation in Dual-Microphone Smartphones.\n \n \n \n \n\n\n \n Martín-Doñas, J. M.; López-Espejo, I.; Gomez, A. M.; and Peinado, A. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2474-2478, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553093,\n  author = {J. M. Martín-Doñas and I. López-Espejo and A. M. Gomez and A. M. Peinado},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An Extended Kalman Filter for RTF Estimation in Dual-Microphone Smartphones},\n  year = {2018},\n  pages = {2474-2478},\n  abstract = {The performance of speech beamformers relies on a good estimation of the relative transfer function (RTF) between the captured clean speech at each microphone. Most of the proposed RTF estimators make assumptions about the clean speech statistics or need a joint estimation of the RTF and the signal statistics. In this work we propose a minimum mean square error (MMSE) estimation of the RTF in an extended Kalman filter (eKF) framework. Our method exploits the knowledge about the RTF and noise statistics with no assumptions about the clean speech statistics. The proposed approach is evaluated when employed in combination with minimum variance distortionless response (MVDR) beamforming in a dual-microphone smartphone. To this end, a database of simulated dual-channel noisy speech recordings on a smartphone was used. Experimental results show that our approach achieves the most accurate RTF estimates among the evaluated methods, yielding less speech distortion and better intelligibility while competitive perceptual quality performance is obtained.},\n  keywords = {array signal processing;Kalman filters;least mean squares methods;microphone arrays;nonlinear filters;smart phones;speech processing;statistical analysis;wireless channels;signal statistics;minimum mean square error estimation;extended Kalman filter framework;noise statistics;dual-microphone smartphone;simulated dual-channel noisy speech recordings;speech distortion;RTF estimation;speech beamformers;relative transfer function estimation;minimum variance distortionless response beamforming;speech statistics;MMSE estimation;eKF framework;MVDR beamforming;perceptual quality performance;Estimation;Microphones;Smart phones;Kalman filters;Signal processing algorithms;Noise measurement;Atmospheric modeling;Relative Transfer Function;Extended Kalman Filter;Beamforming;Dual-microphone;Smartphone},\n  doi = {10.23919/EUSIPCO.2018.8553093},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437121.pdf},\n}\n\n
\n
\n\n\n
\n The performance of speech beamformers relies on a good estimation of the relative transfer function (RTF) between the captured clean speech at each microphone. Most of the proposed RTF estimators make assumptions about the clean speech statistics or need a joint estimation of the RTF and the signal statistics. In this work we propose a minimum mean square error (MMSE) estimation of the RTF in an extended Kalman filter (eKF) framework. Our method exploits the knowledge about the RTF and noise statistics with no assumptions about the clean speech statistics. The proposed approach is evaluated when employed in combination with minimum variance distortionless response (MVDR) beamforming in a dual-microphone smartphone. To this end, a database of simulated dual-channel noisy speech recordings on a smartphone was used. Experimental results show that our approach achieves the most accurate RTF estimates among the evaluated methods, yielding less speech distortion and better intelligibility while competitive perceptual quality performance is obtained.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A Recursive Expectation-Maximization Algorithm for Online Multi-Microphone Noise Reduction.\n \n \n \n\n\n \n Schwartz, O.; and Gannot, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1542-1546, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553094,\n  author = {O. Schwartz and S. Gannot},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Recursive Expectation-Maximization Algorithm for Online Multi-Microphone Noise Reduction},\n  year = {2018},\n  pages = {1542-1546},\n  abstract = {Speech signals, captured by a microphone array mounted to a smart loudspeaker device, can be contaminated by ambient noise. In this paper, we present an online multichannel algorithm, based on the recursive EM (REM) procedure, to suppress ambient noise and enhance the speech signal. In the E-step of the proposed algorithm, a multichannel Wiener filter (MCWF) is applied to enhance the speech signal. The MCWF parameters, that is, the power spectral density (PSD) of the anechoic speech, the steering vector, and the PSD matrix of the noise, are estimated in the M-step. The proposed algorithm is specifically suitable for online applications since it uses only past and current observations and requires no iterations. To evaluate the proposed algorithm we used two sets of measurements. In the first set, static scenarios were generated by convolving speech utterances with real room impulse responses (RIRs) recorded in our acoustic lab with reverberation time set to 0.16 s and several signal to directional noise ratio (SDNR) levels. The second set was used to evaluate dynamic scenarios by using real recordings acquired by CEVA “smart and connected” development platform. Two practical use cases were evaluated: 1) estimating the steering vector with a known noise PSD matrix and 2) estimating the noise PSD matrix with a known steering vector. In both use cases, the proposed algorithm outperforms baseline multichannel denoising algorithms.},\n  keywords = {expectation-maximisation algorithm;loudspeakers;microphone arrays;recursive estimation;reverberation;signal denoising;speech enhancement;transient response;Wiener filters;recursive expectation-maximization algorithm;online multimicrophone noise reduction;speech signal;microphone array;smart loudspeaker device;ambient noise;online multichannel algorithm;recursive EM procedure;multichannel Wiener filter;MCWF parameters;power spectral density;anechoic speech;online applications;speech utterances;reverberation time set;baseline multichannel denoising algorithms;noise PSD matrix;steering vector;CEVA smart and connected development platform;signal to directional noise ratio levels;room impulse responses;time 0.16 s;Signal processing algorithms;Microphones;Noise reduction;Direction-of-arrival estimation;Speech enhancement;Estimation;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553094},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Speech signals, captured by a microphone array mounted to a smart loudspeaker device, can be contaminated by ambient noise. In this paper, we present an online multichannel algorithm, based on the recursive EM (REM) procedure, to suppress ambient noise and enhance the speech signal. In the E-step of the proposed algorithm, a multichannel Wiener filter (MCWF) is applied to enhance the speech signal. The MCWF parameters, that is, the power spectral density (PSD) of the anechoic speech, the steering vector, and the PSD matrix of the noise, are estimated in the M-step. The proposed algorithm is specifically suitable for online applications since it uses only past and current observations and requires no iterations. To evaluate the proposed algorithm we used two sets of measurements. In the first set, static scenarios were generated by convolving speech utterances with real room impulse responses (RIRs) recorded in our acoustic lab with reverberation time set to 0.16 s and several signal to directional noise ratio (SDNR) levels. The second set was used to evaluate dynamic scenarios by using real recordings acquired by CEVA “smart and connected” development platform. Two practical use cases were evaluated: 1) estimating the steering vector with a known noise PSD matrix and 2) estimating the noise PSD matrix with a known steering vector. In both use cases, the proposed algorithm outperforms baseline multichannel denoising algorithms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A New Approach to Sample Entropy of Multi-channel Signals: Application to EEG Signals.\n \n \n \n \n\n\n \n El Sayed Hussein Jomaa, M.; Van Bogaert, P.; Jrad, N.; Colominas, M. A.; and Humeau-Heurtier, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1945-1949, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553095,\n  author = {M. {El Sayed Hussein Jomaa} and P. {Van Bogaert} and N. Jrad and M. A. Colominas and A. Humeau-Heurtier},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A New Approach to Sample Entropy of Multi-channel Signals: Application to EEG Signals},\n  year = {2018},\n  pages = {1945-1949},\n  abstract = {In this paper, we propose a new algorithm to calculate sample entropy of multivariate data. Over the existing method, the one proposed here has the advantage of maintaining good results as the number of channels increases. The new and already-existing algorithms were applied on multivariate white Gaussian noise signals, pink noise signals, and mixtures of both. For high number of channels, the existing method failed to show that white noise is always the most irregular whereas the proposed method always had the entropy of white noise the highest. Application of both algorithms on MIX process signals also confirmed the ability of the proposed method to handle larger number of channels without risking erroneous results. We also applied the proposed algorithm on EEG data from epileptic patients before and after treatments. The results showed an increase in entropy values after treatment in the regions where the focus was localized. This goes in the same way as the medical point of view that indicated a better health state for these patients.},\n  keywords = {electroencephalography;entropy;Gaussian noise;health care;medical signal processing;patient treatment;white noise;sample entropy;multichannel signals;EEG signals;pink noise signals;white gaussian noise signals;epileptic patients;medical point;patient health state;Entropy;1/f noise;Electroencephalography;Time series analysis;Epilepsy;Voltage control;Europe;sample entropy;multivariate;complexity;EEG;epilepsy},\n  doi = {10.23919/EUSIPCO.2018.8553095},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437339.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a new algorithm to calculate sample entropy of multivariate data. Over the existing method, the one proposed here has the advantage of maintaining good results as the number of channels increases. The new and already-existing algorithms were applied on multivariate white Gaussian noise signals, pink noise signals, and mixtures of both. For high number of channels, the existing method failed to show that white noise is always the most irregular whereas the proposed method always had the entropy of white noise the highest. Application of both algorithms on MIX process signals also confirmed the ability of the proposed method to handle larger number of channels without risking erroneous results. We also applied the proposed algorithm on EEG data from epileptic patients before and after treatments. The results showed an increase in entropy values after treatment in the regions where the focus was localized. This goes in the same way as the medical point of view that indicated a better health state for these patients.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Deep Convolutional Neural Networks for Chaos Identification in Signal Processing.\n \n \n \n \n\n\n \n Makarenko, A. V.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1467-1471, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553098,\n  author = {A. V. Makarenko},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep Convolutional Neural Networks for Chaos Identification in Signal Processing},\n  year = {2018},\n  pages = {1467-1471},\n  abstract = {This paper demonstrates effective capabilities of a relatively simple deep convolutional neural network in estimating the Lyapunov exponent and detecting chaotic signals. A major difference between this study and existing research is that our networks take raw data as input, automatically generate a selection of informative features, make a direct estimation of the Lyapunov exponent and form a decision whether a chaotic signal is present. The proposed method does not require attractor reconstruction. It also can be used for processing relatively short signals - in the experiment described here the signal length is 1024 sequence elements. The study has demonstrated that deep convolutional neural networks are effective in applications involving chaotic signals (down to narrowband or broadband stochastic processes), as well as distinct patterns, and can, therefore, be used for a number of signal processing tasks.},\n  keywords = {chaos;Lyapunov methods;neural nets;signal processing;stochastic processes;deep convolutional neural networks;chaos identification;effective capabilities;relatively simple deep convolutional neural network;Lyapunov exponent;detecting chaotic signals;chaotic signal;relatively short signals;signal length;signal processing tasks;Chaotic communication;Convolutional neural networks;Estimation;Convolution;Training;deep convolutional neural networks;chaos identification;Lyapunov exponent;time series;logistic map},\n  doi = {10.23919/EUSIPCO.2018.8553098},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436762.pdf},\n}\n\n
\n
\n\n\n
\n This paper demonstrates the effective capabilities of a relatively simple deep convolutional neural network in estimating the Lyapunov exponent and detecting chaotic signals. A major difference between this study and existing research is that our networks take raw data as input, automatically generate a selection of informative features, directly estimate the Lyapunov exponent, and decide whether a chaotic signal is present. The proposed method does not require attractor reconstruction. It can also be used to process relatively short signals - in the experiment described here, the signal length is 1024 sequence elements. The study has demonstrated that deep convolutional neural networks are effective in applications involving chaotic signals (down to narrowband or broadband stochastic processes), as well as distinct patterns, and can therefore be used for a number of signal processing tasks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Convolutional Neural Networks Without Any Checkerboard Artifacts.\n \n \n \n \n\n\n \n Sugawara, Y.; Shiota, S.; and Kiya, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1317-1321, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ConvolutionalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553099,\n  author = {Y. Sugawara and S. Shiota and H. Kiya},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Convolutional Neural Networks Without Any Checkerboard Artifacts},\n  year = {2018},\n  pages = {1317-1321},\n  abstract = {It is well-known that a number of convolutional neural networks (CNNs) generate checkerboard artifacts in both of two processes: forward-propagation of upsampling layers and backpropagation of convolutional layers. A condition to avoid the checkerboard artifacts is proposed in this paper. So far, checkerboard artifacts have been mainly studied for linear multirate systems, but the condition to avoid checkerboard artifacts can not be applied to CNNs due to the non-linearity of CNNs. We extend the avoiding condition for CNNs, and apply the proposed structure to some typical CNNs to confirm the effectiveness of the new scheme. Experiment results demonstrate that the proposed structure can perfectly avoid to generate checkerboard artifacts, while keeping excellent properties that the CNNs have.},\n  keywords = {backpropagation;feedforward neural nets;convolutional neural networks;checkerboard artifacts;linear multirate systems;upsampling layer forward-propagation;CNN;Convolution;Deconvolution;Image resolution;Linear systems;Signal resolution;Steady-state;Europe;Convolutional Neural Networks;Checkerboard Artifacts},\n  doi = {10.23919/EUSIPCO.2018.8553099},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436576.pdf},\n}\n\n
\n
\n\n\n
\n It is well known that a number of convolutional neural networks (CNNs) generate checkerboard artifacts in two processes: forward-propagation through upsampling layers and backpropagation through convolutional layers. A condition to avoid checkerboard artifacts is proposed in this paper. So far, checkerboard artifacts have mainly been studied for linear multirate systems, but the existing avoidance condition cannot be applied to CNNs due to their non-linearity. We extend the avoidance condition to CNNs, and apply the proposed structure to some typical CNNs to confirm the effectiveness of the new scheme. Experimental results demonstrate that the proposed structure completely avoids generating checkerboard artifacts, while preserving the excellent properties of the CNNs.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Variance-Reduced Learning Over Multi-Agent Networks.\n \n \n \n \n\n\n \n Yuan, K.; Ying, B.; and Sayed, A. H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 415-419, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553100,\n  author = {K. Yuan and B. Ying and A. H. Sayed},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Variance-Reduced Learning Over Multi-Agent Networks},\n  year = {2018},\n  pages = {415-419},\n  abstract = {This work develops a fully decentralized variance-reduced learning algorithm for multi-agent networks where nodes store and process the data locally and are only allowed to communicate with their immediate neighbors. In the proposed algorithm, there is no need for a central or master unit while the objective is to enable the dispersed nodes to learn the exact global model despite their limited localized interactions. The resulting algorithm is shown to have low memory requirement, guaranteed linear convergence, robustness to failure of links or nodes and scalability to the network size. Moreover, the decentralized nature of the solution makes large-scale machine learning problems more tractable and also scalable since data is stored and processed locally at the nodes.},\n  keywords = {learning (artificial intelligence);minimisation;multi-agent systems;multiagent networks;fully decentralized variance-reduced learning algorithm;nodes store;immediate neighbors;central master unit;dispersed nodes;exact global model;localized interactions;low memory requirement;network size;decentralized nature;linear convergence;large-scale machine learning problems;Signal processing algorithms;Convergence;Memory management;Europe;Signal processing;Indexes;Optimization;diffusion strategy;variance-reduction;stochastic gradient descent;memory efficiency;SVRG;SAGA;AVRG},\n  doi = {10.23919/EUSIPCO.2018.8553100},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435623.pdf},\n}\n\n
\n
\n\n\n
\n This work develops a fully decentralized variance-reduced learning algorithm for multi-agent networks, in which nodes store and process data locally and are only allowed to communicate with their immediate neighbors. The proposed algorithm requires no central or master unit; the objective is to enable the dispersed nodes to learn the exact global model despite their limited localized interactions. The resulting algorithm is shown to have a low memory requirement, guaranteed linear convergence, robustness to failure of links or nodes, and scalability to the network size. Moreover, the decentralized nature of the solution makes large-scale machine learning problems more tractable and scalable, since data is stored and processed locally at the nodes.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Noisy Encrypted Image Correction based on Shannon Entropy Measurement in Pixel Blocks of Very Small Size.\n \n \n \n \n\n\n \n Puteaux, P.; and Puech, W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 161-165, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"NoisyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553101,\n  author = {P. Puteaux and W. Puech},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Noisy Encrypted Image Correction based on Shannon Entropy Measurement in Pixel Blocks of Very Small Size},\n  year = {2018},\n  pages = {161-165},\n  abstract = {Many techniques have been presented to protect image content confidentiality. The owner of an image encrypts it using a key and transmits the encrypted image across a network. If the recipient is authorized to access the original content of the image, he can reconstruct it losslessly. However, if during the transmission the encrypted image is noised, some parts of the image can not be deciphered. In order to localize and correct these errors, we propose an approach based on the local Shannon entropy measurement. We first analyze this measure as a function of the block-size. We provide then a full description of our blind error localization and removal process. Experimental results show that the proposed approach, based on local entropy, can be used in practice to correct noisy encrypted images, even with blocks of very small size.},\n  keywords = {cryptography;entropy;image processing;local Shannon entropy measurement;image content confidentiality;pixel blocks;noisy encrypted image;local entropy;blind error localization;block-size;Entropy;Noise measurement;Encryption;Image reconstruction;Forward error correction;Europe;Image encryption;image denoising;statistical analysis;multimedia security},\n  doi = {10.23919/EUSIPCO.2018.8553101},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439067.pdf},\n}\n\n
\n
\n\n\n
\n Many techniques have been presented to protect image content confidentiality. The owner of an image encrypts it using a key and transmits the encrypted image across a network. If the recipient is authorized to access the original content of the image, they can reconstruct it losslessly. However, if the encrypted image is corrupted by noise during transmission, some parts of the image cannot be deciphered. In order to localize and correct these errors, we propose an approach based on the local Shannon entropy measurement. We first analyze this measure as a function of the block size. We then provide a full description of our blind error localization and removal process. Experimental results show that the proposed approach, based on local entropy, can be used in practice to correct noisy encrypted images, even with blocks of very small size.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Performance of Nested vs. Non-Nested SVM Cross-Validation Methods in Visual BCI: Validation Study.\n \n \n \n \n\n\n \n Abdulaal, M. J.; Casson, A. J.; and Gaydecki, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1680-1684, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PerformancePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553102,\n  author = {M. J. Abdulaal and A. J. Casson and P. Gaydecki},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Performance of Nested vs. Non-Nested SVM Cross-Validation Methods in Visual BCI: Validation Study},\n  year = {2018},\n  pages = {1680-1684},\n  abstract = {Brain-Computer Interface (BCI) is a technology that utilizes brainwaves to link the brain with external machines for either medical analysis, or to improve quality of life such as control and communication for people affected with paralysis. The performance of BCI systems depends on classification accuracy, which influences the Information Transfer Rate. This motivates researchers to improve their classification accuracy as best possible. A bias problem in reporting accuracies by using non-nested cross-validation methods was thought to increase accuracy. The aim of this paper was to validate and quantify such a concept by using a low-cost commercial EEG recorder to classify visually evoking face vs scrambled pictures, and report high accuracy using non-nested cross validation. The algorithm employed Independent Component Analysis followed by feature extraction with sample covariance matrices. The data were then classified using Support Vector Machines. 
The accuracy was tested with nested and non-nested cross-validation methods; accuracies obtained were 63% and 76%, respectively.},\n  keywords = {brain-computer interfaces;covariance matrices;electroencephalography;feature extraction;independent component analysis;medical signal processing;pattern classification;support vector machines;brain-computer interface;information transfer rate;scrambled pictures;independent component analysis;nested SVM cross-validation methods;brainwaves;paralysis;feature extraction;covariance matrices;support vector machines;visually evoking face classification;low-cost commercial EEG recorder;medical analysis;external machines;visual BCI;nonnested SVM cross-validation methods;Electroencephalography;Electrodes;Covariance matrices;Software;Face;Testing;Support vector machines},\n  doi = {10.23919/EUSIPCO.2018.8553102},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437154.pdf},\n}\n\n
\n
\n\n\n
\n Brain-Computer Interface (BCI) technology utilizes brainwaves to link the brain with external machines, either for medical analysis or to improve quality of life, such as providing control and communication for people affected by paralysis. The performance of BCI systems depends on classification accuracy, which influences the Information Transfer Rate. This motivates researchers to improve their classification accuracy as much as possible. Reporting accuracies obtained with non-nested cross-validation methods is thought to introduce an optimistic bias. The aim of this paper was to validate and quantify this effect by using a low-cost commercial EEG recorder to classify visually evoked responses to face vs. scrambled pictures, and to report the high accuracy obtained with non-nested cross-validation. The algorithm employed Independent Component Analysis followed by feature extraction with sample covariance matrices. The data were then classified using Support Vector Machines. The accuracy was tested with nested and non-nested cross-validation methods; the accuracies obtained were 63% and 76%, respectively.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Acoustic Beamforming in Front of a Reflective Plane.\n \n \n \n \n\n\n \n Stefanakis, N.; Delikaris-Manias, S.; and Mouchtaris, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 26-30, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AcousticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553103,\n  author = {N. Stefanakis and S. Delikaris-Manias and A. Mouchtaris},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Acoustic Beamforming in Front of a Reflective Plane},\n  year = {2018},\n  pages = {26-30},\n  abstract = {In this paper, we consider the problem of beamforming with a planar microphone array placed in front of a wall of the room, so that the microphone array plane is perpendicular to that of the wall. While this situation is very likely to occur in a real life problem, the reflections introduced by the adjacent wall can be the cause of a serious mismatch between the actual acoustic paths and the traditionally employed free-field propagation model. We present an adaptation from the free-field to the so-called reflection-aware propagation model, that exploits an in-situ estimation of the complex and frequency-dependent wall reflectivity. Results presented in a real environment demonstrate that the proposed approach may bring significant improvements to the beamforming process compared to the free-field propagation model, as well as compared to other reflection-aware models that have been recently proposed.},\n  keywords = {acoustic signal processing;array signal processing;estimation theory;microphone arrays;acoustic beamforming;reflective plane;planar microphone array;microphone array plane;adjacent wall;free-field propagation model;reflection-aware propagation model;frequency-dependent wall reflectivity;beamforming process;reflection-aware models;acoustic paths;Array signal processing;Microphone arrays;Acoustics;Estimation;Acoustic arrays;Frequency estimation},\n  doi = {10.23919/EUSIPCO.2018.8553103},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439749.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we consider the problem of beamforming with a planar microphone array placed in front of a wall of a room, such that the microphone array plane is perpendicular to that of the wall. While this situation is very likely to occur in real-life problems, the reflections introduced by the adjacent wall can cause a serious mismatch between the actual acoustic paths and the traditionally employed free-field propagation model. We present an adaptation from the free-field model to a so-called reflection-aware propagation model, which exploits an in-situ estimate of the complex and frequency-dependent wall reflectivity. Results obtained in a real environment demonstrate that the proposed approach may bring significant improvements to the beamforming process compared to the free-field propagation model, as well as to other recently proposed reflection-aware models.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improved ADMM-Based Algorithm for Multi-Group Multicast Beamforming in Large-Scale Antenna Systems.\n \n \n \n \n\n\n \n Demir, Ö. T.; and Tuncer, T. E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 652-656, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553104,\n  author = {Ö. T. Demir and T. E. Tuncer},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Improved ADMM-Based Algorithm for Multi-Group Multicast Beamforming in Large-Scale Antenna Systems},\n  year = {2018},\n  pages = {652-656},\n  abstract = {In this paper, we consider beamformer design for multi-group multicasting where a common message is transmitted to the users in each group. We propose a novel effective alternating direction method of multipliers (ADMM) formulation in order to reduce the computational complexity of the existing state-of-the-art algorithm for multi-group multicast beamforming with per-antenna power constraints. The proposed approach is advantageous for the scenarios where the dimension of the channel matrix is less than the number of antennas at the base station. This case is always valid when the number of users is less than that of antennas, which is a practical situation in massive-MIMO systems. Simulation results show that the proposed method performs the same with significantly less computational time compared to the benchmark algorithm.},\n  keywords = {antenna arrays;array signal processing;computational complexity;matrix algebra;MIMO communication;multicast communication;multipliers formulation;existing state-of-the-art algorithm;multigroup multicast beamforming;per-antenna power constraints;improved ADMM-based algorithm;large-scale antenna systems;alternating direction method of multipliers;computational complexity;channel matrix;massive-MIMO systems;Multicast algorithms;Signal processing algorithms;Antennas;Array signal processing;Convex functions;Optimization;Interference;Multi-group multicast beamforming;ADMM;large arrays},\n  doi = {10.23919/EUSIPCO.2018.8553104},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439422.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we consider beamformer design for multi-group multicasting, where a common message is transmitted to the users in each group. We propose a novel and effective alternating direction method of multipliers (ADMM) formulation in order to reduce the computational complexity of the existing state-of-the-art algorithm for multi-group multicast beamforming with per-antenna power constraints. The proposed approach is advantageous in scenarios where the dimension of the channel matrix is less than the number of antennas at the base station. This is always the case when the number of users is less than the number of antennas, which is a practical situation in massive-MIMO systems. Simulation results show that the proposed method achieves the same performance as the benchmark algorithm with significantly less computational time.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Least Squares and Maximum Likelihood Estimation of Mixed Spectra.\n \n \n \n \n\n\n \n Brynolfsson, J.; Swärd, J.; Jakobsson, A.; and Sandsten, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2345-2349, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"LeastPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553105,\n  author = {J. Brynolfsson and J. Swärd and A. Jakobsson and M. Sandsten},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Least Squares and Maximum Likelihood Estimation of Mixed Spectra},\n  year = {2018},\n  pages = {2345-2349},\n  abstract = {In this paper, we propose a novel 1-D spectral estimator for signals with mixed spectra. The proposed method is partly based on the recently introduced smooth spectral estimator LIMES, in which the smoothness is accounted for by assuming linearity within predefined segments of the spectrum. The proposed method utilizes this formulation but also allows segments to change size to better estimate the spectrum, thereby allowing for the estimation of spectra that are neither completely smooth nor sparse in frequency, but rather contain a mixture of such components. Using simulated data, we illustrate the performance of the proposed estimator, comparing it to other recent spectral estimation techniques.},\n  keywords = {least squares approximations;maximum likelihood estimation;spectral analysis;least squares;maximum likelihood estimation;mixed spectra;1-D spectral estimator;spectral estimation techniques;smooth spectral estimator;Signal processing algorithms;Maximum likelihood estimation;Frequency estimation;Europe;Fourier transforms;spectral estimation;time-series;LIMES;covariance-fitting},\n  doi = {10.23919/EUSIPCO.2018.8553105},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437353.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a novel 1-D spectral estimator for signals with mixed spectra. The proposed method is partly based on the recently introduced smooth spectral estimator LIMES, in which the smoothness is accounted for by assuming linearity within predefined segments of the spectrum. The proposed method utilizes this formulation but also allows segments to change size to better estimate the spectrum, thereby allowing for the estimation of spectra that are neither completely smooth nor sparse in frequency, but rather contain a mixture of such components. Using simulated data, we illustrate the performance of the proposed estimator, comparing it to other recent spectral estimation techniques.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Comparison of Audio Signal Preprocessing Methods for Deep Neural Networks on Music Tagging.\n \n \n \n \n\n\n \n Choi, K.; Fazekas, G.; Sandler, M.; and Cho, K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1870-1874, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553106,\n  author = {K. Choi and G. Fazekas and M. Sandler and K. Cho},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Comparison of Audio Signal Preprocessing Methods for Deep Neural Networks on Music Tagging},\n  year = {2018},\n  pages = {1870-1874},\n  abstract = {In this paper, we empirically investigate the effect of audio preprocessing on music tagging with deep neural networks. We perform comprehensive experiments involving audio preprocessing using different time-frequency representations, logarithmic magnitude compression, frequency weighting, and scaling. We show that many commonly used input preprocessing techniques are redundant except magnitude compression.},\n  keywords = {music;neural nets;time-frequency analysis;transfer functions;input preprocessing techniques;time-frequency representations;frequency weighting;logarithmic magnitude compression;comprehensive experiments;audio preprocessing;music tagging;deep neural networks;audio signal preprocessing methods;Training;Time-frequency analysis;Neural networks;Kernel;Standards;Europe;Tagging},\n  doi = {10.23919/EUSIPCO.2018.8553106},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434062.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we empirically investigate the effect of audio preprocessing on music tagging with deep neural networks. We perform comprehensive experiments involving audio preprocessing using different time-frequency representations, logarithmic magnitude compression, frequency weighting, and scaling. We show that many commonly used input preprocessing techniques are redundant except magnitude compression.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The Band-Sampled-Data collection for the search of continuous gravitational wave signals.\n \n \n \n \n\n\n \n Piccinni, O. J.; and Frasca, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2653-2657, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553107,\n  author = {O. J. Piccinni and S. Frasca},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {The Band-Sampled-Data collection for the search of continuous gravitational wave signals},\n  year = {2018},\n  pages = {2653-2657},\n  abstract = {The detection of gravitational wave signals in the LIGO and Virgo interferometric detector data, is possible thanks to the application and development of several data analysis techniques and to the improvement in the detector sensitivity. A particular type of signal, still undetected by the LIGO-Virgo collaboration, is the so-called continuous wave signal (CW). A CW signal is a quasi-monochromatic signal, deeply buried in the data and modulated by several physical effects. In order to extract the signal from the detector noisy data, several data handling and manipulations should be performed. In particular this type of gravitational wave is recovered using techniques from Fourier analysis and phase demodulations. In this work we will present a new data framework called Band-Sampled-Data (BSD) collection developed for all the searches of CW signals, and the main features of the data used in this context will be described.},\n  keywords = {data handling;Fourier analysis;gravitational wave detectors;gravitational waves;LIGO-Virgo collaboration;continuous wave signal;CW signal;quasimonochromatic signal;detector noisy data;data handling;phase demodulations;data framework;continuous gravitational wave signals;data analysis techniques;band-sampled-data collection;Time-frequency analysis;Detectors;Sensitivity;Data mining;Neutrons;Data analysis;Coherence;Continuous waves;LIGO-Virgo;reduced-analytic signal},\n  doi = {10.23919/EUSIPCO.2018.8553107},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437831.pdf},\n}\n\n
\n
\n\n\n
\n The detection of gravitational wave signals in the LIGO and Virgo interferometric detector data is possible thanks to the application and development of several data analysis techniques and to the improvement in detector sensitivity. A particular type of signal, still undetected by the LIGO-Virgo collaboration, is the so-called continuous wave (CW) signal. A CW signal is a quasi-monochromatic signal, deeply buried in the data and modulated by several physical effects. In order to extract the signal from the noisy detector data, several data handling and manipulation steps must be performed. In particular, this type of gravitational wave is recovered using techniques from Fourier analysis and phase demodulation. In this work we present a new data framework called the Band-Sampled-Data (BSD) collection, developed for all searches of CW signals, and we describe the main features of the data used in this context.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n GEVD Based Speech and Noise Correlation Matrix Estimation for Multichannel Wiener Filter Based Noise Reduction.\n \n \n \n \n\n\n \n Van Rompaey, R.; and Moonen, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2544-2548, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"GEVDPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553109,\n  author = {R. {Van Rompaey} and M. Moonen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {GEVD Based Speech and Noise Correlation Matrix Estimation for Multichannel Wiener Filter Based Noise Reduction},\n  year = {2018},\n  pages = {2544-2548},\n  abstract = {In a single speech source noise reduction scenario, the frequency domain correlation matrix of the speech signal is often assumed to be a rank-1 matrix. In multichannel Wiener filter (MWF) based noise reduction, this assumption may be used to define an optimization criterion to estimate the positive definite speech correlation matrix together with the noise correlation matrix, from sample `speech+noise' and `noise-only' correlation matrices. The estimated correlation matrices then define the MWF. In generalized eigenvalue decomposition (GEVD) based MWF, this optimization criterion involves a prewhitening with the sample `noise-only' correlation matrix, which in particular leads to a compact expression for the MWF. However, a more accurate form would include a prewhitening with the estimated noise correlation matrix instead of with the sample `noise-only' correlation matrix. Unfortunately this leads to a more difficult optimization problem, where the prewhitening indeed involves one of the optimization variables. 
In this paper, it is demonstrated that the modified optimization criterion, remarkably, leads to only minor modifications in the estimated correlation matrices and eventually the same MWF, which justifies the use of the original optimization criterion as a simpler substitute.},\n  keywords = {correlation methods;eigenvalues and eigenfunctions;matrix algebra;optimisation;speech processing;Wiener filters;single speech source noise reduction scenario;frequency domain correlation matrix;speech signal;rank-1 matrix;positive definite speech correlation matrix;noise-only correlation matrices;sample noise-only;estimated noise correlation matrix;GEVD;multichannel Wiener filter;generalized eigenvalue decomposition;MWF;speech-noise correlation matrix estimation;noise reduction;Correlation;Optimization;Matrix decomposition;Estimation;Eigenvalues and eigenfunctions;Speech processing;Noise reduction;Noise reduction;speech enhancement;Wiener filter;multichannel Wiener filter (MWF);generalized eigenvalue decomposition (GEVD)},\n  doi = {10.23919/EUSIPCO.2018.8553109},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437344.pdf},\n}\n\n
\n
\n\n\n
\n In a single speech source noise reduction scenario, the frequency domain correlation matrix of the speech signal is often assumed to be a rank-1 matrix. In multichannel Wiener filter (MWF) based noise reduction, this assumption may be used to define an optimization criterion to estimate the positive definite speech correlation matrix together with the noise correlation matrix, from sample `speech+noise' and `noise-only' correlation matrices. The estimated correlation matrices then define the MWF. In generalized eigenvalue decomposition (GEVD) based MWF, this optimization criterion involves a prewhitening with the sample `noise-only' correlation matrix, which in particular leads to a compact expression for the MWF. However, a more accurate form would include a prewhitening with the estimated noise correlation matrix instead of with the sample `noise-only' correlation matrix. Unfortunately, this leads to a more difficult optimization problem, where the prewhitening indeed involves one of the optimization variables. In this paper, it is demonstrated that the modified optimization criterion, remarkably, leads to only minor modifications in the estimated correlation matrices and eventually the same MWF, which justifies the use of the original optimization criterion as a simpler substitute.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Estimation of Missing Data in Fetal Heart Rate Signals Using Shift-Invariant Dictionary.\n \n \n \n \n\n\n \n Barzideh, F.; Urdal, J.; Hussein, K.; Engan, K.; Skretting, K.; Mdoe, P.; Kamala, B.; and Brunner, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 762-766, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EstimationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553110,\n  author = {F. Barzideh and J. Urdal and K. Hussein and K. Engan and K. Skretting and P. Mdoe and B. Kamala and S. Brunner},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimation of Missing Data in Fetal Heart Rate Signals Using Shift-Invariant Dictionary},\n  year = {2018},\n  pages = {762-766},\n  abstract = {In 2015, an estimated 1.3 million intrapartum stillbirths occurred, meaning that the fetus died during labour. The majority of these stillbirths occurred in low and middle income countries. With the introduction of affordable continuous fetal heart rate (FHR) monitors for use in these settings, the fetal well-being can be better monitored and health care personnel can potentially intervene at an earlier time if abnormalities in the FHR signal are detected. Additional information about the fetal health can be extracted from the fetal heart rate signals through signal processing and analysis. A challenge is, however, the large number of missing samples in the recorded FHR as fetal and maternal movement in addition to sensor displacement can cause data dropouts. Previously proposed methods perform well on estimation of short dropouts, but struggle with data from wearable devices with longer dropouts. Sparse representation and dictionary learning have been shown to be useful in the related problem of image inpainting. The recently proposed dictionary learning algorithm, SI-FSDL, learns shift-invariant dictionaries with long atoms, which could be beneficial for such time series signals with large dropout gaps. 
In this paper it is shown that using sparse representation with dictionaries learned by SI-FSDL on the FHR signals with missing samples provides a reconstruction with improved properties compared to previously used techniques.},\n  keywords = {cardiology;health care;learning (artificial intelligence);medical signal processing;obstetrics;patient monitoring;time series;shift-invariant dictionary;time series signals;FHR signal;missing samples;fetal heart rate signals;intrapartum stillbirths;health care personnel;fetal health;signal processing;recorded FHR;data dropouts;fetal heart rate monitors;missing data estimation;dictionary learning algorithm;Dictionaries;Fetal heart rate;Monitoring;Machine learning;Biomedical monitoring;Hospitals;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553110},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437643.pdf},\n}\n\n
\n
\n\n\n
\n In 2015, an estimated 1.3 million intrapartum stillbirths occurred, meaning that the fetus died during labour. The majority of these stillbirths occurred in low and middle income countries. With the introduction of affordable continuous fetal heart rate (FHR) monitors for use in these settings, fetal well-being can be better monitored and health care personnel can potentially intervene at an earlier time if abnormalities in the FHR signal are detected. Additional information about fetal health can be extracted from the fetal heart rate signals through signal processing and analysis. A challenge, however, is the large number of missing samples in the recorded FHR, as fetal and maternal movement in addition to sensor displacement can cause data dropouts. Previously proposed methods perform well on the estimation of short dropouts, but struggle with data from wearable devices with longer dropouts. Sparse representation and dictionary learning have been shown to be useful in the related problem of image inpainting. The recently proposed dictionary learning algorithm, SI-FSDL, learns shift-invariant dictionaries with long atoms, which could be beneficial for such time series signals with large dropout gaps. In this paper it is shown that using sparse representation with dictionaries learned by SI-FSDL on FHR signals with missing samples provides a reconstruction with improved properties compared to previously used techniques.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effective Network Area for Efficient Simulation of Finite Area Wireless Networks.\n \n \n \n \n\n\n \n Fereydooni, M.; Müller, M. K.; and Rupp, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1512-1516, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EffectivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553111,\n  author = {M. Fereydooni and M. K. Müller and M. Rupp},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Effective Network Area for Efficient Simulation of Finite Area Wireless Networks},\n  year = {2018},\n  pages = {1512-1516},\n  abstract = {A wide-spread approach for modeling and for the performance evaluation of wireless networks is employing Poisson point processes (PPPs). There, the general assumption is that an infinite number of nodes is distributed in the network environment. This is not problematic in a purely analytic framework, but is unfeasible when Monte-Carlo system level simulations are employed for network evaluation. In order to obtain results in a finite simulation-duration, the simulation area also has to be finite, which leads to a deviation of performance results when compared to analytical approach with infinitely many base stations. This deviation however is only small when the simulation area is sufficiently large. In this paper we discuss which part of the simulation area yields results whose deviation from the analytical results does not exceed a predefined threshold. We present an approximation that is based on the base station geometry and depends on the base station density. 
Additionally, we discuss the relationship of the minimal simulation area (the smallest area that allows to obtain reliable coverage results) and the reduced simulation overhead by increasing the simulation area.},\n  keywords = {approximation theory;Monte Carlo methods;radio networks;stochastic processes;performance evaluation;Poisson point processes;Monte-Carlo system level simulations;finite area wireless network simulation;PPP;base station geometry;base station density;Analytical models;Base stations;Monte Carlo methods;Cellular networks;Europe;Signal processing;Reliability;System level simulations;finite area networks;point processes;wireless cellular networks},\n  doi = {10.23919/EUSIPCO.2018.8553111},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437252.pdf},\n}\n\n
\n
\n\n\n
\n A widespread approach for modeling and performance evaluation of wireless networks is to employ Poisson point processes (PPPs). There, the general assumption is that an infinite number of nodes is distributed in the network environment. This is not problematic in a purely analytic framework, but is infeasible when Monte-Carlo system level simulations are employed for network evaluation. In order to obtain results in a finite simulation duration, the simulation area also has to be finite, which leads to a deviation of performance results when compared to the analytical approach with infinitely many base stations. This deviation, however, is small only when the simulation area is sufficiently large. In this paper we discuss which part of the simulation area yields results whose deviation from the analytical results does not exceed a predefined threshold. We present an approximation that is based on the base station geometry and depends on the base station density. Additionally, we discuss the relationship between the minimal simulation area (the smallest area that allows one to obtain reliable coverage results) and the reduced simulation overhead from increasing the simulation area.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectral Image Fusion from Compressive Projections Using Total-Variation and Low-Rank Regularizations.\n \n \n \n \n\n\n \n Gelvez, T.; and Arguello, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1985-1989, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SpectralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553112,\n  author = {T. Gelvez and H. Arguello},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Spectral Image Fusion from Compressive Projections Using Total-Variation and Low-Rank Regularizations},\n  year = {2018},\n  pages = {1985-1989},\n  abstract = {This work presents a spectral image fusion approach from compressive projections based on the linear mixture model that exploits the endmember matrix low dimensional structure. The formulated inverse problem includes a total variation term over the abundance matrix to promote smoothness, but also a low rank term over the endmember matrix to promote the low rank structure. The optimization problem is solved using an alternating direction method of multipliers (ADMM) approach to independently estimate the abundance and endmember matrices. Simulations show that the fusion problem can be effectively solved from compressive projections, and the inclusion of the low rank regularization increases the reconstruction quality.},\n  keywords = {image fusion;image reconstruction;inverse problems;matrix algebra;optimisation;low-rank regularization structure;total-variation regularizations;alternating direction method of multipliers approach;ADMM approach;spectral image fusion approach;multipliers approach;optimization problem;formulated inverse problem;endmember matrix low dimensional structure;linear mixture model;compressive projections;Signal processing algorithms;Image coding;Inverse problems;TV;Spatial resolution;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553112},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437426.pdf},\n}\n\n
\n
\n\n\n
\n This work presents a spectral image fusion approach from compressive projections, based on the linear mixture model, that exploits the low dimensional structure of the endmember matrix. The formulated inverse problem includes both a total variation term over the abundance matrix to promote smoothness and a low rank term over the endmember matrix to promote the low rank structure. The optimization problem is solved using an alternating direction method of multipliers (ADMM) approach to independently estimate the abundance and endmember matrices. Simulations show that the fusion problem can be effectively solved from compressive projections, and that the inclusion of the low rank regularization increases the reconstruction quality.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Blind Multi-class Ensemble Learning with Dependent Classifiers.\n \n \n \n \n\n\n \n Traganitis, P. A.; and Giannakis, G. B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2025-2029, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BlindPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553113,\n  author = {P. A. Traganitis and G. B. Giannakis},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Blind Multi-class Ensemble Learning with Dependent Classifiers},\n  year = {2018},\n  pages = {2025-2029},\n  abstract = {In recent years, advances in pattern recognition and data analytics have spurred the development of a plethora of machine learning algorithms and tools. However, as each algorithm exhibits different behavior for different types of data, one is motivated to judiciously fuse multiple algorithms in order to find the “best” performing one, for a given dataset. Ensemble learning aims to create such a high-performance meta-learner, by combining the outputs from multiple algorithms. The present work introduces a simple blind scheme for learning from ensembles of classifiers. Blind refers to the combiner who has no knowledge of the ground-truth labels that each classifier has been trained on. While most current works presume that all classifiers are independent, this work introduces a scheme that can handle dependencies between classifiers. Preliminary tests on synthetic data showcase the potential of the proposed approach.},\n  keywords = {data analysis;learning (artificial intelligence);pattern classification;synthetic data;blind multiclass ensemble learning;dependent classifiers;pattern recognition;data analytics;machine learning algorithms;multiple algorithms;high-performance meta-learner;simple blind scheme;ground-truth labels;Frequency modulation;Signal processing algorithms;Machine learning algorithms;Task analysis;Maximum likelihood estimation;Reliability;Ensemble learning;multi-class classification;unsupervised;dependent classifiers},\n  doi = {10.23919/EUSIPCO.2018.8553113},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439392.pdf},\n}\n\n
\n
\n\n\n
\n In recent years, advances in pattern recognition and data analytics have spurred the development of a plethora of machine learning algorithms and tools. However, as each algorithm exhibits different behavior for different types of data, one is motivated to judiciously fuse multiple algorithms in order to find the “best” performing one for a given dataset. Ensemble learning aims to create such a high-performance meta-learner by combining the outputs from multiple algorithms. The present work introduces a simple blind scheme for learning from ensembles of classifiers, where blind refers to the fact that the combiner has no knowledge of the ground-truth labels that each classifier has been trained on. While most current works presume that all classifiers are independent, this work introduces a scheme that can handle dependencies between classifiers. Preliminary tests on synthetic data showcase the potential of the proposed approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Short-Duration Doppler Spectrogram for Person Recognition with a Handheld Radar.\n \n \n \n \n\n\n \n Ulrich, M.; and Yang, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1227-1231, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Short-DurationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553114,\n  author = {M. Ulrich and B. Yang},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Short-Duration Doppler Spectrogram for Person Recognition with a Handheld Radar},\n  year = {2018},\n  pages = {1227-1231},\n  abstract = {This paper examines the classification of walking, standing and mirrored persons based on radar micro-Doppler (m-D) measurements to resolve ambiguities in thermal infrared (TIR) mirror images in firefighting. If the walking or standing person is observed directly, its m-D is measured. In the case of a person mirrored on a reflecting object, only the m-D of the reflecting object is measured. Their spectrogram is differentiable which enables a classification. One difficulty is the random movement of the handheld radar which leads to short observation durations and Doppler blurring. A classification based on short spectrograms is proposed, where the influence of the short-time Fourier transform window length is investigated. Furthermore, a regularization is proposed to improve the classifier interpretability for this safety application.},\n  keywords = {Doppler radar;Fourier transforms;image classification;infrared imaging;object recognition;radar imaging;short-duration Doppler spectrogram;person recognition;handheld radar;reflecting object;short observation durations;Doppler blurring;walking person classification;standing person classification;radar microDoppler measurements;mirrored person classification;thermal infrared mirror images;TIR mirror images;short-time Fourier transform window length;classifier interpretability;Spectrogram;Doppler effect;Doppler radar;Legged locomotion;Radar imaging;Signal resolution},\n  doi = {10.23919/EUSIPCO.2018.8553114},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437181.pdf},\n}\n\n
\n
\n\n\n
\n This paper examines the classification of walking, standing and mirrored persons based on radar micro-Doppler (m-D) measurements, to resolve ambiguities in thermal infrared (TIR) mirror images in firefighting. If the walking or standing person is observed directly, its m-D is measured. In the case of a person mirrored on a reflecting object, only the m-D of the reflecting object is measured. The resulting spectrograms are distinguishable, which enables a classification. One difficulty is the random movement of the handheld radar, which leads to short observation durations and Doppler blurring. A classification based on short spectrograms is proposed, where the influence of the short-time Fourier transform window length is investigated. Furthermore, a regularization is proposed to improve the classifier interpretability for this safety application.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reflection Analysis for Face Morphing Attack Detection.\n \n \n \n \n\n\n \n Seibold, C.; Hilsmann, A.; and Eisert, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1022-1026, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ReflectionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553116,\n  author = {C. Seibold and A. Hilsmann and P. Eisert},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Reflection Analysis for Face Morphing Attack Detection},\n  year = {2018},\n  pages = {1022-1026},\n  abstract = {A facial morph is a synthetically created image of a face that looks similar to two different individuals and can even trick biometric facial recognition systems into recognizing both individuals. This attack is known as face morphing attack. The process of creating such a facial morph is well documented and a lot of tutorials and software to create them are freely available. Therefore, it is mandatory to be able to detect this kind of fraud to ensure the integrity of the face as reliable biometric feature. In this work, we study the effects of face morphing on the physical correctness of the illumination. We estimate the direction to the light sources based on specular highlights in the eyes and use them to generate a synthetic map for highlights on the skin. This map is compared with the highlights in the image that is suspected to be a fraud. Morphing faces with different geometries, a bad alignment of the source images or using images with different illuminations, can lead to inconsistencies in reflections that indicate the existence of a morphing attack.},\n  keywords = {biometrics (access control);face recognition;biometric feature;geometries;illuminations;face morphing attack detection;reflection analysis;source images;synthetic map;fraud;software;biometric facial recognition systems;synthetically created image;facial morph;Face;Light sources;Skin;Solid modeling;Lighting;Estimation;Facial features;face morphing detection;reflection analysis;illumination estimation},\n  doi = {10.23919/EUSIPCO.2018.8553116},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437923.pdf},\n}\n\n
\n
\n\n\n
\n A facial morph is a synthetically created image of a face that looks similar to two different individuals and can even trick biometric facial recognition systems into recognizing both individuals. This attack is known as a face morphing attack. The process of creating such a facial morph is well documented, and many tutorials and software tools to create them are freely available. Therefore, it is mandatory to be able to detect this kind of fraud to ensure the integrity of the face as a reliable biometric feature. In this work, we study the effects of face morphing on the physical correctness of the illumination. We estimate the direction to the light sources based on specular highlights in the eyes and use them to generate a synthetic map for highlights on the skin. This map is compared with the highlights in the image that is suspected to be a fraud. Morphing faces with different geometries, a bad alignment of the source images, or the use of images with different illuminations can lead to inconsistencies in reflections that indicate the existence of a morphing attack.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Learning Dictionary-Based Unions of Subspaces for Image Denoising.\n \n \n \n \n\n\n \n Hong, D.; Malinas, R. P.; Fessler, J. A.; and Balzano, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1597-1601, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"LearningPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553117,\n  author = {D. Hong and R. P. Malinas and J. A. Fessler and L. Balzano},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Learning Dictionary-Based Unions of Subspaces for Image Denoising},\n  year = {2018},\n  pages = {1597-1601},\n  abstract = {Many signals of interest are well-approximated by sparse linear combinations of atomic signals from a dictionary. Equivalently, they are well-approximated by low-dimensional subspaces in a union of subspaces generated by the dictionary. A given sparsity level has an associated union of subspaces (UoS) generated by sparse combinations of correspondingly many atoms. When considering a sequence of sparsity levels, we have a sequence of unions of subspaces (SUoS) of increasing dimension. This paper considers the problem of learning such an SUoS from data. While each UoS is combinatorially large with respect to sparsity level, our learning approach exploits the fact that sparsity is structured for many signals of interest, i.e., that certain collections of atoms are more frequently used together than others. This is known as group sparsity structure and has been studied extensively when the structure is known a priori. We consider the setting where the structure is unknown, and we seek to learn it from training data. We also adapt the subspaces we obtain to improve representation and parsimony, similar to the goal of adapting atoms in dictionary learning. 
We illustrate the benefits of the learned dictionary-based SUoS for the problem of denoising; using a more parsimonious and representative SUoS results in improved recovery of complicated structures and edges.},\n  keywords = {image denoising;image representation;learning (artificial intelligence);sparse matrices;group sparsity structure;dictionary learning;learned dictionary-based SUoS;sparse linear combinations;atomic signals;low-dimensional subspaces;learning approach;image denoising;learning dictionary-based unions;Dictionaries;Training;Hidden Markov models;Noise reduction;Signal processing;Machine learning;Europe;unions of subspaces;structured sparsity;dictionary learning},\n  doi = {10.23919/EUSIPCO.2018.8553117},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437452.pdf},\n}\n\n
\n
\n\n\n
\n Many signals of interest are well-approximated by sparse linear combinations of atomic signals from a dictionary. Equivalently, they are well-approximated by low-dimensional subspaces in a union of subspaces generated by the dictionary. A given sparsity level has an associated union of subspaces (UoS) generated by sparse combinations of correspondingly many atoms. When considering a sequence of sparsity levels, we have a sequence of unions of subspaces (SUoS) of increasing dimension. This paper considers the problem of learning such an SUoS from data. While each UoS is combinatorially large with respect to sparsity level, our learning approach exploits the fact that sparsity is structured for many signals of interest, i.e., that certain collections of atoms are more frequently used together than others. This is known as group sparsity structure and has been studied extensively when the structure is known a priori. We consider the setting where the structure is unknown, and we seek to learn it from training data. We also adapt the subspaces we obtain to improve representation and parsimony, similar to the goal of adapting atoms in dictionary learning. We illustrate the benefits of the learned dictionary-based SUoS for the problem of denoising; using a more parsimonious and representative SUoS results in improved recovery of complicated structures and edges.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparse Phase Retrieval Via Iteratively Reweighted Amplitude Flow.\n \n \n \n \n\n\n \n Wang, G.; Zhang, L.; Giannakis, G. B.; and Chen, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 712-716, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SparsePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553118,\n  author = {G. Wang and L. Zhang and G. B. Giannakis and J. Chen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse Phase Retrieval Via Iteratively Reweighted Amplitude Flow},\n  year = {2018},\n  pages = {712-716},\n  abstract = {Sparse phase retrieval (PR) aims at reconstructing a sparse signal vector from a few phaseless linear measurements. It emerges naturally in diverse applications, but it is NP-hard in general. Drawing from advances in nonconvex optimization, this paper presents a new algorithm that is termed compressive reweighted amplitude flow (CRAF) for sparse PR. CRAF operates in two stages: Stage one computes an initial guess by means of a new spectral procedure, and stage two implements a few hard thresholding based iteratively reweighted gradient iterations on the amplitude-based least-squares cost. When there are sufficient measurements, CRAF reconstructs the true signal vector exactly under suitable conditions. Furthermore, its sample complexity coincides with that of the state-of-the-art approaches. 
Numerical experiments showcase improved performance of the proposed approach relative to existing alternatives.},\n  keywords = {computational complexity;concave programming;data compression;gradient methods;iterative methods;least squares approximations;signal processing;vectors;sparse phase retrieval;sparse signal vector;phaseless linear measurements;diverse applications;compressive reweighted amplitude flow;CRAF;sparse PR;hard thresholding;iteratively reweighted gradient iterations;amplitude-based least-squares cost;iteratively reweighted amplitude flow;Phase measurement;Europe;Signal processing;Image reconstruction;Signal processing algorithms;Thresholding (Imaging);Indexes;Sparse recovery;spectral initialization;model-based hard thresholding;linear convergence},\n  doi = {10.23919/EUSIPCO.2018.8553118},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435983.pdf},\n}\n\n
\n
\n\n\n
\n Sparse phase retrieval (PR) aims at reconstructing a sparse signal vector from a few phaseless linear measurements. It emerges naturally in diverse applications, but it is NP-hard in general. Drawing from advances in nonconvex optimization, this paper presents a new algorithm, termed compressive reweighted amplitude flow (CRAF), for sparse PR. CRAF operates in two stages: stage one computes an initial guess by means of a new spectral procedure, and stage two implements a few hard-thresholding-based iteratively reweighted gradient iterations on the amplitude-based least-squares cost. When there are sufficient measurements, CRAF reconstructs the true signal vector exactly under suitable conditions. Furthermore, its sample complexity coincides with that of state-of-the-art approaches. Numerical experiments showcase improved performance of the proposed approach relative to existing alternatives.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Pipeline Comparison for the Pre-Processing of Resting-State Data in Epilepsy.\n \n \n \n \n\n\n \n De Blasi, B.; Galazzo, I. B.; Pasetto, L.; Storti, S. F.; Koepp, M.; Barnes, A.; and Menegaz, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1137-1141, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PipelinePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553119,\n  author = {B. {De Blasi} and I. B. Galazzo and L. Pasetto and S. F. Storti and M. Koepp and A. Barnes and G. Menegaz},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Pipeline Comparison for the Pre-Processing of Resting-State Data in Epilepsy},\n  year = {2018},\n  pages = {1137-1141},\n  abstract = {Noise removal is a critical step to recover the signal of interest from resting-state fMRI data. Several pre-processing pipelines have been developed, mainly based on nuisance regression or independent component analysis. The aim of this work was to evaluate the ability of different cleaning pipelines to remove spurious non-BOLD signals when applied to a dataset of healthy controls and temporal lobe epilepsy patients. Increased tSNR and power spectral density in the resting-state frequency range (0.01-0.1 Hz) were found for all pre-processing pipelines with respect to the minimally pre-processed data, suggesting a positive gain in terms of temporal properties when optimal cleaning procedures are applied to the acquired fMRI data. All the pre-processing pipelines considered were able to recover the DMN through group ICA. By visually comparing this network across all the pipelines and groups, we found that AROMA, SPM12, FIX and FIXMC were able to better delineate the posterior cingulate cortex.},\n  keywords = {biomedical MRI;brain;independent component analysis;medical image processing;neurophysiology;minimally pre-processed data;resting-state frequency range;temporal lobe epilepsy patients;non-BOLD signals;pre-processing pipelines;resting-state fMRI data;resting-state data;pipeline comparison;Pipelines;Standards;Functional magnetic resonance imaging;Cleaning;Time series analysis;Spectral analysis;Epilepsy;resting-state fMRI;pre-processing pipelines;epilepsy;default mode network},\n  doi = {10.23919/EUSIPCO.2018.8553119},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437782.pdf},\n}\n\n
\n
\n\n\n
\n Noise removal is a critical step to recover the signal of interest from resting-state fMRI data. Several pre-processing pipelines have been developed, mainly based on nuisance regression or independent component analysis. The aim of this work was to evaluate the ability of different cleaning pipelines to remove spurious non-BOLD signals when applied to a dataset of healthy controls and temporal lobe epilepsy patients. Increased tSNR and power spectral density in the resting-state frequency range (0.01-0.1 Hz) were found for all pre-processing pipelines with respect to the minimally pre-processed data, suggesting a positive gain in terms of temporal properties when optimal cleaning procedures are applied to the acquired fMRI data. All the pre-processing pipelines considered were able to recover the DMN through group ICA. By visually comparing this network across all the pipelines and groups, we found that AROMA, SPM12, FIX and FIXMC were able to better delineate the posterior cingulate cortex.\n
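The tSNR gain reported in this abstract is straightforward to reproduce in principle; a minimal voxel-wise sketch with simulated data (illustrative only, not the paper's pipeline):

```python
import numpy as np

def tsnr(bold, axis=-1):
    """Temporal SNR of an fMRI time series: mean over time divided by the
    standard deviation over time, computed voxel-wise. Higher tSNR after
    cleaning suggests the pipeline removed temporal noise."""
    return bold.mean(axis=axis) / bold.std(axis=axis)

# toy 2-voxel example: 200 time points, same mean, different noise levels
rng = np.random.default_rng(0)
clean = 100 + 1.0 * rng.standard_normal((2, 200))
noisy = 100 + 5.0 * rng.standard_normal((2, 200))
```

`tsnr(clean)` exceeds `tsnr(noisy)` in every voxel, mirroring the increase the paper observes after cleaning.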
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Convolutional Neural Networks for Heart Sound Segmentation.\n \n \n \n \n\n\n \n Renna, F.; Oliveira, J.; and Coimbra, M. T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 757-761, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ConvolutionalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553120,\n  author = {F. Renna and J. Oliveira and M. T. Coimbra},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Convolutional Neural Networks for Heart Sound Segmentation},\n  year = {2018},\n  pages = {757-761},\n  abstract = {In this paper, deep convolutional neural networks are used to segment heart sounds into their main components. The proposed method is based on the adoption of a novel deep convolutional neural network architecture, which is inspired by similar approaches used for image segmentation. A further post-processing step is applied to the output of the proposed neural network, which induces the output state sequence to be consistent with the natural sequence of states within a heart sound signal (S1, systole, S2, diastole). The proposed approach is tested on heart sound signals longer than 5 seconds from the publicly available PhysioNet dataset, and it is shown to outperform current state-of-the-art segmentation methods by achieving an average sensitivity of 93.4% and an average positive predictive value of 94.5% in detecting S1 and S2 sounds.},\n  keywords = {cardiology;feature extraction;feedforward neural nets;medical signal processing;signal classification;heart sound segmentation;heart sound signal;output state sequence;post-processing step;image segmentation;deep convolutional neural network architecture;deep convolutional neural networks;Heart;Phonocardiography;Hidden Markov models;Signal processing algorithms;Testing;Convolution;Image segmentation},\n  doi = {10.23919/EUSIPCO.2018.8553120},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436805.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, deep convolutional neural networks are used to segment heart sounds into their main components. The proposed method is based on the adoption of a novel deep convolutional neural network architecture, which is inspired by similar approaches used for image segmentation. A further post-processing step is applied to the output of the proposed neural network, which induces the output state sequence to be consistent with the natural sequence of states within a heart sound signal (S1, systole, S2, diastole). The proposed approach is tested on heart sound signals longer than 5 seconds from the publicly available PhysioNet dataset, and it is shown to outperform current state-of-the-art segmentation methods by achieving an average sensitivity of 93.4% and an average positive predictive value of 94.5% in detecting S1 and S2 sounds.\n
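The post-processing step that forces the network output into the natural cardiac order can be sketched as a small cyclic Viterbi-style decoder. This is a hedged reconstruction: the function name, the uniform transition scores, and the toy inputs are assumptions, not the paper's exact method.

```python
import numpy as np

def cyclic_viterbi(logp):
    """Decode a frame-wise state sequence constrained to the cyclic order
    S1 -> systole -> S2 -> diastole -> S1 -> ...
    logp: (T, 4) array of per-frame log-scores (e.g. CNN outputs).
    Allowed moves per frame: stay in state s, or advance to (s + 1) % 4."""
    T, S = logp.shape
    score = np.full((T, S), -np.inf)
    back = np.zeros((T, S), dtype=int)
    score[0] = logp[0]
    for t in range(1, T):
        for s in range(S):
            prev = (s - 1) % S
            stay, adv = score[t - 1, s], score[t - 1, prev]
            if stay >= adv:
                score[t, s], back[t, s] = stay + logp[t, s], s
            else:
                score[t, s], back[t, s] = adv + logp[t, s], prev
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# frame-wise scores favouring [S1, S1, ?, systole, systole, S2], where
# frame 2 is a glitch that prefers diastole (state 3) out of order
desired = [0, 0, 3, 1, 1, 2]
logp = np.full((6, 4), -5.0)
for t, s in enumerate(desired):
    logp[t, s] = 0.0
path = cyclic_viterbi(logp)
```

The decoded path repairs the glitch while every transition stays consistent with the S1/systole/S2/diastole cycle.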
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Combined Analysis-L1 and Total Variation ADMM with Applications to MEG Brain Imaging and Signal Reconstruction.\n \n \n \n \n\n\n \n Gao, R.; Tronarp, F.; and Särkkä, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1930-1934, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CombinedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553122,\n  author = {R. Gao and F. Tronarp and S. Särkkä},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Combined Analysis-L1 and Total Variation ADMM with Applications to MEG Brain Imaging and Signal Reconstruction},\n  year = {2018},\n  pages = {1930-1934},\n  abstract = {In this article, we propose an efficient method for solving analysis-l1-TV regularization problems with a multi-step alternating direction method of multipliers (ADMM) approach as the fast solver. Additionally, we apply it to a real-data magnetoencephalography (MEG) brain imaging problem as well as to signal reconstruction. In our approach, the inverse problem arising in MEG or signal reconstruction is formulated as an optimization problem which we regularize using a combination of an analysis-l1 prior and a total variation (TV) regularization term. We then formulate an optimization algorithm based on ADMM which can effectively be used to solve the optimization problems. The performance of the algorithm is illustrated in practical scenarios.},\n  keywords = {inverse problems;magnetoencephalography;medical signal processing;optimisation;signal reconstruction;total variation ADMM;MEG brain imaging;signal reconstruction;analysis-l1-TV regularization problems;inverse problem;optimization problem;total variation regularization term;combined analysis-L1;multistep alternating direction method of multipliers;real-data magnetoencephalography;Image reconstruction;TV;Brain;Signal reconstruction;Signal processing algorithms;Inverse problems;Optimization;Analysis-l1-TV-norm;total variation (TV);alternating direction method of multipliers (ADMM);magnetoencephalography (MEG);image reconstruction},\n  doi = {10.23919/EUSIPCO.2018.8553122},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436769.pdf},\n}\n\n
\n
\n\n\n
\n In this article, we propose an efficient method for solving analysis-l1-TV regularization problems with a multi-step alternating direction method of multipliers (ADMM) approach as the fast solver. Additionally, we apply it to a real-data magnetoencephalography (MEG) brain imaging problem as well as to signal reconstruction. In our approach, the inverse problem arising in MEG or signal reconstruction is formulated as an optimization problem which we regularize using a combination of an analysis-l1 prior and a total variation (TV) regularization term. We then formulate an optimization algorithm based on ADMM which can effectively be used to solve the optimization problems. The performance of the algorithm is illustrated in practical scenarios.\n
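As a stripped-down, single-regularizer relative of the splitting used here, an ADMM solver for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (identity analysis operator, TV term dropped) looks as follows; this is a generic textbook sketch, not the paper's multi-step solver.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, b, lam, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    M = A.T @ A + rho * np.eye(n)   # x-update system matrix (fixed)
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(M, Atb + rho * (z - u))   # quadratic step
        z = soft(x + u, lam / rho)                    # l1 proximal step
        u = u + x - z                                 # dual update
    return z
```

For an orthonormal A the solution reduces to soft-thresholding of b, which makes the solver easy to sanity-check.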
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Speech Enhancement by Classification of Noisy Signals Decomposed Using NMF and Wiener Filtering.\n \n \n \n \n\n\n \n Fakhry, M.; Poorjam, A. H.; and Christensen, M. G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 16-20, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SpeechPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553123,\n  author = {M. Fakhry and A. H. Poorjam and M. G. Christensen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Speech Enhancement by Classification of Noisy Signals Decomposed Using NMF and Wiener Filtering},\n  year = {2018},\n  pages = {16-20},\n  abstract = {Supervised non-negative matrix factorization (NMF) is effective in speech enhancement through training spectral models of speech and noise signals. However, the enhancement quality reduces when the models are trained on data that is not highly relevant to a speech signal and a noise signal in a noisy observation. In this paper, we propose to train a classifier in order to overcome such poor characterization of the signals through the trained models. The main idea is to decompose the noisy observation into parts and the enhanced signal is reconstructed by combining the less-corrupted ones which are identified in the cepstral domain using the trained classifier. We apply unsupervised NMF followed by Wiener filtering for the decomposition, and use a support vector machine trained on the mel-frequency cepstral coefficients of the parts of training speech and noise signals for the classification. 
The results show the effectiveness of the proposed method compared with the supervised NMF.},\n  keywords = {cepstral analysis;feature extraction;learning (artificial intelligence);matrix decomposition;speech enhancement;support vector machines;Wiener filters;speech enhancement;noisy signals decomposed;supervised nonnegative matrix factorization;training spectral models;noise signals;enhancement quality;speech signal;noise signal;noisy observation;trained models;enhanced signal;supervised NMF;training speech;Wiener filtering;unsupervised NMF;trained classifier;Noise measurement;Training;Speech enhancement;Support vector machines;Mel frequency cepstral coefficient;Time-domain analysis;Wiener filters;Speech enhancement;signal decomposition;unsupervised NMF;Wiener filtering;SVM},\n  doi = {10.23919/EUSIPCO.2018.8553123},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437930.pdf},\n}\n\n
\n
\n\n\n
\n Supervised non-negative matrix factorization (NMF) is effective in speech enhancement through training spectral models of speech and noise signals. However, the enhancement quality degrades when the models are trained on data that is not highly relevant to the speech signal and the noise signal in a noisy observation. In this paper, we propose to train a classifier in order to overcome such poor characterization of the signals by the trained models. The main idea is to decompose the noisy observation into parts; the enhanced signal is then reconstructed by combining the less-corrupted ones, which are identified in the cepstral domain using the trained classifier. We apply unsupervised NMF followed by Wiener filtering for the decomposition, and use a support vector machine trained on the mel-frequency cepstral coefficients of the parts of training speech and noise signals for the classification. The results show the effectiveness of the proposed method compared with the supervised NMF.\n
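The Wiener-filtering step applied after the NMF decomposition can be sketched as a simple spectral gain; this is the generic Wiener mask, assuming `S_hat` and `N_hat` stand in for NMF-reconstructed speech and noise magnitudes (names are hypothetical).

```python
import numpy as np

def wiener_mask(S_hat, N_hat, eps=1e-12):
    """Wiener-style gain from speech/noise magnitude estimates:
    G = S^2 / (S^2 + N^2), applied bin-wise to the noisy spectrogram."""
    return S_hat**2 / (S_hat**2 + N_hat**2 + eps)

# toy spectrogram frame: speech dominates bin 0, noise dominates bin 1
S_hat = np.array([4.0, 0.5])
N_hat = np.array([0.5, 4.0])
G = wiener_mask(S_hat, N_hat)
enhanced = G * (S_hat + N_hat)   # apply the mask to the noisy magnitude
```

Bins dominated by the speech model get a gain near 1, noise-dominated bins a gain near 0.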
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Complexity-Reduced Solution for TDOA Source Localization in Large Equal Radius Scenario with Sensor Position Errors.\n \n \n \n \n\n\n \n Li, X.; Guo, F.; Yang, L.; and Ho, K. C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 361-365, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Complexity-ReducedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553125,\n  author = {X. Li and F. Guo and L. Yang and K. C. Ho},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Complexity-Reduced Solution for TDOA Source Localization in Large Equal Radius Scenario with Sensor Position Errors},\n  year = {2018},\n  pages = {361-365},\n  abstract = {This paper presents a new algebraic solution for source localization using time difference of arrival (TDOA) measurements in the large equal radius (LER) scenario when the known sensor positions have random errors. The proposed method utilizes the LER condition to directly approximate the true TDOAs so that the originally nonlinear TDOA equations can be recast into ones linearly related to the source position. This enables the use of the closed-form weighted least squares (WLS) technique for source localization and makes the proposed method have lower complexity than the existing technique. The approximate efficiency of the new algorithm is established analytically under strong LER condition. The associated approximation bias is also derived and it is shown numerically to be greater than that of the benchmark technique, especially when LER condition is weak. However, through iterating the proposed method once with bias correction, the proposed method yields comparable localization accuracy with reduced complexity. 
The theoretical developments are validated by computer simulations.},\n  keywords = {direction-of-arrival estimation;least squares approximations;time-of-arrival estimation;sensor positions;comparable localization accuracy;time difference of arrival measurements;LER condition;algebraic solution;sensor position errors;TDOA source localization;complexity-reduced solution;benchmark technique;associated approximation bias;approximate efficiency;closed-form weighted least squares technique;source position;originally nonlinear TDOA equations;random errors;equal radius scenario;Closed-form solutions;Complexity theory;Covariance matrices;Europe;Signal processing;Noise measurement;Estimation},\n  doi = {10.23919/EUSIPCO.2018.8553125},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436849.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a new algebraic solution for source localization using time difference of arrival (TDOA) measurements in the large equal radius (LER) scenario when the known sensor positions have random errors. The proposed method utilizes the LER condition to directly approximate the true TDOAs so that the originally nonlinear TDOA equations can be recast into ones linearly related to the source position. This enables the use of the closed-form weighted least squares (WLS) technique for source localization and gives the proposed method lower complexity than the existing technique. The approximate efficiency of the new algorithm is established analytically under a strong LER condition. The associated approximation bias is also derived, and it is shown numerically to be greater than that of the benchmark technique, especially when the LER condition is weak. However, by iterating the proposed method once with bias correction, it yields comparable localization accuracy with reduced complexity. The theoretical developments are validated by computer simulations.\n
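Once the TDOA equations have been linearized, the closed-form WLS solve mentioned in the abstract is standard; a minimal sketch (the toy system is illustrative, not the paper's actual linearization):

```python
import numpy as np

def wls(A, b, W):
    """Closed-form weighted least squares:
    argmin_x (Ax - b)^T W (Ax - b)  =>  x = (A^T W A)^{-1} A^T W b."""
    AtW = A.T @ W
    return np.linalg.solve(AtW @ A, AtW @ b)

# consistent toy system: any positive-definite weighting recovers x exactly
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
b = A @ x_true
W = np.diag([1.0, 2.0, 3.0])
x_hat = wls(A, b, W)
```

In the paper the weighting matrix would account for both TDOA noise and sensor position errors.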
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Real-Time Hand Gesture Recognition Based on Artificial Feed-Forward Neural Networks and EMG.\n \n \n \n \n\n\n \n Benalcázar, M. E.; Anchundia, C. E.; Zea, J. A.; Zambrano, P.; Jaramillo, A. G.; and Segura, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1492-1496, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Real-TimePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553126,\n  author = {M. E. Benalcázar and C. E. Anchundia and J. A. Zea and P. Zambrano and A. G. Jaramillo and M. Segura},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Real-Time Hand Gesture Recognition Based on Artificial Feed-Forward Neural Networks and EMG},\n  year = {2018},\n  pages = {1492-1496},\n  abstract = {In this paper, we propose a real-time hand gesture recognition model. This model is based on both a shallow feedforward neural network with 3 layers and an electromyography (EMG) of the forearm. The structure of the proposed model is composed of 5 modules: data acquisition using the commercial device Myo armband and a sliding window approach, preprocessing, automatic feature extraction, classification, and postprocessing. The proposed model has an accuracy of 90.1% at recognizing 5 categories of gestures (fist, wave-in, wave-out, open, and pinch), and an average time response of 11 ms in a personal computer. The main contributions of this work include (1) a hand gesture recognition model that responds quickly and with relative good accuracy, (2) an automatic method for feature extraction from time series of varying length, and (3) the code and the dataset used for this work, which are made publicly available.},\n  keywords = {data acquisition;electromyography;feature extraction;feedforward neural nets;gesture recognition;time series;EMG;real-time hand gesture recognition model;data acquisition;sliding window approach;time series;feedforward neural network;myo armband;feature extraction;electromyography;feature classification;Electromyography;Computational modeling;Gesture recognition;Feature extraction;Solid modeling;Robot sensing systems;Hidden Markov models;hand gesture recognition;real-time;feed-forward neural networks;electromyography;feature extraction;time series},\n  doi = {10.23919/EUSIPCO.2018.8553126},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = 
{https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439350.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a real-time hand gesture recognition model. This model is based on both a shallow feedforward neural network with 3 layers and an electromyography (EMG) of the forearm. The structure of the proposed model is composed of 5 modules: data acquisition using the commercial device Myo armband and a sliding window approach, preprocessing, automatic feature extraction, classification, and postprocessing. The proposed model has an accuracy of 90.1% at recognizing 5 categories of gestures (fist, wave-in, wave-out, open, and pinch), and an average time response of 11 ms in a personal computer. The main contributions of this work include (1) a hand gesture recognition model that responds quickly and with relative good accuracy, (2) an automatic method for feature extraction from time series of varying length, and (3) the code and the dataset used for this work, which are made publicly available.\n
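The sliding-window acquisition module can be sketched in a few lines; the sampling rate and window sizes below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def sliding_windows(emg, win, step):
    """Split a multi-channel EMG stream (channels x samples) into
    overlapping analysis windows of length `win` every `step` samples."""
    n = emg.shape[1]
    starts = range(0, n - win + 1, step)
    return np.stack([emg[:, s:s + win] for s in starts])

# e.g. 8 armband channels, 1 s of data at an assumed 200 Hz:
# 250 ms windows (50 samples) hopped every 50 ms (10 samples)
emg = np.random.randn(8, 200)
W = sliding_windows(emg, win=50, step=10)
```

Each window is then fed to preprocessing and feature extraction, which is what lets the classifier respond in real time rather than waiting for a whole gesture.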
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Blind Image Quality Metric using a Selection of Relevant Patches based on Convolutional Neural Network.\n \n \n \n \n\n\n \n Chetouani, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1452-1456, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553127,\n  author = {A. Chetouani},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Blind Image Quality Metric using a Selection of Relevant Patches based on Convolutional Neural Network},\n  year = {2018},\n  pages = {1452-1456},\n  abstract = {Image quality assessment is an important field in different computer vision applications. A plethora of metrics has been proposed in the literature to answer this request. In this paper, we propose an image quality framework without reference based on selection of saliency patches and Convolutional Neural Network. The idea is here to not consider all patches of the distorted image but rather some of them, which are considered as the more perceptually relevant and thus impact more the Mean Opinion Score of the image. To do that, we first compute the saliency map of the distorted image. A scanpath prediction method, that aims to reproduce the visual behavior, is then applied to select the more relevant patches. A Convolutional Neural Network model is finally used to predict the quality score. Its input is the selected patches, while its output is the predicted Mean Opinion Score. The proposed method was evaluated using four well-known datasets (LIVE-P2, TID 2008, TID 2013 and CSIQ). The results obtained show its efficiency.},\n  keywords = {computer vision;convolution;feedforward neural nets;visual perception;image quality framework;saliency patches;distorted image;saliency map;scanpath prediction method;Convolutional Neural Network model;quality score;predicted Mean Opinion Score;blind image quality metric;image quality assessment;computer vision applications;Protocols;Measurement;Correlation;Image quality;Degradation;Visualization;Europe;Image quality;model;Saliency;Scanpath prediction},\n  doi = {10.23919/EUSIPCO.2018.8553127},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438790.pdf},\n}\n\n
\n
\n\n\n
\n Image quality assessment is an important field in different computer vision applications, and a plethora of metrics has been proposed in the literature to address this need. In this paper, we propose a no-reference image quality framework based on the selection of salient patches and a Convolutional Neural Network. The idea here is to consider not all patches of the distorted image but only those judged most perceptually relevant, which therefore have the greatest impact on the image's Mean Opinion Score. To do so, we first compute the saliency map of the distorted image. A scanpath prediction method, which aims to reproduce visual behavior, is then applied to select the most relevant patches. A Convolutional Neural Network model is finally used to predict the quality score: its input is the selected patches, while its output is the predicted Mean Opinion Score. The proposed method was evaluated using four well-known datasets (LIVE-P2, TID 2008, TID 2013 and CSIQ). The results obtained show its effectiveness.\n
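A saliency-driven patch selection can be sketched as below; picking the patches with the highest mean saliency is a simple stand-in for the paper's scanpath-based selection, and all names here are hypothetical.

```python
import numpy as np

def top_salient_patches(img, sal, patch=8, k=3):
    """Return the k non-overlapping patches of `img` whose saliency-map
    region has the highest mean saliency."""
    H, W = sal.shape
    scored = []
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            scored.append((sal[i:i + patch, j:j + patch].mean(), i, j))
    scored.sort(reverse=True)            # highest mean saliency first
    return [img[i:i + patch, j:j + patch] for _, i, j in scored[:k]]

img = np.arange(256.0).reshape(16, 16)
sal = np.zeros_like(img)
sal[:8, 8:] = 1.0                        # saliency peaks in the top-right
best = top_salient_patches(img, sal, patch=8, k=1)
```

Only the selected patches would then be passed to the CNN quality predictor.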
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Bayesian Blind Source Separation Method for a Linear-quadratic Model.\n \n \n \n \n\n\n \n Madrolle, S.; Duarte, L. T.; Grangeat, P.; and Jutten, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1242-1246, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553128,\n  author = {S. Madrolle and L. T. Duarte and P. Grangeat and C. Jutten},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Bayesian Blind Source Separation Method for a Linear-quadratic Model},\n  year = {2018},\n  pages = {1242-1246},\n  abstract = {We propose a blind source separation method based on Bayesian inference in order to separate two sources in a linear-quadratic mixing model. This nonlinear model describes for example the response of a non-selective metal-oxide gas sensor (MOX) in presence of two gases such as acetone and ethanol diluted in an air buffer. In order to quantify the gas components, it is necessary to inverse the linear-quadratic model. In addition, we look for reducing the number of samples for the calibration step. Therefore, we here propose a Bayesian blind source separation method, with only few points of calibration and which is based on Monte Carlo Markov Chain (MCMC) sampling methods to estimate the mean of the posterior distribution. We analyze the performance on a set of simulated samples. We use a cross-validation approach, with three steps: first, we blindly estimate the mixing coefficients and sources; second, we correct the scale factors thanks to few calibration samples; and third, we validate the method on validation samples, estimating sources thanks to mixing coefficients estimated before on the calibration samples. 
We compare this unsupervised nonlinear method with a supervised method to evaluate the performance with respect to the number of calibration points: with 10 calibration points instead of 160, the performance achieves 11 dB, with a loss limited to 1.5 dB.},\n  keywords = {Bayes methods;blind source separation;calibration;gas sensors;inference mechanisms;Markov processes;Monte Carlo methods;sampling methods;Bayesian blind source separation method;Bayesian inference;linear-quadratic mixing model;nonselective metal-oxide gas sensor;mixing coefficients;unsupervised nonlinear method;Monte Carlo-Markov chain sampling methods;MCMC sampling methods;Bayes methods;Calibration;Blind source separation;Atmospheric modeling;Signal processing algorithms;Mathematical model;Linear-quadratic model;Inverse problems;Statistical signal processing;Bayesian method;Blind source separation;Monte Carlo Markov Chain (MCMC);Gas mixture;MOX sensor},\n  doi = {10.23919/EUSIPCO.2018.8553128},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434686.pdf},\n}\n\n
\n
\n\n\n
\n We propose a blind source separation method based on Bayesian inference to separate two sources in a linear-quadratic mixing model. This nonlinear model describes, for example, the response of a non-selective metal-oxide gas sensor (MOX) in the presence of two gases, such as acetone and ethanol diluted in an air buffer. In order to quantify the gas components, it is necessary to invert the linear-quadratic model. In addition, we seek to reduce the number of samples required for the calibration step. We therefore propose a Bayesian blind source separation method which requires only a few calibration points and is based on Markov chain Monte Carlo (MCMC) sampling to estimate the mean of the posterior distribution. We analyze the performance on a set of simulated samples, using a cross-validation approach with three steps: first, we blindly estimate the mixing coefficients and sources; second, we correct the scale factors using a few calibration samples; and third, we validate the method on validation samples, estimating the sources using the mixing coefficients estimated earlier on the calibration samples. We compare this unsupervised nonlinear method with a supervised method to evaluate the performance with respect to the number of calibration points: with 10 calibration points instead of 160, the performance reaches 11 dB, with a loss limited to 1.5 dB.\n
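Estimating a posterior mean by MCMC, as the abstract describes, can be illustrated on a toy linear-quadratic mixture. This is a hedged stand-in for the paper's full Bayesian separation: only the quadratic coefficient is inferred, the prior is flat, and all values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# toy linear-quadratic mixture x = a*s1 + b*s2 + q*s1*s2 + noise,
# with a, b known and only the quadratic coefficient q inferred
a, b, q_true, sigma = 1.0, 0.7, 0.3, 0.05
s1, s2 = rng.uniform(0, 1, 300), rng.uniform(0, 1, 300)
x = a * s1 + b * s2 + q_true * s1 * s2 + sigma * rng.standard_normal(300)

def log_post(q):
    # Gaussian likelihood with a flat prior on q (illustrative choice)
    r = x - (a * s1 + b * s2 + q * s1 * s2)
    return -0.5 * np.sum(r ** 2) / sigma ** 2

# random-walk Metropolis; the posterior mean is taken after burn-in
q, lp, samples = 0.0, log_post(0.0), []
for _ in range(5000):
    prop = q + 0.05 * rng.standard_normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        q, lp = prop, lp_prop
    samples.append(q)
q_mean = np.mean(samples[1000:])
```

With enough data the posterior mean concentrates near the true coefficient, which is the quantity the paper's sampler targets for all mixing parameters jointly.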
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Classification of EEG signals based on mean-square error optimal time-frequency features.\n \n \n \n \n\n\n \n Anderson, R.; and Sandsten, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 106-110, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ClassificationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553130,\n  author = {R. Anderson and M. Sandsten},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Classification of EEG signals based on mean-square error optimal time-frequency features},\n  year = {2018},\n  pages = {106-110},\n  abstract = {This paper illustrates the improvement in accuracy of classification for electroencephalogram (EEG) signals measured during a memory encoding task, by using features based on a mean square error (MSE) optimal time-frequency estimator. The EEG signals are modelled as Locally Stationary Processes, based on the modulation in time of an ordinary stationary covariance function. After estimating the model parameters, we compute the MSE optimal kernel for the estimation of the Wigner-Ville spectrum. We present a simulation study to evaluate the performance of the derived optimal spectral estimator, compared to the single windowed Hanning spectrogram and the Welch spectrogram. Further, the estimation procedure is applied to the measured EEG and the time-frequency features extracted from the spectral estimates are used to feed a neural network classifier. 
Consistent improvement in classification accuracy is obtained by using the features from the proposed estimator, compared to the use of existing methods.},\n  keywords = {covariance analysis;electroencephalography;feature extraction;mean square error methods;medical signal processing;neural nets;signal classification;spectral analysis;time-frequency analysis;Wigner distribution;mean-square error optimal time-frequency features;electroencephalogram signals;memory encoding task;ordinary stationary covariance function;MSE optimal kernel;single windowed Hanning spectrogram;classification accuracy;neural network classifier;time-frequency feature extraction;Welch spectrogram;Wigner-Ville spectrum estimation;spectral estimator;locally stationary processes;EEG signal classification;Time-frequency analysis;Brain modeling;Electroencephalography;Kernel;Computational modeling;Feature extraction;Spectrogram},\n  doi = {10.23919/EUSIPCO.2018.8553130},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437640.pdf},\n}\n\n
\n
\n\n\n
\n This paper illustrates the improvement in accuracy of classification for electroencephalogram (EEG) signals measured during a memory encoding task, by using features based on a mean square error (MSE) optimal time-frequency estimator. The EEG signals are modelled as Locally Stationary Processes, based on the modulation in time of an ordinary stationary covariance function. After estimating the model parameters, we compute the MSE optimal kernel for the estimation of the Wigner-Ville spectrum. We present a simulation study to evaluate the performance of the derived optimal spectral estimator, compared to the single windowed Hanning spectrogram and the Welch spectrogram. Further, the estimation procedure is applied to the measured EEG and the time-frequency features extracted from the spectral estimates are used to feed a neural network classifier. Consistent improvement in classification accuracy is obtained by using the features from the proposed estimator, compared to the use of existing methods.\n
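The two baseline estimators the proposed method is compared against are easy to sketch with SciPy; the sampling rate and the 10 Hz test tone below are assumptions, not the paper's data.

```python
import numpy as np
from scipy import signal

fs = 256                       # assumed EEG sampling rate (Hz)
rng = np.random.default_rng(2)
t = np.arange(4 * fs) / fs
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# single Hann-windowed periodogram vs the segment-averaged Welch estimate
f_per, p_per = signal.periodogram(eeg, fs, window='hann')
f_welch, p_welch = signal.welch(eeg, fs, nperseg=fs)   # 1 s Hann segments
peak_hz = f_welch[np.argmax(p_welch)]
```

Features drawn from such spectral estimates, or from the paper's MSE-optimal kernel estimate, are what feed the neural network classifier.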
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Acoustic Event Classification Using Multi-Resolution HMM.\n \n \n \n \n\n\n \n Baggenstoss, P. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 972-976, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AcousticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553131,\n  author = {P. M. Baggenstoss},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Acoustic Event Classification Using Multi-Resolution HMM},\n  year = {2018},\n  pages = {972-976},\n  abstract = {Real-world acoustic events span a wide range of time and frequency resolutions, from short clicks to longer tonals. This is a challenge for the hidden Markov model (HMM), which uses a fixed segmentation and feature extraction, forcing a compromise between time and frequency resolution. The multiresolution HMM (MR-HMM) is an extension of the HMM that assumes not only an underlying (hidden) random state sequence, but also an underlying random segmentation, with segments spanning a wide range of sizes and processed using a variety of feature extraction methods. It is shown that the MR-HMM alone, as an acoustic event classifier, has performance comparable to state of the art discriminative classifiers on three open data sets. However, as a generative classifier, the MR-HMM models the underlying data generation process and can generate synthetic data, allowing weaknesses of individual class models to be discovered and corrected. 
To demonstrate this point, the MR-HMM is combined with auxiliary features that capture temporal information, resulting in significantly improved performance.},\n  keywords = {acoustic signal processing;feature extraction;hidden Markov models;signal classification;signal resolution;generative classifier;MR-HMM models;acoustic event classification;multiresolution HMM;hidden Markov model;fixed segmentation;feature extraction methods;acoustic event classifier;data generation process;random state sequence;random segmentation;Hidden Markov models;Feature extraction;Probability density function;Data models;Acoustics;Entropy;Bayes methods},\n  doi = {10.23919/EUSIPCO.2018.8553131},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570426495.pdf},\n}\n\n
\n
\n\n\n
\n Real-world acoustic events span a wide range of time and frequency resolutions, from short clicks to longer tonals. This is a challenge for the hidden Markov model (HMM), which uses a fixed segmentation and feature extraction, forcing a compromise between time and frequency resolution. The multiresolution HMM (MR-HMM) is an extension of the HMM that assumes not only an underlying (hidden) random state sequence, but also an underlying random segmentation, with segments spanning a wide range of sizes and processed using a variety of feature extraction methods. It is shown that the MR-HMM alone, as an acoustic event classifier, has performance comparable to state of the art discriminative classifiers on three open data sets. However, as a generative classifier, the MR-HMM models the underlying data generation process and can generate synthetic data, allowing weaknesses of individual class models to be discovered and corrected. To demonstrate this point, the MR-HMM is combined with auxiliary features that capture temporal information, resulting in significantly improved performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Nonunitary Joint Diagonalization for Overdetermined Convolutive Blind Signal Separation.\n \n \n \n \n\n\n \n Zhang, W.; and Sun, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1232-1236, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"NonunitaryPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553132,\n  author = {W. Zhang and J. Sun},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Nonunitary Joint Diagonalization for Overdetermined Convolutive Blind Signal Separation},\n  year = {2018},\n  pages = {1232-1236},\n  abstract = {It is known that nonunitary joint diagonalization (JD) has some advantages over the unitary one in terms of system identification accuracy. However, the existing nonunitary JD algorithms are prone to converge to degenerate (even singular) solutions, which result in deteriorated identification performance. Moreover, the existing algorithms usually seek a square diagonalizing matrix, which greatly limits their application in overdetermined system identification scenarios. In order to overcome these drawbacks, we reformulate the nonunitary JD as a multicriteria optimization model. The resulting algorithm converges to a nonsquare well-conditioned diagonalizing matrix.},\n  keywords = {blind source separation;matrix algebra;overdetermined convolutive blind signal separation;system identification accuracy;deteriorated identification performance;square diagonalizing matrix;overdetermined system identification scenario;nonunitary JD algorithms;nonunitary joint diagonalization algorithm;nonsquare well-conditioned diagonalizing matrix;Signal processing algorithms;Optimization;Frequency-domain analysis;Convergence;Signal to noise ratio;Europe;Joint diagonalization (JD);degenerate solution;convolutive blind source separation (CBSS)},\n  doi = {10.23919/EUSIPCO.2018.8553132},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570429270.pdf},\n}\n\n
\n
\n\n\n
\n It is known that nonunitary joint diagonalization (JD) has some advantages over the unitary one in terms of system identification accuracy. However, the existing nonunitary JD algorithms are prone to converge to degenerate (even singular) solutions, which result in deteriorated identification performance. Moreover, the existing algorithms usually seek a square diagonalizing matrix, which greatly limits their application in overdetermined system identification scenarios. In order to overcome these drawbacks, we reformulate the nonunitary JD as a multicriteria optimization model. The resulting algorithm converges to a nonsquare well-conditioned diagonalizing matrix.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Stochastic Geometry Modeling of Cellular Networks: A New Definition of Coverage and its Application to Energy Efficiency Optimization.\n \n \n \n \n\n\n \n Di Renzo, M.; Zappone, A.; Lam, T. T.; and Debbah, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1507-1511, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"StochasticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553134,\n  author = {M. {Di Renzo} and A. Zappone and T. T. Lam and M. Debbah},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Stochastic Geometry Modeling of Cellular Networks: A New Definition of Coverage and its Application to Energy Efficiency Optimization},\n  year = {2018},\n  pages = {1507-1511},\n  abstract = {In this paper, we analyze and optimize the energy efficiency of downlink cellular networks. With the aid of tools from stochastic geometry, we introduce a new closed-form analytical expression of the potential spectral efficiency. Unlike currently available analytical frameworks, the proposed analytical formulation explicitly depends on the transmit power and density of the base stations. This is obtained by generalizing the definition of coverage probability and by taking into account the sensitivity of the receiver not only during the detection of information data, but during the cell association phase as well. Based on the new analytical representation of the potential spectral efficiency, the energy efficiency is formulated in a tractable closed-form expression. The resulting optimization problem is studied and it is mathematically proved that the energy efficiency is a unimodal and strictly pseudo-concave function in the transmit power. 
Numerical results are illustrated and discussed.},\n  keywords = {cellular radio;concave programming;energy conservation;probability;radio spectrum management;stochastic processes;telecommunication power management;stochastic geometry modeling;energy efficiency optimization;pseudoconcave function;cell association phase;base stations;spectral efficiency;tractable closed-form expression;coverage probability;transmit power;downlink cellular networks;Power demand;Optimization;Mathematical model;Cellular networks;Load modeling;Signal to noise ratio;Bandwidth},\n  doi = {10.23919/EUSIPCO.2018.8553134},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436720.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we analyze and optimize the energy efficiency of downlink cellular networks. With the aid of tools from stochastic geometry, we introduce a new closed-form analytical expression of the potential spectral efficiency. Unlike currently available analytical frameworks, the proposed analytical formulation explicitly depends on the transmit power and density of the base stations. This is obtained by generalizing the definition of coverage probability and by taking into account the sensitivity of the receiver not only during the detection of information data, but during the cell association phase as well. Based on the new analytical representation of the potential spectral efficiency, the energy efficiency is formulated in a tractable closed-form expression. The resulting optimization problem is studied and it is mathematically proved that the energy efficiency is a unimodal and strictly pseudo-concave function in the transmit power. Numerical results are illustrated and discussed.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Infant Cry Detection in Adverse Acoustic Environments by Using Deep Neural Networks.\n \n \n \n \n\n\n \n Ferretti, D.; Severini, M.; Principi, E.; Cenci, A.; and Squartini, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 992-996, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"InfantPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553135,\n  author = {D. Ferretti and M. Severini and E. Principi and A. Cenci and S. Squartini},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Infant Cry Detection in Adverse Acoustic Environments by Using Deep Neural Networks},\n  year = {2018},\n  pages = {992-996},\n  abstract = {The amount of time an infant cries in a day helps the medical staff in the evaluation of his/her health conditions. Extracting this information requires a cry detection algorithm able to operate in environments with challenging acoustic conditions, since multiple noise sources, such as interferent cries, medical equipment, and persons may be present. This paper proposes an algorithm for detecting infant cries in such environments. The proposed solution is a multiple stage detection algorithm: the first stage is composed of an eight-channel filter-and-sum beamformer, followed by an Optimally Modified Log-Spectral Amplitude estimator (OMLSA) post-filter for reducing the effect of interferences. The second stage is the Deep Neural Network (DNN) based cry detector, having audio Log-Mel features as inputs. A synthetic dataset mimicking a real neonatal hospital scenario has been created for training the network and evaluating the performance. Additionally, a dataset containing cries acquired in a real neonatology department has been used for assessing the performance in a real scenario. 
The algorithm has been compared to a popular approach for voice activity detection based on Long-Term Spectral Divergence, and the results show that the proposed solution achieves superior detection performance both on synthetic data and on real data.},\n  keywords = {acoustic signal processing;amplitude estimation;feature extraction;hospitals;neural nets;paediatrics;spectral analysis;speech recognition;voice activity detection;superior detection performance;infant cry detection;adverse acoustic environments;Deep Neural networks;medical staff;health conditions;cry detection algorithm;multiple noise sources;interferent cries;medical equipments;multiple stage detection algorithm;Optimally Modified Log-Spectral Amplitude estimator;Deep Neural Network based cry detector;audio Log-Mel features;neonatal hospital scenario;acoustic conditions;eight-channel filter-and-sum beamformer;Pediatrics;Acoustics;Feature extraction;Convolution;Neural networks;Microphone arrays},\n  doi = {10.23919/EUSIPCO.2018.8553135},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438275.pdf},\n}\n\n
\n
\n\n\n
\n The amount of time an infant cries in a day helps the medical staff in the evaluation of his/her health conditions. Extracting this information requires a cry detection algorithm able to operate in environments with challenging acoustic conditions, since multiple noise sources, such as interferent cries, medical equipment, and persons may be present. This paper proposes an algorithm for detecting infant cries in such environments. The proposed solution is a multiple stage detection algorithm: the first stage is composed of an eight-channel filter-and-sum beamformer, followed by an Optimally Modified Log-Spectral Amplitude estimator (OMLSA) post-filter for reducing the effect of interferences. The second stage is the Deep Neural Network (DNN) based cry detector, having audio Log-Mel features as inputs. A synthetic dataset mimicking a real neonatal hospital scenario has been created for training the network and evaluating the performance. Additionally, a dataset containing cries acquired in a real neonatology department has been used for assessing the performance in a real scenario. The algorithm has been compared to a popular approach for voice activity detection based on Long-Term Spectral Divergence, and the results show that the proposed solution achieves superior detection performance both on synthetic data and on real data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the Most Informative Slice of Bicoherence That Characterizes Resting State Brain Connectivity.\n \n \n \n \n\n\n \n Kandemir, A. L.; and Özkurt, T. E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1382-1386, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553136,\n  author = {A. L. Kandemir and T. E. Özkurt},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {On the Most Informative Slice of Bicoherence That Characterizes Resting State Brain Connectivity},\n  year = {2018},\n  pages = {1382-1386},\n  abstract = {Bicoherence is a useful tool to detect nonlinear interactions within the brain, but it comes with a high computational cost. Recent attempts to reduce this computational cost suggest calculating a particular `slice' of the bicoherence matrix. In this study, we investigate the information content of the bicoherence matrix in resting state. We use publicly available Human Connectome Project data in our calculations. We show that the most prominent information of the bicoherence matrix is concentrated on the main diagonal, i.e. f1=f2.},\n  keywords = {biomedical MRI;magnetoencephalography;neurophysiology;informative slice;nonlinear interactions;high computational cost;bicoherence matrix;information content;human connectome project data;resting state brain connectivity;bicoherence;connectivity;quadratic phase coupling;cross-frequency coupling;neural oscillations},\n  doi = {10.23919/EUSIPCO.2018.8553136},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437841.pdf},\n}\n\n
\n
\n\n\n
\n Bicoherence is a useful tool to detect nonlinear interactions within the brain, but it comes with a high computational cost. Recent attempts to reduce this computational cost suggest calculating a particular `slice' of the bicoherence matrix. In this study, we investigate the information content of the bicoherence matrix in resting state. We use publicly available Human Connectome Project data in our calculations. We show that the most prominent information of the bicoherence matrix is concentrated on the main diagonal, i.e. f1=f2.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Parameter Domain Loudness Estimation in Parametric Audio Object Coding.\n \n \n \n \n\n\n \n Paulus, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2469-2473, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ParameterPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553137,\n  author = {J. Paulus},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Parameter Domain Loudness Estimation in Parametric Audio Object Coding},\n  year = {2018},\n  pages = {2469-2473},\n  abstract = {Parametric audio object coding employs principles of informed source separation for obtaining object reconstructions from the mixture signal used in the transport, enabling flexible output signal rendering into output scenes unknown at the encoder. Information about the object level in the rendered output is important for loudness and dynamic range control applications, e.g., in broadcast. This paper proposes a method for estimating the object level in an arbitrary output scene based on the downmix signal level that is then projected through the combined un-mixing and rendering matrix. This avoids explicit reconstruction of the objects solely for level estimation, offering computational complexity savings. In the evaluations, the proposed method shows a high estimation accuracy with a root-mean-squared error of 0.26 LUFS (loudness units relative to full scale) compared to 3.7 LUFS of the baseline with object reconstructions.},\n  keywords = {audio coding;computational complexity;loudness;matrix algebra;mean square error methods;source separation;rendering matrix;object reconstructions;parameter domain loudness estimation;parametric audio object coding;computational complexity;source separation;encoder;root-mean squared error;un-mixing matrix;Decoding;Covariance matrices;Dynamic range;Estimation;Rendering (computer graphics);Audio coding;object-based audio;parametric audio object coding;dynamic range control;loudness;spatial audio object coding;DRC;SAOC},\n  doi = {10.23919/EUSIPCO.2018.8553137},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437031.pdf},\n}\n\n
\n
\n\n\n
\n Parametric audio object coding employs principles of informed source separation for obtaining object reconstructions from the mixture signal used in the transport, enabling flexible output signal rendering into output scenes unknown at the encoder. Information about the object level in the rendered output is important for loudness and dynamic range control applications, e.g., in broadcast. This paper proposes a method for estimating the object level in an arbitrary output scene based on the downmix signal level that is then projected through the combined un-mixing and rendering matrix. This avoids explicit reconstruction of the objects solely for level estimation, offering computational complexity savings. In the evaluations, the proposed method shows a high estimation accuracy with a root-mean-squared error of 0.26 LUFS (loudness units relative to full scale) compared to 3.7 LUFS of the baseline with object reconstructions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Continuous Refocusing for Integral Microscopy with Fourier Plane Recording.\n \n \n \n \n\n\n \n Moreschini, S.; Scrofani, G.; Brcgovic, R.; Saavedra, G.; and Gotchev, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 216-220, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ContinuousPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553138,\n  author = {S. Moreschini and G. Scrofani and R. Brcgovic and G. Saavedra and A. Gotchev},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Continuous Refocusing for Integral Microscopy with Fourier Plane Recording},\n  year = {2018},\n  pages = {216-220},\n  abstract = {Integral or light field imaging is an attractive approach in microscopy, as it allows capturing 3D samples in just one shot and exploring them later through changing the focus on particular depth planes of interest. However, it requires a compromise between spatial and angular resolution on the 2D sensor recording the microscopic images. A particular setting called Fourier Integral Microscope (FIMic) allows maximizing the spatial resolution at the cost of reducing the angular one. In this work, we propose a technique that aims at reconstructing the continuous light field from sparse FIMic measurements, thus providing the functionality of continuous refocus on any arbitrary depth plane. Our main tool is the densely-sampled light field reconstruction in shearlet domain specifically tailored for the case of FIMic. The experiments demonstrate that the implemented technique yields better results compared to refocusing sparsely-sampled data.},\n  keywords = {image reconstruction;image representation;image resolution;image sampling;optical microscopy;Fourier integral microscope;densely-sampled light field reconstruction;arbitrary depth plane;continuous refocus;sparse FIMic measurements;continuous light field;microscopic images;angular resolution;spatial resolution;light field imaging;integral microscopy;continuous refocusing;Image reconstruction;Microscopy;Spatial resolution;Shearing;Lenses;Signal processing;refocusing;LF;microscopy;reconstruction;FIMic},\n  doi = {10.23919/EUSIPCO.2018.8553138},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437869.pdf},\n}\n\n
\n
\n\n\n
\n Integral or light field imaging is an attractive approach in microscopy, as it allows capturing 3D samples in just one shot and exploring them later through changing the focus on particular depth planes of interest. However, it requires a compromise between spatial and angular resolution on the 2D sensor recording the microscopic images. A particular setting called Fourier Integral Microscope (FIMic) allows maximizing the spatial resolution at the cost of reducing the angular one. In this work, we propose a technique that aims at reconstructing the continuous light field from sparse FIMic measurements, thus providing the functionality of continuous refocus on any arbitrary depth plane. Our main tool is the densely-sampled light field reconstruction in shearlet domain specifically tailored for the case of FIMic. The experiments demonstrate that the implemented technique yields better results compared to refocusing sparsely-sampled data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic Speech Translation System Selecting Target Language by Direction-of-Arrival Information.\n \n \n \n \n\n\n \n Tsujikawa, M.; Okabe, K.; Hanazawa, K.; and Kajikawa, Y.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2315-2319, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553139,\n  author = {M. Tsujikawa and K. Okabe and K. Hanazawa and Y. Kajikawa},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic Speech Translation System Selecting Target Language by Direction-of-Arrival Information},\n  year = {2018},\n  pages = {2315-2319},\n  abstract = {In this paper, we propose an automatic speech translation system that selects its target language on the basis of the direction-of-arrival (DOA) information. The system uses two microphones to detect speech signals arriving from specific directions. The target language for speech recognition is selected on the basis of the DOA. Both the speech detection and the target language selection relieve users of operations normally required for individual utterances, without a serious increase in computational cost. In a speech-recognition evaluation of the proposed system, 80% word accuracy was achieved for utterances recorded with two microphones that were 40 cm distant from speaker positions. This accuracy is nearly equivalent to that obtained when the time frame and target language of a user's speech are given in advance.},\n  keywords = {direction-of-arrival estimation;language translation;microphones;speaker recognition;speech recognition;automatic speech translation system;direction-of-arrival information;speech-recognition evaluation;speech detection;speech signals;Speech recognition;Voice activity detection;Microphones;Interference;Noise measurement;Loudspeakers;Noise robustness;automatic speech translation;speech recognition;language identification;direction of arrival;speech detection;microphone array},\n  doi = {10.23919/EUSIPCO.2018.8553139},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437018.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose an automatic speech translation system that selects its target language on the basis of the direction-of-arrival (DOA) information. The system uses two microphones to detect speech signals arriving from specific directions. The target language for speech recognition is selected on the basis of the DOA. Both the speech detection and the target language selection relieve users of operations normally required for individual utterances, without a serious increase in computational cost. In a speech-recognition evaluation of the proposed system, 80% word accuracy was achieved for utterances recorded with two microphones that were 40 cm distant from speaker positions. This accuracy is nearly equivalent to that obtained when the time frame and target language of a user's speech are given in advance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Estimating Power Spectral Density of Unmanned Aerial Vehicle Rotor Noise Using Multisensory Information.\n \n \n \n \n\n\n \n Yen, B.; Hioka, Y.; and Mace, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2434-2438, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EstimatingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553140,\n  author = {B. Yen and Y. Hioka and B. Mace},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimating Power Spectral Density of Unmanned Aerial Vehicle Rotor Noise Using Multisensory Information},\n  year = {2018},\n  pages = {2434-2438},\n  abstract = {A method to accurately estimate the power spectral density (PSD) of an unmanned aerial vehicle (UAV) is proposed, in anticipation of being used for a UAV-mounted audio recording system that clearly captures target sound while suppressing rotor noise. The method utilises UAV rotor characteristics as well as microphone recorded signals to combat practical limitations seen in a previous study. The proposed method was evaluated on a simulation platform modelled after the UAV used in the previous study. Results showed that the proposed method was able to estimate the rotor noise PSD to within 1.3-3.3 dB log spectral distortion (LSD) regardless of the presence of surrounding sound sources.},\n  keywords = {autonomous aerial vehicles;density measurement;power measurement;rotors;sensor fusion;unmanned aerial vehicle rotor noise;multisensory information;UAV-mounted audio recording system;UAV rotor characteristics;power spectral density estimation;rotor noise PSD estimation;rotor noise suppression;microphone;signal recording;log spectral distortion;LSD;sound sources;noise figure 1.3 dB to 3.3 dB;Rotors;Unmanned aerial vehicles;Array signal processing;Audio recording;Microphone arrays;Microphone array;unmanned aerial vehicle;source enhancement;power spectral density;rotor noise},\n  doi = {10.23919/EUSIPCO.2018.8553140},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433053.pdf},\n}\n\n
\n
\n\n\n
\n A method to accurately estimate the power spectral density (PSD) of an unmanned aerial vehicle (UAV) is proposed, in anticipation of being used for a UAV-mounted audio recording system that clearly captures target sound while suppressing rotor noise. The method utilises UAV rotor characteristics as well as microphone recorded signals to combat practical limitations seen in a previous study. The proposed method was evaluated on a simulation platform modelled after the UAV used in the previous study. Results showed that the proposed method was able to estimate the rotor noise PSD to within 1.3-3.3 dB log spectral distortion (LSD) regardless of the presence of surrounding sound sources.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Speech Dereverberation Using Fully Convolutional Networks.\n \n \n \n \n\n\n \n Ernst, O.; Chazan, S. E.; Gannot, S.; and Goldberger, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 390-394, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SpeechPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553141,\n  author = {O. Ernst and S. E. Chazan and S. Gannot and J. Goldberger},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Speech Dereverberation Using Fully Convolutional Networks},\n  year = {2018},\n  pages = {390-394},\n  abstract = {Speech dereverberation using a single microphone is addressed in this paper. Motivated by the recent success of the fully convolutional networks (FCN) in many image processing applications, we investigate their applicability to enhance the speech signal represented by short-time Fourier transform (STFT) images. We present two variations: a “U-Net” which is an encoder-decoder network with skip connections and a generative adversarial network (GAN) with U-Net as generator, which yields a more intuitive cost function for training. To evaluate our method we used the data from the REVERB challenge, and compared our results to other methods under the same conditions. We have found that our method outperforms the competing methods in most cases.},\n  keywords = {decoding;Fourier transforms;learning (artificial intelligence);network coding;reverberation;speech coding;STFT imaging;FCN;speech signal representation enhancement;short-time Fourier transform imaging;GAN;U-Net generator;REVERB challenge;image processing applications;single microphone;speech dereverberation;fully convolutional networks;generative adversarial network;encoder-decoder network;Spectrogram;Generative adversarial networks;Speech enhancement;Gallium nitride;Reverberation;Task analysis;Computer architecture},\n  doi = {10.23919/EUSIPCO.2018.8553141},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437413.pdf},\n}\n\n
\n
\n\n\n
\n Speech dereverberation using a single microphone is addressed in this paper. Motivated by the recent success of the fully convolutional networks (FCN) in many image processing applications, we investigate their applicability to enhance the speech signal represented by short-time Fourier transform (STFT) images. We present two variations: a “U-Net” which is an encoder-decoder network with skip connections and a generative adversarial network (GAN) with U-Net as generator, which yields a more intuitive cost function for training. To evaluate our method we used the data from the REVERB challenge, and compared our results to other methods under the same conditions. We have found that our method outperforms the competing methods in most cases.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Small variance asymptotics and bayesian nonparametrics for dictionary learning.\n \n \n \n \n\n\n \n Elvira, C.; Dang, H.; and Chainais, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1607-1611, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SmallPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553142,\n  author = {C. Elvira and H. Dang and P. Chainais},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Small variance asymptotics and bayesian nonparametrics for dictionary learning},\n  year = {2018},\n  pages = {1607-1611},\n  abstract = {Bayesian nonparametric (BNP) is an appealing framework to infer the complexity of a model along with the parameters. To this aim, sampling or variational methods are often used for inference. However, these methods come with numerical disadvantages for large-scale data. An alternative approach is to relax the probabilistic model into a non-probabilistic formulation which yields a scalable algorithm. One limitation of BNP approaches can be the cost of Monte-Carlo sampling for inference. Small-variance asymptotic (SVA) approaches pave the way to much cheaper though approximate methods for inference by taking benefit from a fruitful interaction between Bayesian models and optimization algorithms. In brief, SVA lets the variance of the noise (or residual error) distribution tend to zero in the optimization problem corresponding to a MAP estimator with finite noise variance for instance. We propose such an SVA analysis of a BNP dictionary learning (DL) approach that automatically adapts the size of the dictionary or the subspace dimension in an efficient way. Numerical experiments illustrate the efficiency of the proposed method.},\n  keywords = {approximation theory;Bayes methods;belief networks;learning (artificial intelligence);Monte Carlo methods;noise;optimisation;Bayesian nonparametric;variance asymptotics;numerical experiments;BNP dictionary learning approach;SVA analysis;finite noise variance;optimization problem;optimization algorithms;Bayesian models;approximate methods;small-variance asymptotic approaches;Monte-Carlo sampling;BNP approaches;scalable algorithm;nonprobabilistic formulation;probabilistic model;large-scale data;inference;variational methods;appealing framework;Bayes methods;Signal processing algorithms;Machine learning;Numerical models;Optimization;Dictionaries;Adaptation models;Bayesian nonparametrics;small variance asymptotic;Indian Buffet Process;sparse representations;dictionary learning;inverse problems},\n  doi = {10.23919/EUSIPCO.2018.8553142},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437823.pdf},\n}\n\n
\n
\n\n\n
\n Bayesian nonparametric (BNP) is an appealing framework to infer the complexity of a model along with the parameters. To this aim, sampling or variational methods are often used for inference. However, these methods come with numerical disadvantages for large-scale data. An alternative approach is to relax the probabilistic model into a non-probabilistic formulation which yields a scalable algorithm. One limitation of BNP approaches can be the cost of Monte-Carlo sampling for inference. Small-variance asymptotic (SVA) approaches pave the way to much cheaper though approximate methods for inference by taking benefit from a fruitful interaction between Bayesian models and optimization algorithms. In brief, SVA lets the variance of the noise (or residual error) distribution tend to zero in the optimization problem corresponding to a MAP estimator with finite noise variance for instance. We propose such an SVA analysis of a BNP dictionary learning (DL) approach that automatically adapts the size of the dictionary or the subspace dimension in an efficient way. Numerical experiments illustrate the efficiency of the proposed method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Neural-Network Supervised Maximum Likelihood-based on-line Dereverberation.\n \n \n \n \n\n\n \n Mosayyebpour, S.; and Nesta, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1552-1556, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Neural-NetworkPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553143,\n  author = {S. Mosayyebpour and F. Nesta},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Neural-Network Supervised Maximum Likelihood-based on-line Dereverberation},\n  year = {2018},\n  pages = {1552-1556},\n  abstract = {In this paper, a new online multiple-input multiple-output (MIMO) approach based on Maximum Likelihood (ML) in subband-domain for dereverberation is proposed. Multichannel linear prediction filters are estimated to blindly shorten the Room Impulse Responses (RIRs) between an unknown number of sources and a microphone array. The adaptive filter is updated using a modified weighted recursive Least Squares (RLS). To speed up convergence and minimize the influence of noise, the adaptive algorithm is supervised by a trained Deep Neural Network (DNN) which predicts the source dominance. In our experiments, it is proved that the proposed method can largely reduce the effect of reverberation in high non-stationary noisy conditions and sensibly improve automatic speech recognition performance in far-field and high reverberation.},\n  keywords = {adaptive filters;filtering theory;least squares approximations;maximum likelihood estimation;microphone arrays;neural nets;reverberation;transient response;multiple-input multiple-output approach;subband-domain;multichannel linear prediction filters;adaptive filter;adaptive algorithm;room impulse responses;trained deep neural network;modified weighted recursive least squares;neural-network supervised maximum likelihood-based on-line dereverberation;RLS;ML;DNN;RIR;Reverberation;Noise measurement;Cost function;Estimation;Convergence;Training;multiple-input multiple-output (MIMO);Maximum Likelihood (ML);dereverberation;recursive Least Squares (RLS);Deep Neural Network (DNN)},\n  doi = {10.23919/EUSIPCO.2018.8553143},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436916.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a new online multiple-input multiple-output (MIMO) approach based on Maximum Likelihood (ML) in subband-domain for dereverberation is proposed. Multichannel linear prediction filters are estimated to blindly shorten the Room Impulse Responses (RIRs) between an unknown number of sources and a microphone array. The adaptive filter is updated using a modified weighted recursive Least Squares (RLS). To speed up convergence and minimize the influence of noise, the adaptive algorithm is supervised by a trained Deep Neural Network (DNN) which predicts the source dominance. In our experiments, it is proved that the proposed method can largely reduce the effect of reverberation in high non-stationary noisy conditions and sensibly improve automatic speech recognition performance in far-field and high reverberation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Model Based Estimation of STP Parameters for Binaural Speech Enhancement.\n \n \n \n \n\n\n \n Kavalekalam, M. S.; Nielsen, J. K.; Christensen, M. G.; and Boldt, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2479-2483, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ModelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553145,\n  author = {M. S. Kavalekalam and J. K. Nielsen and M. G. Christensen and J. Boldt},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Model Based Estimation of STP Parameters for Binaural Speech Enhancement},\n  year = {2018},\n  pages = {2479-2483},\n  abstract = {This paper deals with the estimation of the short-term predictor (STP) parameters of speech and noise in a binaural framework. A binaural model based approach is proposed for estimating the power spectral density (PSD) of speech and noise at the individual ears for an arbitrary position of the speech source. The estimated PSDs can be subsequently used for enhancement in a binaural framework. The experimental results show that taking into account the position of the speech source using the proposed method leads to improved modelling and enhancement of the noisy speech.},\n  keywords = {estimation theory;parameter estimation;speech enhancement;binaural speech enhancement;binaural model based approach;speech source;power spectral density estimation;PSD estimation;noisy speech enhancement;STP parameter estimation;short-term predictor parameter estimation;Ear;Speech enhancement;Maximum likelihood estimation;Noise measurement;Speech coding;autoregressive modelling;binaural speech enhancement},\n  doi = {10.23919/EUSIPCO.2018.8553145},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437186.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the estimation of the short-term predictor (STP) parameters of speech and noise in a binaural framework. A binaural model based approach is proposed for estimating the power spectral density (PSD) of speech and noise at the individual ears for an arbitrary position of the speech source. The estimated PSDs can be subsequently used for enhancement in a binaural framework. The experimental results show that taking into account the position of the speech source using the proposed method leads to improved modelling and enhancement of the noisy speech.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Novel Online Generalized Possibilistic Clustering Algorithm for Big Data Processing.\n \n \n \n \n\n\n \n Xenaki, S. D.; Koutroumbas, K. D.; and Rontogiannis, A. A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2628-2632, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553146,\n  author = {S. D. Xenaki and K. D. Koutroumbas and A. A. Rontogiannis},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Novel Online Generalized Possibilistic Clustering Algorithm for Big Data Processing},\n  year = {2018},\n  pages = {2628-2632},\n  abstract = {In this paper a novel efficient online possibilistic c-means clustering algorithm, called Online Generalized Adaptive Possibilistic C-Means (O-GAPCM), is presented. The algorithm extends the abilities of the Adaptive Possibilistic C-Means (APCM) algorithm, allowing the study of cases where the data form compact and hyper-ellipsoidally shaped clusters in the feature space. In addition, the algorithm performs online processing, that is the data vectors are processed one-by-one and their impact is memorized to suitably defined parameters. It also embodies new procedures for creating new clusters and merging existing ones. Thus, O-GAPCM is able to unravel on its own the number and the actual hyper-ellipsoidal shape of the physical clusters formed by the data. Experimental results verify the effectiveness of O-GAPCM both in terms of accuracy and time efficiency.},\n  keywords = {Big Data;pattern clustering;possibility theory;vectors;O-GAPCM;physical clusters;big data processing;data vectors;online generalized adaptive possibilistic c-means;online generalized possibilistic clustering;APCM algorithm;hyper-ellipsoidally shaped clusters;feature space;Clustering algorithms;Signal processing algorithms;Phase change materials;Europe;Shape;Signal processing;Merging;possibilistic clustering;online clustering;parameter adaptivity;hyperspectral imaging},\n  doi = {10.23919/EUSIPCO.2018.8553146},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439026.pdf},\n}\n\n
\n
\n\n\n
\n In this paper a novel efficient online possibilistic c-means clustering algorithm, called Online Generalized Adaptive Possibilistic C-Means (O-GAPCM), is presented. The algorithm extends the abilities of the Adaptive Possibilistic C-Means (APCM) algorithm, allowing the study of cases where the data form compact and hyper-ellipsoidally shaped clusters in the feature space. In addition, the algorithm performs online processing, that is the data vectors are processed one-by-one and their impact is memorized to suitably defined parameters. It also embodies new procedures for creating new clusters and merging existing ones. Thus, O-GAPCM is able to unravel on its own the number and the actual hyper-ellipsoidal shape of the physical clusters formed by the data. Experimental results verify the effectiveness of O-GAPCM both in terms of accuracy and time efficiency.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improved Adaptive Importance Sampling Based on Variational Inference.\n \n \n \n \n\n\n \n Dowling, M.; Nassar, J.; Djurić, P. M.; and Bugallo, M. F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1632-1636, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553147,\n  author = {M. Dowling and J. Nassar and P. M. Djurić and M. F. Bugallo},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Improved Adaptive Importance Sampling Based on Variational Inference},\n  year = {2018},\n  pages = {1632-1636},\n  abstract = {In Monte Carlo-based Bayesian inference, it is important to generate samples from a target distribution, which are then used, e.g., to compute expectations with respect to the target distribution. Quite often, the target distribution is the posterior of parameters of interest, and drawing samples from it can be exceedingly difficult. Monte Carlo-based methods, like adaptive importance sampling (AIS), are built on the importance sampling principle to approximate a target distribution using a set of samples and their corresponding weights. Variational inference (VI) attempts to approximate the posterior by minimizing the Kullback-Leibler divergence (KLD) between the posterior and a set of simpler parametric distributions. While AIS often performs well, it struggles to approximate multimodal distributions and suffers when applied to high dimensional problems. By contrast, VI is fast and scales well with the dimension, but typically underestimates the variance of the target distribution. In this paper, we combine both methods to overcome their individual drawbacks and create an efficient and robust novel technique for drawing better samples from a target distribution. Our contribution is two-fold. First, we show how to do a smart initialization of AIS using VI. Second, we propose a method for adapting the parameters of the proposal distributions of the AIS, where the adaptation depends on the performance of the VI step. Computer simulations reveal that the new method improves the performance of the individual methods and shows promise to be applied to challenging scenarios.},\n  keywords = {approximation theory;Bayes methods;importance sampling;inference mechanisms;adaptive importance sampling;variational inference;Monte Carlo-based Bayesian inference;target distribution;drawing samples;AIS;parametric distributions;multimodal distributions;Kullback-Leibler divergence;KLD;high dimensional problems;Artificial intelligence;Monte Carlo methods;Proposals;Signal processing algorithms;Standards;Europe;Signal processing;Adaptive importance sampling;Markov chain Monte Carlo;variational inference;Bayesian inference},\n  doi = {10.23919/EUSIPCO.2018.8553147},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437430.pdf},\n}\n\n
\n
\n\n\n
\n In Monte Carlo-based Bayesian inference, it is important to generate samples from a target distribution, which are then used, e.g., to compute expectations with respect to the target distribution. Quite often, the target distribution is the posterior of parameters of interest, and drawing samples from it can be exceedingly difficult. Monte Carlo-based methods, like adaptive importance sampling (AIS), are built on the importance sampling principle to approximate a target distribution using a set of samples and their corresponding weights. Variational inference (VI) attempts to approximate the posterior by minimizing the Kullback-Leibler divergence (KLD) between the posterior and a set of simpler parametric distributions. While AIS often performs well, it struggles to approximate multimodal distributions and suffers when applied to high dimensional problems. By contrast, VI is fast and scales well with the dimension, but typically underestimates the variance of the target distribution. In this paper, we combine both methods to overcome their individual drawbacks and create an efficient and robust novel technique for drawing better samples from a target distribution. Our contribution is two-fold. First, we show how to do a smart initialization of AIS using VI. Second, we propose a method for adapting the parameters of the proposal distributions of the AIS, where the adaptation depends on the performance of the VI step. Computer simulations reveal that the new method improves the performance of the individual methods and shows promise to be applied to challenging scenarios.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Sparse Beamforming for mm Wave Spectrum Sharing Systems.\n \n \n \n\n\n \n Vázquez, M. Á.; Blanco, L.; and Pérez-Neira, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1517-1521, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553148,\n  author = {M. Á. Vázquez and L. Blanco and A. Pérez-Neira},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse Beamforming for mm Wave Spectrum Sharing Systems},\n  year = {2018},\n  pages = {1517-1521},\n  abstract = {This paper deals with the problem of sparse beamforming for interference mitigation in millimeter wave (mm Wave) frequency bands. Multiantenna solutions in mmWave are generally implemented with phase shifters which are known to have good interference rejection capabilities. However, phase shifters are power-demanding components and, depending on their resolution, they require bulky hardware solutions able to accommodate the control lines. On the other hand, the use of switches leads to a cost-efficient alternative able to provide a sufficiently large interference rejection while substantially reducing the hardware cost and power consumption. This work proposes a beamforming scheme able to maximize the signal-to-interference-plus-noise ratio (SINR) assuming that the beamforming weights can only take 0 and 1 values. The resulting optimization problem is a binary quadratic fractional problem which is a difficult non-convex problem. Two optimization approaches are proposed; namely, the semidefinite relaxation and the penalized convex-concave procedure. We show that both techniques behave well in the considered scenarios and their performance is close to the optimization problem upper bound value.},\n  keywords = {array signal processing;concave programming;convex programming;interference suppression;millimetre wave communication;radio spectrum management;nonconvex problem;binary quadratic fractional problem;beamforming weights;signal-to-interference-plus-noise ratio;beamforming scheme;power consumption;control lines;power-demanding components;good interference rejection capabilities;phase shifters;multiantenna solutions;interference mitigation;mm wave spectrum;sparse beamforming;Optimization;Interference;Signal to noise ratio;Array signal processing;Antenna arrays;Phase shifters;Receiving antennas},\n  doi = {10.23919/EUSIPCO.2018.8553148},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n This paper deals with the problem of sparse beamforming for interference mitigation in millimeter wave (mm Wave) frequency bands. Multiantenna solutions in mmWave are generally implemented with phase shifters which are known to have good interference rejection capabilities. However, phase shifters are power-demanding components and, depending on their resolution, they require bulky hardware solutions able to accommodate the control lines. On the other hand, the use of switches leads to a cost-efficient alternative able to provide a sufficiently large interference rejection while substantially reducing the hardware cost and power consumption. This work proposes a beamforming scheme able to maximize the signal-to-interference-plus-noise ratio (SINR) assuming that the beamforming weights can only take 0 and 1 values. The resulting optimization problem is a binary quadratic fractional problem which is a difficult non-convex problem. Two optimization approaches are proposed; namely, the semidefinite relaxation and the penalized convex-concave procedure. We show that both techniques behave well in the considered scenarios and their performance is close to the optimization problem upper bound value.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multichannel Audio Front-End for Far-Field Automatic Speech Recognition.\n \n \n \n \n\n\n \n Chhetri, A.; Hilmes, P.; Kristjansson, T.; Chu, W.; Mansour, M.; Li, X.; and Zhang, X.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1527-1531, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"MultichannelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553149,\n  author = {A. Chhetri and P. Hilmes and T. Kristjansson and W. Chu and M. Mansour and X. Li and X. Zhang},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Multichannel Audio Front-End for Far-Field Automatic Speech Recognition},\n  year = {2018},\n  pages = {1527-1531},\n  abstract = {Far-field automatic speech recognition (ASR) is a key enabling technology that allows untethered and natural voice interaction between users and Amazon Echo family of products. A key component in realizing far-field ASR on these products is the suite of audio front-end (AFE) algorithms that helps in mitigating acoustic environmental challenges and thereby improving the ASR performance. In this paper, we discuss the key algorithms within the AFE, and we provide insights into how these algorithms help in mitigating the various acoustical challenges for far-field processing. We also provide insights into the audio algorithm architecture adopted for the AFE, and we discuss ongoing and future research.},\n  keywords = {acoustic signal processing;neural nets;speech recognition;audio algorithm architecture;far-field automatic speech recognition;natural voice interaction;far-field ASR;audio front-end algorithms;Amazon echo family;mitigating acoustic environment;multichannel audio front-end;AFE algorithms;deep neural networks;Acoustics;Signal processing algorithms;Array signal processing;Engines;Measurement;Microphone arrays;Beamforming;far-field;AFE;deep neural networks;ASR;Amazon Echo},\n  doi = {10.23919/EUSIPCO.2018.8553149},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570431613.pdf},\n}\n\n
\n
\n\n\n
\n Far-field automatic speech recognition (ASR) is a key enabling technology that allows untethered and natural voice interaction between users and Amazon Echo family of products. A key component in realizing far-field ASR on these products is the suite of audio front-end (AFE) algorithms that helps in mitigating acoustic environmental challenges and thereby improving the ASR performance. In this paper, we discuss the key algorithms within the AFE, and we provide insights into how these algorithms help in mitigating the various acoustical challenges for far-field processing. We also provide insights into the audio algorithm architecture adopted for the AFE, and we discuss ongoing and future research.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Light Field Image Coding using High Order Prediction Training.\n \n \n \n \n\n\n \n Monteiro, R. J. S.; Nunes, P. J. L.; Faria, S. M. M.; and Rodrigues, N. M. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1845-1849, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"LightPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553150,\n  author = {R. J. S. Monteiro and P. J. L. Nunes and S. M. M. Faria and N. M. M. Rodrigues},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Light Field Image Coding using High Order Prediction Training},\n  year = {2018},\n  pages = {1845-1849},\n  abstract = {This paper proposes a new method for light field image coding relying on a high order prediction mode based on a training algorithm. The proposed approach is applied as an Intra prediction method based on a two-stage block-wise high order prediction model that supports geometric transformations up to eight degrees of freedom. Light field images comprise an array of micro-images that are related by complex perspective deformations that cannot be efficiently compensated by state-of-the-art image coding techniques, which are usually based on low order translational prediction models. The proposed prediction mode is able to exploit the non-local spatial redundancy introduced by light field image structure and a training algorithm is applied on different micro-images that are available in the reference region aiming at reducing the amount of signaling data sent to the receiver. The training direction that generates the most efficient geometric transformation for the current block is determined in the encoder side and signaled to the decoder using an index. The decoder is therefore able to repeat the high order prediction training to generate the desired geometric transformation. Experimental results show bitrate savings up to 12.57% and 50.03% relative to a light field image coding solution based on low order prediction without training and HEVC, respectively.},\n  keywords = {data compression;image coding;redundancy;high order prediction training;high order prediction mode;Intra prediction method;light field image structure;geometric transformation;light field image coding;low order translational prediction;state-of-the-art image coding;two-stage block-wise high order prediction;microimage array;complex perspective deformations;nonlocal spatial redundancy;encoder side;signaling data;Light Field Image Coding;HEVC;High Order Prediction Training;Geometric Transformations},\n  doi = {10.23919/EUSIPCO.2018.8553150},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437362.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a new method for light field image coding relying on a high order prediction mode based on a training algorithm. The proposed approach is applied as an Intra prediction method based on a two-stage block-wise high order prediction model that supports geometric transformations up to eight degrees of freedom. Light field images comprise an array of micro-images that are related by complex perspective deformations that cannot be efficiently compensated by state-of-the-art image coding techniques, which are usually based on low order translational prediction models. The proposed prediction mode is able to exploit the non-local spatial redundancy introduced by light field image structure and a training algorithm is applied on different micro-images that are available in the reference region aiming at reducing the amount of signaling data sent to the receiver. The training direction that generates the most efficient geometric transformation for the current block is determined in the encoder side and signaled to the decoder using an index. The decoder is therefore able to repeat the high order prediction training to generate the desired geometric transformation. Experimental results show bitrate savings up to 12.57% and 50.03% relative to a light field image coding solution based on low order prediction without training and HEVC, respectively.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Generation of Random Signals with Prescribed Probability Distribution and Spectral Bandwidth via Ergodic Transformations.\n \n \n \n \n\n\n \n McDonald, A. M.; and van Wyk , M. A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 331-335, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553152,\n  author = {A. M. McDonald and M. A. {van Wyk}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Generation of Random Signals with Prescribed Probability Distribution and Spectral Bandwidth via Ergodic Transformations},\n  year = {2018},\n  pages = {331-335},\n  abstract = {A novel random signal generator design that accommodates the specification of both the sample probability distribution as well as the signal bandwidth is presented in this paper. The generator achieves a high degree of computational efficiency through the nonlinear transformation of trajectories produced by a discrete-time dynamical system which has an ergodic map as evolution rule. The ergodic map is designed using a recently proposed solution of the inverse Frobenius-Perron problem that allows for the selection of the map's invariant distribution as well as its spectral characteristics. The nonlinear transformation is obtained via a novel piecewise polynomial fitting algorithm, which facilitates the approximation of absolutely continuous probability distributions over compact support with greater accuracy than existing techniques. Numerical experiments indicate that the proposed design achieves a reduction in signal generation time of up to 22% compared to a conventional generator, while at the same time using a smaller lookup table, maintaining a comparable level of accuracy, and offering flexibility in the selection of the signal bandwidth. It is concluded that the proposed approach is suitable for signal generation in applications where low computational complexity is a critical requirement.},\n  keywords = {chaos;computational complexity;polynomials;signal generators;statistical distributions;table lookup;inverse Frobenius-Perron problem;spectral characteristics;nonlinear transformation;absolutely continuous probability distributions;signal generation time;signal bandwidth;random signals;spectral bandwidth;ergodic transformations;high degree;computational efficiency;discrete-time dynamical system;ergodic map;evolution rule;probability distribution;random signal generator design;piecewise polynomial fitting algorithm;Generators;Signal generators;Bandwidth;Eigenvalues and eigenfunctions;Markov processes;Probability distribution;Approximation algorithms},\n  doi = {10.23919/EUSIPCO.2018.8553152},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439331.pdf},\n}\n\n
\n
\n\n\n
\n A novel random signal generator design that accommodates the specification of both the sample probability distribution as well as the signal bandwidth is presented in this paper. The generator achieves a high degree of computational efficiency through the nonlinear transformation of trajectories produced by a discrete-time dynamical system which has an ergodic map as evolution rule. The ergodic map is designed using a recently proposed solution of the inverse Frobenius-Perron problem that allows for the selection of the map's invariant distribution as well as its spectral characteristics. The nonlinear transformation is obtained via a novel piecewise polynomial fitting algorithm, which facilitates the approximation of absolutely continuous probability distributions over compact support with greater accuracy than existing techniques. Numerical experiments indicate that the proposed design achieves a reduction in signal generation time of up to 22 % compared to a conventional generator, while at the same time using a smaller lookup table, maintaining a comparable level of accuracy, and offering flexibility in the selection of the signal bandwidth. It is concluded that the proposed approach is suitable for signal generation in applications where low computational complexity is a critical requirement.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Electrolaryngeal Speech Enhancement with Statistical Voice Conversion based on CLDNN.\n \n \n \n \n\n\n \n Kobayashi, K.; and Toda, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2115-2119, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ElectrolaryngealPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553154,\n  author = {K. Kobayashi and T. Toda},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Electrolaryngeal Speech Enhancement with Statistical Voice Conversion based on CLDNN},\n  year = {2018},\n  pages = {2115-2119},\n  abstract = {An electrolarynx (EL) is a widely used device to mechanically generate excitation signals, making it possible for laryngectomees to produce EL speech without vocal fold vibrations. Although EL speech sounds relatively intelligible, it is significantly less natural than normal speech owing to its mechanical excitation signals. To address this issue, a statistical voice conversion (VC) technique based on Gaussian mixture models (GMMs) has been applied to EL speech enhancement. In this technique, input EL speech is converted into target normal speech by converting spectral features of the EL speech into spectral and excitation parameters of normal speech using GMMs. Although this technique makes it possible to significantly improve the naturalness of EL speech, the enhanced EL speech is still far from the target normal speech. To improve the performance of statistical EL speech enhancement, in this paper, we propose an EL-to-speech conversion method based on CLDNNs consisting of convolutional layers, long short-term memory recurrent layers, and fully connected deep neural network layers. Three CLDNNs are trained, one to convert EL speech spectral features into spectral and band-aperiodicity parameters, one to convert them into unvoiced/voiced symbols, and one to convert them into continuous F0 patterns. 
The experimental results demonstrate that the proposed method significantly outperforms the conventional method in terms of both objective evaluation metrics and subjective evaluation scores.},\n  keywords = {Gaussian processes;mixture models;recurrent neural nets;speech enhancement;statistical analysis;electrolaryngeal speech enhancement;statistical voice conversion technique;statistical EL speech enhancement;EL-to-speech conversion method;EL speech spectral features;CLDNN;laryngectomees;vocal fold vibrations;mechanical excitation signals;statistical VC technique;Gaussian mixture models;GMM;long short-term memory recurrent layers;deep neural network layers;band-aperiodicity parameters;spectral-aperiodicity parameters;continuous F0 patterns;Speech enhancement;Acoustics;Feature extraction;Natural languages;Training;Europe;electrolaryngeal speech;statistical voice conversion;speech enhancement;deep neural network},\n  doi = {10.23919/EUSIPCO.2018.8553154},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439378.pdf},\n}\n\n
\n
\n\n\n
\n An electrolarynx (EL) is a widely used device to mechanically generate excitation signals, making it possible for laryngectomees to produce EL speech without vocal fold vibrations. Although EL speech sounds relatively intelligible, it is significantly less natural than normal speech owing to its mechanical excitation signals. To address this issue, a statistical voice conversion (VC) technique based on Gaussian mixture models (GMMs) has been applied to EL speech enhancement. In this technique, input EL speech is converted into target normal speech by converting spectral features of the EL speech into spectral and excitation parameters of normal speech using GMMs. Although this technique makes it possible to significantly improve the naturalness of EL speech, the enhanced EL speech is still far from the target normal speech. To improve the performance of statistical EL speech enhancement, in this paper, we propose an EL-to-speech conversion method based on CLDNNs consisting of convolutional layers, long short-term memory recurrent layers, and fully connected deep neural network layers. Three CLDNNs are trained, one to convert EL speech spectral features into spectral and band-aperiodicity parameters, one to convert them into unvoiced/voiced symbols, and one to convert them into continuous F0 patterns. The experimental results demonstrate that the proposed method significantly outperforms the conventional method in terms of both objective evaluation metrics and subjective evaluation scores.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Machine Learning Based Indoor Localization Using a Representative k-Nearest-Neighbor Classifier on a Low-Cost IoT-Hardware.\n \n \n \n \n\n\n \n Dziubany, M.; Machhamer, R.; Laux, H.; Schmeink, A.; Gollmer, K.; Burger, G.; and Dartmann, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2050-2054, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"MachinePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553155,\n  author = {M. Dziubany and R. Machhamer and H. Laux and A. Schmeink and K. Gollmer and G. Burger and G. Dartmann},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Machine Learning Based Indoor Localization Using a Representative k-Nearest-Neighbor Classifier on a Low-Cost IoT-Hardware},\n  year = {2018},\n  pages = {2050-2054},\n  abstract = {In order to make Internet of Things (IoT) applications easily available and cheap, simple sensors and devices have to be offered. To make this possible, our vision is to use simple hardware for measurements and to shift more of the signal processing and data analysis effort to the cloud. In this paper, we present a machine learning algorithm and a simple technical implementation on a hardware platform for the localization of a low accuracy microphone via room impulse response. We give a proof-of-concept via a field test by localization of multiple positions of the IoT device. The field test shows that the recorded signals from the same source are unique at any position in a room due to unique reflections. In contrast to other methods, there is no need for high accuracy microphone arrays, albeit at the expense of multiple measurements and training samples. 
Our representative k-nearest-neighbor algorithm (RKNN) classifies a recording using a k-nearest-neighbor method (KNN) after choosing representatives for the KNN classifier, which reduces computing time and memory of the KNN classifier.},\n  keywords = {data analysis;Internet of Things;learning (artificial intelligence);microphone arrays;pattern classification;Internet of Things applications;technical implementation;KNN classifier;multiple measurements;high accuracy microphone arrays;IoT device;multiple positions;proof-of-concept;room impulse response;low accuracy microphone;hardware platform;machine learning algorithm;data analysis;signal processing;low-cost IoT-hardware;representative k-nearest-neighbor classifier;indoor localization;Training data;Hardware;Microphones;Signal processing algorithms;Internet of Things;Signal processing;Machine learning algorithms},\n  doi = {10.23919/EUSIPCO.2018.8553155},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437747.pdf},\n}\n\n
\n
\n\n\n
\n In order to make Internet of Things (IoT) applications easily available and cheap, simple sensors and devices have to be offered. To make this possible, our vision is to use simple hardware for measurements and to shift more of the signal processing and data analysis effort to the cloud. In this paper, we present a machine learning algorithm and a simple technical implementation on a hardware platform for the localization of a low accuracy microphone via room impulse response. We give a proof-of-concept via a field test by localization of multiple positions of the IoT device. The field test shows that the recorded signals from the same source are unique at any position in a room due to unique reflections. In contrast to other methods, there is no need for high accuracy microphone arrays, albeit at the expense of multiple measurements and training samples. Our representative k-nearest-neighbor algorithm (RKNN) classifies a recording using a k-nearest-neighbor method (KNN) after choosing representatives for the KNN classifier, which reduces the computing time and memory of the KNN classifier.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Real-Time Deep Learning Method for Abandoned Luggage Detection in Video.\n \n \n \n \n\n\n \n Smeureanu, S.; and Ionescu, R. T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1775-1779, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Real-TimePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553156,\n  author = {S. Smeureanu and R. T. Ionescu},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Real-Time Deep Learning Method for Abandoned Luggage Detection in Video},\n  year = {2018},\n  pages = {1775-1779},\n  abstract = {Recent terrorist attacks in major cities around the world have brought many casualties among innocent citizens. One potential threat is represented by abandoned luggage items (that could contain bombs or biological warfare) in public areas. In this paper, we describe an approach for real-time automatic detection of abandoned luggage in video captured by surveillance cameras. The approach is comprised of two stages: (i) static object detection based on background subtraction and motion estimation and (ii) abandoned luggage recognition based on a cascade of convolutional neural networks (CNN). To train our neural networks we provide two types of examples: images collected from the Internet and realistic examples generated by imposing various suitcases and bags over the scene's background. We present empirical results demonstrating that our approach yields better performance than a strong CNN baseline method.},\n  keywords = {government data processing;learning (artificial intelligence);motion estimation;neural nets;object detection;terrorism;video surveillance;abandoned luggage detection;abandoned luggage items;biological warfare;public areas;surveillance cameras;static object detection;background subtraction;motion estimation;abandoned luggage recognition;convolutional neural networks;terrorist attacks;CNN baseline method;Object detection;Convolutional neural networks;Cameras;Real-time systems;Pipelines;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553156},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435791.pdf},\n}\n\n
\n
\n\n\n
\n Recent terrorist attacks in major cities around the world have brought many casualties among innocent citizens. One potential threat is represented by abandoned luggage items (that could contain bombs or biological warfare agents) in public areas. In this paper, we describe an approach for real-time automatic detection of abandoned luggage in video captured by surveillance cameras. The approach comprises two stages: (i) static object detection based on background subtraction and motion estimation and (ii) abandoned luggage recognition based on a cascade of convolutional neural networks (CNN). To train our neural networks we provide two types of examples: images collected from the Internet and realistic examples generated by imposing various suitcases and bags over the scene's background. We present empirical results demonstrating that our approach yields better performance than a strong CNN baseline method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Phylogenetic Analysis of Multimedia Codec Software.\n \n \n \n \n\n\n \n Verde, S.; Milani, S.; and Calvagno, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1427-1431, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PhylogeneticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553158,\n  author = {S. Verde and S. Milani and G. Calvagno},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Phylogenetic Analysis of Multimedia Codec Software},\n  year = {2018},\n  pages = {1427-1431},\n  abstract = {The phylogenetic analysis of multimedia contents (texts, images, videos and audio tracks) has been extensively investigated during the last years. Nevertheless, no research works have considered reconstructing the evolution and the dependencies among different releases of a given software. The current paper presents a first attempt in this direction considering different image and video codecs. The proposed solution codes a set of input images and videos, builds a dissimilarity matrix from the resulting coding artifacts, and then estimates the corresponding Software Phylogenetic Tree (SPT) using a minimum spanning tree algorithm. Experimental results show that the obtained accuracy is quite promising for different configurations.},\n  keywords = {codecs;multimedia systems;trees (mathematics);video codecs;phylogenetic analysis;multimedia contents;audio tracks;video codecs;solution codes;input images;minimum spanning tree algorithm;multimedia codec software;software phylogenetic tree;Software;Phylogeny;Image reconstruction;Codecs;Measurement;Streaming media;Transform coding;phylogeny;video coding;image coding;software identification;multimedia forensics},\n  doi = {10.23919/EUSIPCO.2018.8553158},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439025.pdf},\n}\n\n
\n
\n\n\n
\n The phylogenetic analysis of multimedia contents (texts, images, videos and audio tracks) has been extensively investigated in recent years. Nevertheless, no prior work has considered reconstructing the evolution of, and the dependencies among, different releases of a given software package. The current paper presents a first attempt in this direction, considering different image and video codecs. The proposed solution codes a set of input images and videos, builds a dissimilarity matrix from the resulting coding artifacts, and then estimates the corresponding Software Phylogenetic Tree (SPT) using a minimum spanning tree algorithm. Experimental results show that the obtained accuracy is quite promising for different configurations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimizing Approximate Message Passing for Variable Measurement Noise.\n \n \n \n \n\n\n \n Birgmeier, S. C.; and Goertz, N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 484-488, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OptimizingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553159,\n  author = {S. C. Birgmeier and N. Goertz},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Optimizing Approximate Message Passing for Variable Measurement Noise},\n  year = {2018},\n  pages = {484-488},\n  abstract = {The standard Approximate Message Passing (AMP) algorithm optionally considers i.i.d. measurement noise. The governing parameter is the noise variance. When the noise is independent, but not identically distributed, applying AMP with the noise variance parameter set to the average of the actual noise variance results in significantly degraded performance. We propose a modified AMP algorithm called AMP-VN which improves performance for known noise variances.},\n  keywords = {approximation theory;measurement errors;measurement uncertainty;message passing;optimisation;signal processing;variable measurement noise;noise variance parameter;modified AMP algorithm;AMP-VN;approximate message passing algorithm;optimizing approximate message passing;Message passing;Signal processing algorithms;Noise measurement;Europe;Signal processing;Standards;Sparse matrices},\n  doi = {10.23919/EUSIPCO.2018.8553159},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437287.pdf},\n}\n\n
\n
\n\n\n
\n The standard Approximate Message Passing (AMP) algorithm optionally considers i.i.d. measurement noise. The governing parameter is the noise variance. When the noise is independent, but not identically distributed, applying AMP with the noise variance parameter set to the average of the actual noise variances results in significantly degraded performance. We propose a modified AMP algorithm, called AMP-VN, which improves performance for known noise variances.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n PRNU-based Image Classification of Origin Social Network with CNN.\n \n \n \n \n\n\n \n Caldelli, R.; Amerini, I.; and Li, C. T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1357-1361, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PRNU-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553160,\n  author = {R. Caldelli and I. Amerini and C. T. Li},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {PRNU-based Image Classification of Origin Social Network with CNN},\n  year = {2018},\n  pages = {1357-1361},\n  abstract = {A huge amount of images are continuously shared on social networks (SNs) daily and, in most of cases, it is very difficult to reliably establish the SN of provenance of an image when it is recovered from a hard disk, a SD card or a smartphone memory. During an investigation, it could be crucial to be able to distinguish images coming directly from a photo-camera with respect to those downloaded from a social network and possibly, in this last circumstance, determining which is the SN among a defined group. It is well known that each SN leaves peculiar traces on each content during the upload-download process; such traces can be exploited to make image classification. In this work, the idea is to use the PRNU, embedded in every acquired images, as the “carrier” of the particular SN traces which diversely modulate the PRNU. We demonstrate, in this paper, that SN-modulated noise residual can be adopted as a feature to detect the social network of origin by means of a trained convolutional neural network (CNN).},\n  keywords = {cameras;convolution;feedforward neural nets;image classification;object detection;smart phones;social networking (online);SN-modulated noise residual;CNN;PRNU-based image classification;hard disk;upload-download process;convolutional neural network;social network;smartphone memory;images acquisition;Cameras;Transform coding;Facebook;Training;Feature extraction;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553160},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437218.pdf},\n}\n\n
\n
\n\n\n
\n A huge number of images is continuously shared on social networks (SNs) daily and, in most cases, it is very difficult to reliably establish the SN of provenance of an image when it is recovered from a hard disk, an SD card or a smartphone memory. During an investigation, it could be crucial to be able to distinguish images coming directly from a photo-camera from those downloaded from a social network and possibly, in this last circumstance, to determine which SN it came from among a defined group. It is well known that each SN leaves peculiar traces on each content during the upload-download process; such traces can be exploited to perform image classification. In this work, the idea is to use the PRNU, embedded in every acquired image, as the “carrier” of the particular SN traces which diversely modulate the PRNU. We demonstrate, in this paper, that SN-modulated noise residual can be adopted as a feature to detect the social network of origin by means of a trained convolutional neural network (CNN).\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Residual Encoder-Decoder Network for Semantic Segmentation in Autonomous Driving Scenarios.\n \n \n \n \n\n\n \n Naresh, Y. G.; Little, S.; and O'Connor, N. E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1052-1056, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553161,\n  author = {Y. G. Naresh and S. Little and N. E. O'Connor},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Residual Encoder-Decoder Network for Semantic Segmentation in Autonomous Driving Scenarios},\n  year = {2018},\n  pages = {1052-1056},\n  abstract = {In this paper, we propose an encoder-decoder based deep convolutional network for semantic segmentation in autonomous driving scenarios. The architecture of the proposed model is based on VGG16 [1]. Residual learning is introduced to preserve the context while decreasing the size of feature maps between the stacks of convolutional layers. Also, the resolution is preserved through shortcuts from the encoder stage to the decoder stage. Experiments are conducted on popular benchmark datasets CamVid and CityScapes to demonstrate the efficacy of the proposed model. The experiments are corroborated with comparative analysis with popular encoder-decoder networks such as SegNet and Enet architectures demonstrating that the proposed approach outperforms existing methods despite having fewer trainable parameters.},\n  keywords = {computer vision;decoding;feedforward neural nets;image segmentation;residual encoder-decoder network;semantic segmentation;autonomous driving scenarios;encoder-decoder based deep convolutional network;residual learning;feature maps;convolutional layers;encoder stage;decoder stage;VGG16;benchmark datasets CamVid;CityScapes;Convolution;Semantics;Image segmentation;Decoding;Training;Europe;Autonomous vehicles},\n  doi = {10.23919/EUSIPCO.2018.8553161},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439315.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose an encoder-decoder based deep convolutional network for semantic segmentation in autonomous driving scenarios. The architecture of the proposed model is based on VGG16 [1]. Residual learning is introduced to preserve the context while decreasing the size of feature maps between the stacks of convolutional layers. Also, the resolution is preserved through shortcuts from the encoder stage to the decoder stage. Experiments are conducted on popular benchmark datasets CamVid and CityScapes to demonstrate the efficacy of the proposed model. The experiments are corroborated with comparative analysis with popular encoder-decoder networks such as SegNet and Enet architectures demonstrating that the proposed approach outperforms existing methods despite having fewer trainable parameters.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Skeleton-Based Action Recognition Based on Deep Learning and Grassmannian Pyramids.\n \n \n \n \n\n\n \n Konstantinidis, D.; Dimitropoulos, K.; and Daras, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2045-2049, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Skeleton-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553163,\n  author = {D. Konstantinidis and K. Dimitropoulos and P. Daras},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Skeleton-Based Action Recognition Based on Deep Learning and Grassmannian Pyramids},\n  year = {2018},\n  pages = {2045-2049},\n  abstract = {The accuracy of modern depth sensors, the robustness of skeletal data to illumination variations and the superb performance of deep learning techniques on several classification tasks have sparked a renewed interest in skeleton-based action recognition. In this paper, we propose a four-stream deep neural network based on two types of spatial skeletal features and their corresponding temporal representations extracted by the novel Grassmannian Pyramid Descriptor (GPD). The performance of the proposed action recognition methodology is further enhanced by the use of a meta-learner that takes advantage of the meta knowledge extracted from the processing of the different features. Experiments on several well-known action recognition datasets reveal that our proposed methodology outperforms a number of state-of-the-art skeleton-based action recognition methods.},\n  keywords = {feature extraction;image classification;image motion analysis;image representation;learning (artificial intelligence);neural nets;object recognition;modern depth sensors;skeletal data;deep learning techniques;deep neural network;action recognition methodology;action recognition datasets;skeleton-based action recognition methods;novel Grassmannian Pyramid descriptor;temporal representations;spatial skeletal feature extraction;GPD;Skeleton;Three-dimensional displays;Feature extraction;Manifolds;Neural networks;Histograms},\n  doi = {10.23919/EUSIPCO.2018.8553163},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437076.pdf},\n}\n\n
\n
\n\n\n
\n The accuracy of modern depth sensors, the robustness of skeletal data to illumination variations and the superb performance of deep learning techniques on several classification tasks have sparked a renewed interest in skeleton-based action recognition. In this paper, we propose a four-stream deep neural network based on two types of spatial skeletal features and their corresponding temporal representations extracted by the novel Grassmannian Pyramid Descriptor (GPD). The performance of the proposed action recognition methodology is further enhanced by the use of a meta-learner that takes advantage of the meta knowledge extracted from the processing of the different features. Experiments on several well-known action recognition datasets reveal that our proposed methodology outperforms a number of state-of-the-art skeleton-based action recognition methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Detecting Adversarial Examples - a Lesson from Multimedia Security.\n \n \n \n \n\n\n \n Schöttle, P.; Schlögl, A.; Pasquini, C.; and Böhme, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 947-951, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DetectingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553164,\n  author = {P. Schöttle and A. Schlögl and C. Pasquini and R. Böhme},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Detecting Adversarial Examples - a Lesson from Multimedia Security},\n  year = {2018},\n  pages = {947-951},\n  abstract = {Adversarial classification is the task of performing robust classification in the presence of a strategic attacker. Originating from information hiding and multimedia forensics, adversarial classification recently received a lot of attention in a broader security context. In the domain of machine learning-based image classification, adversarial classification can be interpreted as detecting so-called adversarial examples, which are slightly altered versions of benign images. They are specifically crafted to be misclassified with a very high probability by the classifier under attack. Neural networks, which dominate among modern image classifiers, have been shown to be especially vulnerable to these adversarial examples. However, detecting subtle changes in digital images has always been the goal of multimedia forensics and steganalysis, two major subfields of multimedia security. We highlight the conceptual similarities between these fields and secure machine learning. Furthermore, we adapt a linear filter, similar to early steganalysis methods, to detect adversarial examples that are generated with the projected gradient descent (PGD) method, the state-of-the-art algorithm for this task. We test our method on the MNIST database and show for several parameter combinations of PGD that our method can reliably detect adversarial examples. Additionally, the combination of adversarial re-training and our detection method effectively reduces the attack surface of attacks against neural networks. 
Thus, we conclude that adversarial examples for image classification possibly do not withstand detection methods from steganalysis, and future work should explore the effectiveness of known techniques from multimedia security in other adversarial settings.},\n  keywords = {gradient methods;image classification;image forensics;learning (artificial intelligence);multimedia computing;probability;steganography;multimedia security;adversarial classification;information hiding;multimedia forensics;machine learning-based image classification;strategic attacker;adversarial retraining;projected gradient descent method;PGD method;adversarial example detection;linear filter;CNN;convolutional neural networks;Machine learning;Forensics;Security;Neural networks;Distortion;Task analysis;Digital images;Adversarial Classification;Adversarial Examples;Multimedia Forensics;Steganalysis},\n  doi = {10.23919/EUSIPCO.2018.8553164},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436993.pdf},\n}\n\n
\n
\n\n\n
\n Adversarial classification is the task of performing robust classification in the presence of a strategic attacker. Originating from information hiding and multimedia forensics, adversarial classification recently received a lot of attention in a broader security context. In the domain of machine learning-based image classification, adversarial classification can be interpreted as detecting so-called adversarial examples, which are slightly altered versions of benign images. They are specifically crafted to be misclassified with a very high probability by the classifier under attack. Neural networks, which dominate among modern image classifiers, have been shown to be especially vulnerable to these adversarial examples. However, detecting subtle changes in digital images has always been the goal of multimedia forensics and steganalysis, two major subfields of multimedia security. We highlight the conceptual similarities between these fields and secure machine learning. Furthermore, we adapt a linear filter, similar to early steganalysis methods, to detect adversarial examples that are generated with the projected gradient descent (PGD) method, the state-of-the-art algorithm for this task. We test our method on the MNIST database and show for several parameter combinations of PGD that our method can reliably detect adversarial examples. Additionally, the combination of adversarial re-training and our detection method effectively reduces the attack surface of attacks against neural networks. Thus, we conclude that adversarial examples for image classification possibly do not withstand detection methods from steganalysis, and future work should explore the effectiveness of known techniques from multimedia security in other adversarial settings.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Analysis of Activity States of Local Neuronal Microcircuits in Mouse Brain.\n \n \n \n \n\n\n \n Jin, D.; Boiadjieva, B.; Backhaus, H.; Fauss, M.; Fu, T.; Stroh, A.; Klein, A.; and Zoubir, A. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1940-1944, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnalysisPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553165,\n  author = {D. Jin and B. Boiadjieva and H. Backhaus and M. Fauss and T. Fu and A. Stroh and A. Klein and A. M. Zoubir},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Analysis of Activity States of Local Neuronal Microcircuits in Mouse Brain},\n  year = {2018},\n  pages = {1940-1944},\n  abstract = {Time series of neuronal activity corresponding to different activity states in mouse brain are analyzed in the time domain and the time-frequency domain. The signals are associated with either a slow wave brain state or a persistent brain state. For both states, characteristic spectral features are identified and a simple detector is proposed that is able to identify the brain state with low latency and high accuracy. In practice, being able to monitor the brain state online and in real time is crucial for improved in vivo experiments and, ultimately, for a causal understanding of brain dynamics.},\n  keywords = {brain;neurophysiology;time series;local neuronal microcircuits;mouse brain;time series;neuronal activity;time domain;time-frequency domain;slow wave brain state;persistent brain state;brain dynamics;activity states;Time-frequency analysis;Feature extraction;Time series analysis;Spectrogram;Transient analysis;Mice;Brain state;neuronal circuits;detection;hypothesis testing;time-frequency analysis},\n  doi = {10.23919/EUSIPCO.2018.8553165},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439024.pdf},\n}\n\n
\n
\n\n\n
\n Time series of neuronal activity corresponding to different activity states in mouse brain are analyzed in the time domain and the time-frequency domain. The signals are associated with either a slow wave brain state or a persistent brain state. For both states, characteristic spectral features are identified and a simple detector is proposed that is able to identify the brain state with low latency and high accuracy. In practice, being able to monitor the brain state online and in real time is crucial for improved in vivo experiments and, ultimately, for a causal understanding of brain dynamics.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spatio-Spectral Multichannel Reconstruction from few Low-Resolution Multispectral Data.\n \n \n \n \n\n\n \n Hadj-Youcef, M. A.; Orieux, F.; Fraysse, A.; and Abergel, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1980-1984, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Spatio-SpectralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553166,\n  author = {M. A. Hadj-Youcef and F. Orieux and A. Fraysse and A. Abergel},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Spatio-Spectral Multichannel Reconstruction from few Low-Resolution Multispectral Data},\n  year = {2018},\n  pages = {1980-1984},\n  abstract = {This paper deals with the reconstruction of a 3-D spatio-spectral object observed by a multispectral imaging system, where the original object is blurred with a spectral-variant PSF (Point Spread Function) and integrated over few broad spectral bands. In order to tackle this ill-posed problem, we propose a linear forward model that accounts for direct (or auto) channels and between (or cross) channels degradation, by modeling the imaging system response and the spectral distribution of the object with a piecewise linear function. Reconstruction based on regularization method is proposed, by enforcing spatial and spectral smoothness of the object. We test our approach on simulated data of the Mid-InfraRed Instrument (MIRI) Imager of the James Webb Space Telescope (JWST). 
Results on simulated multispectral data show a significant improvement over the conventional multichannel method.},\n  keywords = {astronomical image processing;astronomical telescopes;image reconstruction;image resolution;image restoration;optical images;optical transfer function;spatio-spectral multichannel reconstruction;low-resolution multispectral data;spatio-spectral object;multispectral imaging system;spectral-variant PSF;Point Spread Function;broad spectral bands;linear forward model;channels degradation;imaging system response;spectral distribution;piecewise linear function;spatial smoothness;spectral smoothness;Mid-InfraRed Instrument Imager;simulated multispectral data;conventional multichannel method;James Webb Space Telescope;Image reconstruction;Degradation;Imaging;Detectors;Optical diffraction;Instruments;Data models;Inverse problems;Image reconstruction;Deconvolution;System modeling;Multispectral restoration},\n  doi = {10.23919/EUSIPCO.2018.8553166},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436502.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the reconstruction of a 3-D spatio-spectral object observed by a multispectral imaging system, where the original object is blurred with a spectral-variant PSF (Point Spread Function) and integrated over few broad spectral bands. In order to tackle this ill-posed problem, we propose a linear forward model that accounts for direct (or auto) channels and between (or cross) channels degradation, by modeling the imaging system response and the spectral distribution of the object with a piecewise linear function. Reconstruction based on regularization method is proposed, by enforcing spatial and spectral smoothness of the object. We test our approach on simulated data of the Mid-InfraRed Instrument (MIRI) Imager of the James Webb Space Telescope (JWST). Results on simulated multispectral data show a significant improvement over the conventional multichannel method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sensitivity of the Contactless Videoplethysmography-Based Heart Rate Detection to Different Measurement Conditions.\n \n \n \n \n\n\n \n Gambi, E.; Ricciuti, M.; and Spinsante, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 767-771, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SensitivityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553167,\n  author = {E. Gambi and M. Ricciuti and S. Spinsante},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sensitivity of the Contactless Videoplethysmography-Based Heart Rate Detection to Different Measurement Conditions},\n  year = {2018},\n  pages = {767-771},\n  abstract = {Technologies for contactless Heart Rate measurement support the progress in the diagnostic and healthcare fields, opening new possibilities even for everyday use at home. Among them, Videoplethysmography based on the Eulerian Video Magnification method has been already validated as an effective alternative to traditional, but often bulky, Electrocardiographic acquisitions. In this paper we study the influence of different measurement parameters on the Heart Rate estimation, in order to assess the reliability of the Videoplethysmography detection method under varying conditions, like different dimensions and positions of the processed regions of interest, pyramidal decomposition levels, and light conditions.},\n  keywords = {electrocardiography;health care;medical signal detection;medical signal processing;patient monitoring;video signal processing;diagnostic fields;healthcare fields;light conditions;pyramidal decomposition levels;electrocardiographic acquisitions;Eulerian video magnification method;videoplethysmography detection method;contactless videoplethysmography-based heart rate detection;heart rate estimation;Heart rate;Neck;Estimation;Face;Signal processing;Europe;Biomedical measurement},\n  doi = {10.23919/EUSIPCO.2018.8553167},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438234.pdf},\n}\n\n
\n
\n\n\n
\n Technologies for contactless Heart Rate measurement support the progress in the diagnostic and healthcare fields, opening new possibilities even for everyday use at home. Among them, Videoplethysmography based on the Eulerian Video Magnification method has been already validated as an effective alternative to traditional, but often bulky, Electrocardiographic acquisitions. In this paper we study the influence of different measurement parameters on the Heart Rate estimation, in order to assess the reliability of the Videoplethysmography detection method under varying conditions, like different dimensions and positions of the processed regions of interest, pyramidal decomposition levels, and light conditions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectral Clustering with Automatic Cluster-Number Identification via Finding Sparse Eigenvectors.\n \n \n \n \n\n\n \n Ogino, Y.; and Yukawa, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1187-1191, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SpectralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553168,\n  author = {Y. Ogino and M. Yukawa},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Spectral Clustering with Automatic Cluster-Number Identification via Finding Sparse Eigenvectors},\n  year = {2018},\n  pages = {1187-1191},\n  abstract = {Spectral clustering is an empirically successful approach to separating a dataset into some groups with possibly complex shapes based on pairwise affinity. Identifying the number of clusters automatically is still an open issue, although many heuristics have been proposed. In this paper, imposing sparsity on the eigenvectors of graph Laplacian is proposed to attain reasonable approximations of the so-called cluster-indicator-vectors, from which the clusters as well as the cluster number are identified. The proposed algorithm enjoys low computational complexity as it only computes a relevant subset of eigenvectors. It also enjoys better clustering quality than the existing methods, as shown by simulations using nine real datasets.},\n  keywords = {approximation theory;computational complexity;eigenvalues and eigenfunctions;graph theory;pattern clustering;spectral clustering;automatic cluster-number identification;sparse eigenvectors;dataset;pairwise affinity;cluster-indicator-vectors;cluster number;low computational complexity;clustering quality;complex shapes;graph Laplacian;Clustering algorithms;Signal processing algorithms;Laplace equations;Approximation algorithms;Eigenvalues and eigenfunctions;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553168},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436881.pdf},\n}\n\n
\n
\n\n\n
\n Spectral clustering is an empirically successful approach to separating a dataset into some groups with possibly complex shapes based on pairwise affinity. Identifying the number of clusters automatically is still an open issue, although many heuristics have been proposed. In this paper, imposing sparsity on the eigenvectors of graph Laplacian is proposed to attain reasonable approximations of the so-called cluster-indicator-vectors, from which the clusters as well as the cluster number are identified. The proposed algorithm enjoys low computational complexity as it only computes a relevant subset of eigenvectors. It also enjoys better clustering quality than the existing methods, as shown by simulations using nine real datasets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Graph representation using mutual information for graph model discrimination.\n \n \n \n \n\n\n \n Hawas, F.; and Djurić, P. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 882-886, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"GraphPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553169,\n  author = {F. Hawas and P. M. Djurić},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Graph representation using mutual information for graph model discrimination},\n  year = {2018},\n  pages = {882-886},\n  abstract = {We present a novel approach of graph representation based on mutual information of a random walk in a graph. This representation, as any global metric of a graph, can be used to identify the model generator of the observed network. In this study, we use our graph representation combined with Random Forest (RF) to discriminate between Erdos-Renyi (ER), Stochastic Block Model (SBM) and Planted Clique (PC) models. We also combine our graph representation with a Squared Mahalanobis Distance (SMD)-based test to reject a model given an observed network. We test the proposed method with computer simulations.},\n  keywords = {computational complexity;graph theory;learning (artificial intelligence);mathematics computing;random processes;stochastic processes;graph representation;mutual information;graph model discrimination;random walk;random forest;Erdos-Renyi model;stochastic block model;planted clique models;computer simulation;squared Mahalanobis distance-based test;Erbium;Computational modeling;Mutual information;Radio frequency;Testing;Training;Europe;Network Topology;Graph Theory;Complex Networks;Mutual Information},\n  doi = {10.23919/EUSIPCO.2018.8553169},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437509.pdf},\n}\n\n
\n
\n\n\n
\n We present a novel approach of graph representation based on mutual information of a random walk in a graph. This representation, as any global metric of a graph, can be used to identify the model generator of the observed network. In this study, we use our graph representation combined with Random Forest (RF) to discriminate between Erdos-Renyi (ER), Stochastic Block Model (SBM) and Planted Clique (PC) models. We also combine our graph representation with a Squared Mahalanobis Distance (SMD)-based test to reject a model given an observed network. We test the proposed method with computer simulations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Simultaneous Signal Subspace Rank and Model Selection with an Application to Single-snapshot Source Localization.\n \n \n \n \n\n\n \n Tabassum, M. N.; and Ollila, E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1592-1596, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SimultaneousPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553171,\n  author = {M. N. Tabassum and E. Ollila},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Simultaneous Signal Subspace Rank and Model Selection with an Application to Single-snapshot Source Localization},\n  year = {2018},\n  pages = {1592-1596},\n  abstract = {This paper proposes a novel method for model selection in linear regression by utilizing the solution path of l1 regularized least-squares (LS) approach (i.e., Lasso). This method applies the complex-valued least angle regression and shrinkage (c-LARS) algorithm coupled with a generalized information criterion (GIC) and referred to as the c-LARS-GIC method. c-LARS-GIC is a two-stage procedure, where firstly precise values of the regularization parameter, called knots, at which a new predictor variable enters (or leaves) the active set are computed in the Lasso solution path. Active sets provide a nested sequence of regression models and GIC then selects the best model. The sparsity order of the chosen model serves as an estimate of the model order and the LS fit based only on the active set of the model provides an estimate of the regression parameter vector. We then consider a source localization problem, where the aim is to detect the number of impinging source waveforms at a sensor array as well as to estimate their direction-of-arrivals (DoAs) using only a single-snapshot measurement. 
We illustrate via simulations that, after formulating the problem as a grid-based sparse signal reconstruction problem, the proposed c-LARS-GIC method detects the number of sources with high probability while at the same time it provides accurate estimates of source locations.},\n  keywords = {direction-of-arrival estimation;least squares approximations;parameter estimation;probability;regression analysis;signal reconstruction;signal sources;vectors;single-snapshot source localization;least-squares approach;generalized information criterion;c-LARS-GIC method;Lasso solution path;source localization problem;single-snapshot measurement;grid-based sparse signal reconstruction problem;simultaneous signal subspace rank and model selection;linear regression model selection;l1 regularized least-square approach;l1 regularized LS approach;complex-valued least angle regression and shrinkage algorithm;direction-of-arrival estimation;DoA estimation;probability;regression parameter vector estimation;Computational modeling;Task analysis;Indexes;Signal processing;Signal processing algorithms;Europe;Linear regression},\n  doi = {10.23919/EUSIPCO.2018.8553171},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437366.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a novel method for model selection in linear regression by utilizing the solution path of l1 regularized least-squares (LS) approach (i.e., Lasso). This method applies the complex-valued least angle regression and shrinkage (c-LARS) algorithm coupled with a generalized information criterion (GIC) and referred to as the c-LARS-GIC method. c-LARS-GIC is a two-stage procedure, where firstly precise values of the regularization parameter, called knots, at which a new predictor variable enters (or leaves) the active set are computed in the Lasso solution path. Active sets provide a nested sequence of regression models and GIC then selects the best model. The sparsity order of the chosen model serves as an estimate of the model order and the LS fit based only on the active set of the model provides an estimate of the regression parameter vector. We then consider a source localization problem, where the aim is to detect the number of impinging source waveforms at a sensor array as well as to estimate their direction-of-arrivals (DoAs) using only a single-snapshot measurement. We illustrate via simulations that, after formulating the problem as a grid-based sparse signal reconstruction problem, the proposed c-LARS-GIC method detects the number of sources with high probability while at the same time it provides accurate estimates of source locations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Coding Gain Optimized 8-Point DST with Fast Algorithm for Intra-frames in Video Coding.\n \n \n \n \n\n\n \n Cilasun, M. H.; and Kamışlı, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 156-160, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CodingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553172,\n  author = {M. H. Cilasun and F. Kamışlı},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Coding Gain Optimized 8-Point DST with Fast Algorithm for Intra-frames in Video Coding},\n  year = {2018},\n  pages = {156-160},\n  abstract = {In modern video codecs, such as HEVC and VP9, intra-frame blocks are decorrelated using DCT or DST. The optimal separable transform for intra prediction residual blocks has been determined to be a hybrid transform composed of the DCT and the odd type-3 DST (ODST-3), independent of block size. However, the ODST-3 has no fast algorithm like the DCT. Hence its use in HEVC and VP9 has been limited to only 4×4 blocks. For larger blocks such as 8×8 or 16×16, HEVC replaces the optimal ODST-3 with the conventional DCT while VP9 replaces it with the even type-3 DST (EDST-3), both of which have fast algorithms. The EDST-3 has better coding gain than the DCT but it has still a coding gain loss with respect to the optimal ODST-3. This paper attempts to optimize some parameters of the EDST-3 to reduce this coding gain loss while still retaining the fast algorithm. In particular, the 8-point EDST-3 is represented as a cascade of Givens rotations and some rotation angles are optimized to reduce the coding gain loss with respect to the optimal 8-point ODST-3. 
By replacing only the 8-point EDST-3 in VP9 with this optimized transform, while leaving other transforms with different sizes unchanged, average Bjontegaard-Delta bitrate savings of -0.13% are achieved with respect to the standard VP9 codec.},\n  keywords = {codecs;data compression;discrete cosine transforms;video codecs;video coding;gain optimized 8-point DST;video coding;HEVC;intra-frame blocks;intra prediction residual blocks;odd type-3 DST;optimal ODST-3;conventional DCT;coding gain loss;8-point EDST-3;8-point ODST-3;standard VP9 codec;video codecs;Discrete cosine transforms;Encoding;Image coding;Correlation;Signal processing algorithms;Gain},\n  doi = {10.23919/EUSIPCO.2018.8553172},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438366.pdf},\n}\n\n
\n
\n\n\n
\n In modern video codecs, such as HEVC and VP9, intra-frame blocks are decorrelated using DCT or DST. The optimal separable transform for intra prediction residual blocks has been determined to be a hybrid transform composed of the DCT and the odd type-3 DST (ODST-3), independent of block size. However, the ODST-3 has no fast algorithm like the DCT. Hence its use in HEVC and VP9 has been limited to only 4×4 blocks. For larger blocks such as 8×8 or 16×16, HEVC replaces the optimal ODST-3 with the conventional DCT while VP9 replaces it with the even type-3 DST (EDST-3), both of which have fast algorithms. The EDST-3 has better coding gain than the DCT but it has still a coding gain loss with respect to the optimal ODST-3. This paper attempts to optimize some parameters of the EDST-3 to reduce this coding gain loss while still retaining the fast algorithm. In particular, the 8-point EDST-3 is represented as a cascade of Givens rotations and some rotation angles are optimized to reduce the coding gain loss with respect to the optimal 8-point ODST-3. By replacing only the 8-point EDST-3 in VP9 with this optimized transform, while leaving other transforms with different sizes unchanged, average Bjontegaard-Delta bitrate savings of -0.13% are achieved with respect to the standard VP9 codec.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Extracting PRNU Noise from H.264 Coded Videos.\n \n \n \n \n\n\n \n Altinisik, E.; Tasdemir, K.; and Sencar, H. T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1367-1371, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ExtractingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553173,\n  author = {E. Altinisik and K. Tasdemir and H. T. Sencar},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Extracting PRNU Noise from H.264 Coded Videos},\n  year = {2018},\n  pages = {1367-1371},\n  abstract = {Every device equipped with a digital camera has a unique identity. This phenomenon is essentially due to a systematic noise component of an imaging sensor, known as photo-response non-uniformity (PRNU) noise. An imaging sensor inadvertently introduces this noise pattern to all media captured by that imaging sensor. The procedure for extracting PRNU noise has been well studied in the context of photographic images, however, its extension to video has so far been neglected. In this work, considering H.264 coding standard, we describe a procedure to extract sensor fingerprint from non-stabilized videos. The crux of our method is to remove a filtering procedure applied at the decoder to reduce blockiness and to use macroblocks selectively when estimating PRNU noise pattern. Results show that our method has a potential to improve matching performance significantly.},\n  keywords = {cameras;data compression;fingerprint identification;image denoising;image filtering;image matching;video coding;digital camera;systematic noise component;imaging sensor;photo-response nonuniformity noise;photographic images;sensor fingerprint;nonstabilized videos;PRNU noise pattern;H.264 coded videos;PRNU noise extraction;filtering procedure;Videos;Cameras;Quantization (signal);Encoding;Transform coding;Decoding;Video coding},\n  doi = {10.23919/EUSIPCO.2018.8553173},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439389.pdf},\n}\n\n
\n
\n\n\n
\n Every device equipped with a digital camera has a unique identity. This phenomenon is essentially due to a systematic noise component of an imaging sensor, known as photo-response non-uniformity (PRNU) noise. An imaging sensor inadvertently introduces this noise pattern to all media captured by that imaging sensor. The procedure for extracting PRNU noise has been well studied in the context of photographic images, however, its extension to video has so far been neglected. In this work, considering H.264 coding standard, we describe a procedure to extract sensor fingerprint from non-stabilized videos. The crux of our method is to remove a filtering procedure applied at the decoder to reduce blockiness and to use macroblocks selectively when estimating PRNU noise pattern. Results show that our method has a potential to improve matching performance significantly.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Signal Reconstruction from Sub-sampled and Nonlinearly Distorted Observations.\n \n \n \n \n\n\n \n Marmin, A.; Castella, M.; Pesquet, J.; and Duval, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1970-1974, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SignalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553174,\n  author = {A. Marmin and M. Castella and J. Pesquet and L. Duval},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Signal Reconstruction from Sub-sampled and Nonlinearly Distorted Observations},\n  year = {2018},\n  pages = {1970-1974},\n  abstract = {Faithful short-time acquisition of a sparse signal is still a challenging issue. Instead of an idealized sampling, one has only access to an altered version of it through a measurement system. This paper proposes a reconstruction method for the original sparse signal when the measurement degradation is composed of a nonlinearity, an additive noise, and a sub-sampling scheme. A rational criterion based on a least-squares fitting penalized with a suitable approximation of ℓ0 is minimized using a recent approach guaranteeing global optimality for rational optimization. We provide a complexity analysis and show that the sub-sampling offers a significant gain in terms of computational time. This allows us to tackle practical problems such as chromatography. Finally, experimental results illustrate that our method compares very favorably to existing methods in terms of accuracy in the signal reconstruction.},\n  keywords = {approximation theory;iterative methods;least squares approximations;nonlinear distortion;optimisation;signal reconstruction;signal sampling;measurement system;reconstruction method;additive noise;sub-sampling scheme;rational optimization;computational time;signal reconstruction;nonlinearly distorted observations;sparse signal;short-time acquisition;complexity analysis;chromatography;Complexity theory;Noise measurement;Optimization;Computational modeling;Indexes;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553174},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438045.pdf},\n}\n\n
\n
\n\n\n
\n Faithful short-time acquisition of a sparse signal is still a challenging issue. Instead of an idealized sampling, one has only access to an altered version of it through a measurement system. This paper proposes a reconstruction method for the original sparse signal when the measurement degradation is composed of a nonlinearity, an additive noise, and a sub-sampling scheme. A rational criterion based on a least-squares fitting penalized with a suitable approximation of ℓ0 is minimized using a recent approach guaranteeing global optimality for rational optimization. We provide a complexity analysis and show that the sub-sampling offers a significant gain in terms of computational time. This allows us to tackle practical problems such as chromatography. Finally, experimental results illustrate that our method compares very favorably to existing methods in terms of accuracy in the signal reconstruction.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian Restoration of High-Dimensional Photon-Starved Images.\n \n \n \n \n\n\n \n Tachella, J.; Altmann, Y.; Pereyra, M.; McLaughlin, S.; and Tourneret, J. -.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 747-751, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553175,\n  author = {J. Tachella and Y. Altmann and M. Pereyra and S. McLaughlin and J. -. Tourneret},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian Restoration of High-Dimensional Photon-Starved Images},\n  year = {2018},\n  pages = {747-751},\n  abstract = {This paper investigates different algorithms to perform image restoration from single-photon measurements corrupted with Poisson noise. The restoration problem is formulated in a Bayesian framework and several state-of-the-art Monte Carlo samplers are considered to estimate the unknown image and quantify its uncertainty. The different samplers are compared through a series of experiments conducted with synthetic images. The results demonstrate the scaling properties of the proposed samplers as the dimensionality of the problem increases and the number of photons decreases. Moreover, our experiments show that for a certain photon budget (i.e., acquisition time of the imaging device), downsampling the observations can yield better reconstruction results.},\n  keywords = {Bayes methods;image reconstruction;image restoration;Monte Carlo methods;Monte Carlo samplers;imaging device;photon budget;synthetic images;unknown image;Bayesian framework;restoration problem;Poisson noise;single-photon measurements;image restoration;different algorithms;high-dimensional photon-starved images;Bayesian restoration;Photonics;Bayes methods;Signal processing algorithms;Image restoration;Monte Carlo methods;Noise measurement;Europe;Bayesian statistics;Inverse problems;Image processing;Poisson noise;Markov chain Monte Carlo;Bouncy particle sampler},\n  doi = {10.23919/EUSIPCO.2018.8553175},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437168.pdf},\n}\n\n
Classification of Intra-Pulse Modulation of Radar Signals by Feature Fusion Based Convolutional Neural Networks.
Akyon, F. C.; Alp, Y. K.; Gok, G.; and Arikan, O.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2290-2294, Sep. 2018.
@InProceedings{8553176,
  author = {F. C. Akyon and Y. K. Alp and G. Gok and O. Arikan},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Classification of Intra-Pulse Modulation of Radar Signals by Feature Fusion Based Convolutional Neural Networks},
  year = {2018},
  pages = {2290-2294},
  abstract = {Detection and classification of radars based on pulses they transmit is an important application in electronic warfare systems. In this work, we propose a novel deep-learning based technique that automatically recognizes intra-pulse modulation types of radar signals. Re-assigned spectrogram of measured radar signal and detected outliers of its instantaneous phases filtered by a special function are used for training multiple convolutional neural networks. Automatically extracted features from the networks are fused to distinguish frequency and phase modulated signals. Simulation results show that the proposed FF-CNN (Feature Fusion based Convolutional Neural Network) technique outperforms the current state-of-the-art alternatives and is easily scalable among broad range of modulation types.},
  keywords = {electronic warfare;feature extraction;feedforward neural nets;learning (artificial intelligence);phase modulation;pulse modulation;radar signal processing;radar signals;electronic warfare systems;deep-learning based technique;intra-pulse modulation types;measured radar signal;instantaneous phases;training multiple convolutional neural networks;phase modulated signals;re-assigned spectrogram;feature fusion-based convolutional neural network;Frequency modulation;Feature extraction;Signal to noise ratio;Convolutional neural networks;Training;Radar;Phase modulation},
  doi = {10.23919/EUSIPCO.2018.8553176},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437047.pdf},
}
Deep Residual Neural Network for EMI Event Classification Using Bispectrum Representations.
Mitiche, I.; David Jenkins, M.; Boreham, P.; Nesbitt, A.; Stewart, B. G.; and Morison, G.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 186-190, Sep. 2018.
@InProceedings{8553177,
  author = {I. Mitiche and M. {David Jenkins} and P. Boreham and A. Nesbitt and B. G. Stewart and G. Morison},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Deep Residual Neural Network for EMI Event Classification Using Bispectrum Representations},
  year = {2018},
  pages = {186-190},
  abstract = {This paper presents a novel method for condition monitoring of High Voltage (HV) power plant equipment through analysis of discharge signals. These discharge signals are measured using the Electromagnetic Interference (EMI) method and processed using third order Higher-Order Statistics (HOS) to obtain a Bispectrum representation. By mapping the time-domain signal to a Bispectrum image representation the problem can be approached as an image classification task. This allows for the novel application of a Deep Residual Neural Network (ResNet) to the classification of HV discharge signals. The network is trained on signals into 9 classes and achieves high classification accuracy in each category, improving upon our previous work on this task.},
  keywords = {condition monitoring;electrical engineering computing;electromagnetic interference;fault diagnosis;higher order statistics;image classification;image representation;neural nets;spectral analysis;Deep Residual Neural Network;EMI event classification;Bispectrum representation;condition monitoring;High Voltage power plant equipment;Electromagnetic Interference method;time-domain signal;HV discharge signals;image classification;third order Higher-Order Statistics;Bispectrum image representation;ResNet;Electromagnetic interference;Partial discharges;Discharges (electric);Feature extraction;Fault location;Signal processing},
  doi = {10.23919/EUSIPCO.2018.8553177},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437266.pdf},
}
“What are You Listening to?” Explaining Predictions of Deep Machine Listening Systems.
Mishra, S.; Sturm, B. L.; and Dixon, S.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2260-2264, Sep. 2018.
@InProceedings{8553178,
  author = {S. Mishra and B. L. Sturm and S. Dixon},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {“What are You Listening to?” Explaining Predictions of Deep Machine Listening Systems},
  year = {2018},
  pages = {2260-2264},
  abstract = {Researchers have proposed methods to explain neural network predictions by building explanations either in terms of input components (e.g., pixels in an image) or in terms of input regions (e.g., the area containing the face of a Labrador). Such methods aim to determine the trustworthiness of a model, as well as to guide its improvement. In this paper, we argue that explanations in terms of input regions are useful for analysing machine listening systems. We introduce a novel method based on feature inversion to identify a region in an input time-frequency representation that is most influential to a prediction. We demonstrate it for a state-of-the-art singing voice detection model. We evaluate the quality of the generated explanations on two public benchmark datasets. The results demonstrate that the presented method often identifies a region of an input instance that has a decisive effect on the classification.},
  keywords = {audio signal processing;feature extraction;learning (artificial intelligence);neural nets;voice activity detection;input time-frequency representation;deep machine listening systems;neural network predictions;feature inversion;singing voice detection model;Feature extraction;Task analysis;Training;Visualization;Spectrogram;Neurons;Predictive models;Deep neural networks;visualisation;interpretable machine learning;machine listening},
  doi = {10.23919/EUSIPCO.2018.8553178},
  issn = {2076-1465},
  month = {Sep.},
}
RepoBIT: Cloud-Driven Real-Time Biosignal Streaming, Storage, Visualisation and Sharing.
Reis, M.; and da Silva, H. P.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 772-776, Sep. 2018.
@InProceedings{8553181,
  author = {M. Reis and H. P. {da Silva}},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {RepoBIT: Cloud-Driven Real-Time Biosignal Streaming, Storage, Visualisation and Sharing},
  year = {2018},
  pages = {772-776},
  abstract = {Physiological (or biosignal) data acquisition and analysis has evolved from being intrinsically bound to medical practice or lab settings to become a pervasive data source with numerous applications. This has been mostly leveraged by the proliferation of wearables and open hardware platforms. While the former offer good cloud connectivity (albeit to proprietary servers) yet provide only high-level features, the latter have opposite characteristics, creating a gap regarding collaborative access to datasets for reuse in academic research. In this paper we describe a module devised to help bridge this gap, by enabling real-time biosignal streaming directly from an open hardware physiological data acquisition platform to a cloud-based repository over Wi-Fi. Performance tests were carried out to assess the data transmission reliability, and the adopted approach to the repository enables user-friendly collaborative access, visualisation and sharing of the recorded data.},
  keywords = {biomedical communication;cloud computing;data acquisition;data visualisation;Internet of Things;mobile computing;storage management;telemedicine;wireless LAN;biosignal sharing;biosignal visualisation;open hardware physiological data acquisition;biosignal storage;biosignal streaming;biosignal;cloud-based repository;pervasive data source;medical practice;visualisation;RepoBIT;Data acquisition;Cloud computing;Hardware;Physiology;Real-time systems;Biomedical monitoring;Europe;Physiological Computing;Healthcare;Open Hardware;Internet of Things;Cloud-Based Repository},
  doi = {10.23919/EUSIPCO.2018.8553181},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438271.pdf},
}
Direction of Arrival Estimation for Multiple Sound Sources Using Convolutional Recurrent Neural Network.
Adavanne, S.; Politis, A.; and Virtanen, T.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1462-1466, Sep. 2018.
@InProceedings{8553182,
  author = {S. Adavanne and A. Politis and T. Virtanen},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Direction of Arrival Estimation for Multiple Sound Sources Using Convolutional Recurrent Neural Network},
  year = {2018},
  pages = {1462-1466},
  abstract = {This paper proposes a deep neural network for estimating the directions of arrival (DOA) of multiple sound sources. The proposed stacked convolutional and recurrent neural network (DOAnet) generates a spatial pseudo-spectrum (SPS) along with the DOA estimates in both azimuth and elevation. We avoid any explicit feature extraction step by using the magnitudes and phases of the spectrograms of all the channels as input to the network. The proposed DOAnet is evaluated by estimating the DOAs of multiple concurrently present sources in anechoic, matched and unmatched reverberant conditions. The results show that the proposed DOAnet is capable of estimating the number of sources and their respective DOAs with good precision and generating SPS with high signal-to-noise ratio.},
  keywords = {array signal processing;direction-of-arrival estimation;feature extraction;feedforward neural nets;recurrent neural nets;signal classification;spatial pseudospectrum;SPS;DOA estimates;explicit feature extraction step;DOAnet;multiple concurrently present sources;anechoic unmatched reverberant conditions;matched unmatched reverberant conditions;arrival estimation;multiple sound sources;convolutional recurrent neural network;deep neural network;Direction-of-arrival estimation;Estimation;Azimuth;Feature extraction;Spectrogram;Multiple signal classification;Two dimensional displays},
  doi = {10.23919/EUSIPCO.2018.8553182},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434602.pdf},
}
Application-Layer Redundancy for the EVS Codec.
Majed, N.; Ragot, S.; Gros, L.; Lagrange, X.; and Blanc, A.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2090-2094, Sep. 2018.
@InProceedings{8553183,
  author = {N. Majed and S. Ragot and L. Gros and X. Lagrange and A. Blanc},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Application-Layer Redundancy for the EVS Codec},
  year = {2018},
  pages = {2090-2094},
  abstract = {In this paper, we study the performance of the 3GPP EVS codec when this codec is used in conjunction with 100% application-layer redundancy. The objective of this work is to investigate potential performance gains for Voice over LTE (VoLTE) in bad coverage scenarios. Voice quality for the EVS codec operated in the 9.6-24.4 kbit/s bit range in super-wideband (SWB) is evaluated at different packet loss rates (PLR), using objective and subjective methods (ITU-T P.863 and P.800 ACR). Results show that EVS at 9.6 kbit/s with 100% application-layer redundancy has significantly higher packet loss resilience in degraded channel conditions (≥ 3% PLR), for an overall bit rate (around 2×9.6 kbit/s) compatible with VoLTE (assuming a VoLTE bearer configured to a maximum rate of 24.4 kbit/s). We also discuss the relative merit of the partial redundancy mode in the EVS codec at 13.2 kbit/s, known as the channel-aware mode (CAM), and possible RTP/RTCP signaling methods to trigger the use of application-layer redundancy.},
  keywords = {3G mobile communication;codecs;Long Term Evolution;video coding;3GPP EVS codec;P.800 ACR methods;ITU-T P.863 methods;RTP-RTCP signaling;channel-aware mode;partial redundancy mode;packet loss resilience;VoLTE;voice over LTE;application-layer redundancy;PLR;packet loss rates;bit rate 9.6 kbit/s to 24.4 kbit/s;Redundancy;Codecs;Packet loss;Encoding;Bit rate;Forward error correction},
  doi = {10.23919/EUSIPCO.2018.8553183},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437225.pdf},
}
Acoustic Scene Classification from Few Examples.
Bocharov, I.; Tjalkens, T.; and De Vries, B.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 862-866, Sep. 2018.
@InProceedings{8553184,
  author = {I. Bocharov and T. Tjalkens and B. {De Vries}},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Acoustic Scene Classification from Few Examples},
  year = {2018},
  pages = {862-866},
  abstract = {In order to personalize the behavior of hearing aid devices in different acoustic environments, we need to develop personalized acoustic scene classifiers. Since we cannot afford to burden an individual hearing aid user with the task to collect a large acoustic database, we aim instead to train a scene classifier on just one (or maximally a few) in-situ recorded acoustic waveform of a few seconds duration per scene. In this paper we develop such a “one-shot” personalized scene classifier, based on a Hidden Semi-Markov model. The presented classifier consistently outperforms a more classical Dynamic-Time-Warping-Nearest-Neighbor classifier, and correctly classifies acoustic scenes about twice as well as a (random) chance classifier after training on just one recording of 10 seconds duration per scene.},
  keywords = {acoustic signal processing;audio signal processing;hearing aids;hidden Markov models;nearest neighbour methods;signal classification;hearing aid devices;acoustic environments;hidden semiMarkov model;classical dynamic-time-warping-nearest-neighbor classifier;acoustic waveform;acoustic database;individual hearing aid user;acoustic scene classification;Acoustics;Task analysis;Hidden Markov models;Databases;Computational modeling;Probabilistic logic;Hearing aids},
  doi = {10.23919/EUSIPCO.2018.8553184},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436709.pdf},
}
Moving Target Classification in Automotive Radar Systems Using Convolutional Recurrent Neural Networks.
Kim, S.; Lee, S.; Doo, S.; and Shim, B.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1482-1486, Sep. 2018.
@InProceedings{8553185,
  author = {S. Kim and S. Lee and S. Doo and B. Shim},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Moving Target Classification in Automotive Radar Systems Using Convolutional Recurrent Neural Networks},
  year = {2018},
  pages = {1482-1486},
  abstract = {Moving target classification is a key ingredient to avoid accident in autonomous driving systems. Recently, fast chirp frequency modulated continuous wave (FMCW) radar has been popularly used to recognize moving targets due to its ability to discriminate moving objects and stationary clutter. In order to protect vulnerable road users such as pedestrians and cyclists, it is essential to identify road users in a very short period of time. In this paper, we propose a deep neural network that consists of convolutional recurrent units for target classification in automotive radar system. In our experiment, using the real data measured by the fast chirp FMCW-based high range resolution radar, we show that the proposed network is capable of learning the dynamics in time-series image data and outperforms the conventional classification schemes.},
  keywords = {CW radar;feature extraction;FM radar;frequency modulation;image classification;image motion analysis;learning (artificial intelligence);object detection;radar resolution;recurrent neural nets;road safety;road vehicle radar;traffic engineering computing;target classification;automotive radar system;convolutional recurrent neural networks;autonomous driving systems;vulnerable road users;deep neural network;convolutional recurrent units;fast chirp FMCW-based high range resolution radar;conventional classification schemes;fast chirp frequency modulated continuous wave radar;Radar imaging;Chirp;Convolution;Radar cross-sections;Logic gates;Feature extraction;convolutional neural networks;recurrent neural networks;classification;fast chirp FMCW radar},
  doi = {10.23919/EUSIPCO.2018.8553185},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438581.pdf},
}
Elastic Neural Networks: A Scalable Framework for Embedded Computer Vision.
Bai, Y.; Bhattacharyya, S. S.; Happonen, A. P.; and Huttunen, H.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1472-1476, Sep. 2018.
@InProceedings{8553186,
  author = {Y. Bai and S. S. Bhattacharyya and A. P. Happonen and H. Huttunen},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Elastic Neural Networks: A Scalable Framework for Embedded Computer Vision},
  year = {2018},
  pages = {1472-1476},
  abstract = {We propose a new framework for image classification with deep neural networks. The framework introduces intermediate outputs to the computational graph of a network. This enables flexible control of the computational load and balances the tradeoff between accuracy and execution time. Moreover, we present an interesting finding that the intermediate outputs can act as a regularizer at training time, improving the prediction accuracy. In the experimental section we demonstrate the performance of our proposed framework with various commonly used pretrained deep networks in the use case of apparent age estimation.},
  keywords = {computer vision;image classification;learning (artificial intelligence);neural nets;deep networks;prediction accuracy;training time;interesting finding;execution time;computational load;flexible control;computational graph;intermediate outputs;deep neural networks;image classification;embedded computer vision;scalable framework;elastic neural networks;Training;Estimation;Neural networks;Convolution;Detectors;Europe;Deep learning;machine learning;regularization;embedded implementations;age estimation},
  doi = {10.23919/EUSIPCO.2018.8553186},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437155.pdf},
}
An ADMM Algorithm for Constrained Material Decomposition in Spectral CT.
Hohweiller, T.; Ducros, N.; Peyrin, F.; and Sixou, B.
In 2018 26th European Signal Processing Conference (EUSIPCO), pages 71-75, Sep. 2018.
@InProceedings{8553189,
  author = {T. Hohweiller and N. Ducros and F. Peyrin and B. Sixou},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {An ADMM Algorithm for Constrained Material Decomposition in Spectral CT},
  year = {2018},
  pages = {71-75},
  abstract = {Thanks to photon-counting detectors, spectral computerized tomography records energy-resolved data from which the chemical composition of a sample can be recovered. This problem, referred to as material decomposition, can be formulated as a nonlinear inverse problem. In previous work, we proposed to decompose the projection images using a regularized Gauss-Newton algorithm. To further reduce the ill-posedness of the problem, we propose here to consider equality and inequality constraints that are based on physical priors. In particular, we impose the positivity of the solutions as well as the total mass in each projection image. In practice, we first decompose the projection images for each projection angle independently. Then, we reconstruct the sample slices from the decomposed projection images using a standard filtered back-projection algorithm. The constrained material decomposition problem is solved by the alternating direction method of multipliers (ADMM). We compare the proposed ADMM algorithm to the unconstrained Gauss-Newton algorithm in a numerical thorax phantom. Including constraints reduces the cross-talk between materials in both the decomposed projections and the reconstructed slices.},
  keywords = {computerised tomography;image reconstruction;inverse problems;medical image processing;Newton method;phantoms;photon counting;chemical composition;nonlinear inverse problem;projection image;regularized Gauss-Newton algorithm;inequality constraints;projection angle;projection algorithm;constrained material decomposition problem;ADMM algorithm;unconstrained Gauss-Newton algorithm;decomposed projections;photon-counting detectors;Spectral CT;spectral computerized tomography;energy-resolved data;projection image decomposition;Signal processing algorithms;Gadolinium;Photonics;Image reconstruction;Detectors;Europe;Computed tomography;Alternating direction method of multipliers;spectral computed tomography;material decomposition;nonlinear inverse problem},
  doi = {10.23919/EUSIPCO.2018.8553189},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438133.pdf},
}
\n
\n\n\n
\n Thanks to photon-counting detectors, spectral computerized tomography records energy-resolved data from which the chemical composition of a sample can be recovered. This problem, referred to as material decomposition, can be formulated as a nonlinear inverse problem. In previous work, we proposed to decompose the projection images using a regularized Gauss-Newton algorithm. To further reduce the ill-posedness of the problem, we propose here to consider equality and inequality constraints that are based on physical priors. In particular, we impose the positivity of the solutions as well as the total mass in each projection image. In practice, we first decompose the projection images for each projection angle independently. Then, we reconstruct the sample slices from the decomposed projection images using a standard filtered back-projection algorithm. The constrained material decomposition problem is solved by the alternating direction method of multipliers (ADMM). We compare the proposed ADMM algorithm to the unconstrained Gauss-Newton algorithm in a numerical thorax phantom. Including constraints reduces the cross-talk between materials in both the decomposed projections and the reconstructed slices.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Nonlinear Prediction of Speech by Echo State Networks.\n \n \n \n \n\n\n \n Zhao, Z.; Liu, H.; and Fingscheidt, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2085-2089, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"NonlinearPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553190,\n  author = {Z. Zhao and H. Liu and T. Fingscheidt},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Nonlinear Prediction of Speech by Echo State Networks},\n  year = {2018},\n  pages = {2085-2089},\n  abstract = {Speech prediction plays a key role in many speech signal processing and speech communication methods. While linear prediction of speech is well-studied, nonlinear speech prediction increasingly receives interest, especially with the vast number of new neural network topologies proposed recently. In this paper, nonlinear speech prediction is conducted by a special kind of recurrent neural network not requiring any training beforehand, the echo state network, which adaptively updates its output layer weights. Simulations show its superior performance compared to other well-known prediction approaches in terms of the prediction gain, exceeding all baselines in all conditions by up to 8 dB.},\n  keywords = {recurrent neural nets;speech processing;echo state network;speech communication methods;nonlinear speech prediction;neural network topologies;recurrent neural network;prediction gain;speech signal processing;linear prediction;Reservoirs;Neurons;Topology;Gain;Prediction algorithms;Signal processing;Network topology},\n  doi = {10.23919/EUSIPCO.2018.8553190},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437096.pdf},\n}\n\n
\n
\n\n\n
\n Speech prediction plays a key role in many speech signal processing and speech communication methods. While linear prediction of speech is well-studied, nonlinear speech prediction increasingly receives interest, especially with the vast number of new neural network topologies proposed recently. In this paper, nonlinear speech prediction is conducted by a special kind of recurrent neural network not requiring any training beforehand, the echo state network, which adaptively updates its output layer weights. Simulations show its superior performance compared to other well-known prediction approaches in terms of the prediction gain, exceeding all baselines in all conditions by up to 8 dB.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Data Driven Empirical Iterative Algorithm for GSR Signal Pre-Processing.\n \n \n \n \n\n\n \n Gautam, A.; Simoes-Capela, N.; Schiavone, G.; Acharyya, A.; de Raedt , W.; and Van Hoof, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1162-1166, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553191,\n  author = {A. Gautam and N. Simoes-Capela and G. Schiavone and A. Acharyya and W. {de Raedt} and C. {Van Hoof}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Data Driven Empirical Iterative Algorithm for GSR Signal Pre-Processing},\n  year = {2018},\n  pages = {1162-1166},\n  abstract = {In this paper, we introduce a data driven iterative low pass filtering technique, the Empirical Iterative Algorithm (EIA), for Galvanic Skin Response (GSR) signal preprocessing. This algorithm is inspired by Empirical Mode Decomposition (EMD), with performance enhancements provided by applying Midpoint-based Empirical Decomposition (MED), and removing the sifting process in order to make it computationally inexpensive while maintaining effectiveness towards removal of high frequency artefacts. Based on GSR signals recorded at the wrist, we present an algorithm benchmark, with results from EIA being compared with a smoothing technique based on a moving average filter - commonly used to pre-process GSR signals. The comparison is established on data from 20 subjects, collected while performing 33 different randomized activities with the right hand, left hand and both hands, respectively. On average, the proposed algorithm enhances the signal quality by 51%, while the traditional moving average filter reaches 16% enhancement. Also, it performs 136 times faster than the EMD in terms of average computational time. As a showcase, using the GSR signal from one subject, we inspect the impact of applying our algorithm on GSR features with psychophysiological relevance. 
Comparison with no preprocessing and moving average filtering shows the ability of our algorithm to retain relevant low frequency information.},\n  keywords = {feature extraction;iterative methods;low-pass filters;medical signal processing;skin;left hand;signal quality;EMD;average computational time;GSR features;moving average filtering;relevant low frequency information;data driven Empirical Iterative Algorithm;GSR signal pre-processing;low pass filtering technique;Galvanic Skin Response signal preprocessing;Empirical Mode Decomposition;randomized activities;pre-process GSR signals;smoothing technique;EIA;high frequency artefacts;sifting process;Empirical Decomposition;performance enhancements;Signal processing algorithms;Iterative methods;Wrist;Skin;Biomedical monitoring;Filtering;Signal to noise ratio;EMD;Data driven;Iterative algorithm;GSR;EDA;Skin conductance},\n  doi = {10.23919/EUSIPCO.2018.8553191},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439366.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we introduce a data driven iterative low pass filtering technique, the Empirical Iterative Algorithm (EIA), for Galvanic Skin Response (GSR) signal preprocessing. This algorithm is inspired by Empirical Mode Decomposition (EMD), with performance enhancements provided by applying Midpoint-based Empirical Decomposition (MED), and removing the sifting process in order to make it computationally inexpensive while maintaining effectiveness towards removal of high frequency artefacts. Based on GSR signals recorded at the wrist, we present an algorithm benchmark, with results from EIA being compared with a smoothing technique based on a moving average filter - commonly used to pre-process GSR signals. The comparison is established on data from 20 subjects, collected while performing 33 different randomized activities with the right hand, left hand and both hands, respectively. On average, the proposed algorithm enhances the signal quality by 51%, while the traditional moving average filter reaches 16% enhancement. Also, it performs 136 times faster than the EMD in terms of average computational time. As a showcase, using the GSR signal from one subject, we inspect the impact of applying our algorithm on GSR features with psychophysiological relevance. Comparison with no preprocessing and moving average filtering shows the ability of our algorithm to retain relevant low frequency information.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Performance Analysis of Uplink Massive MIMO System Over Rician Fading Channel.\n \n \n \n \n\n\n \n Kassaw, A.; Hailemariam, D.; and Zoubirl, A. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1272-1276, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PerformancePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553192,\n  author = {A. Kassaw and D. Hailemariam and A. M. Zoubirl},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Performance Analysis of Uplink Massive MIMO System Over Rician Fading Channel},\n  year = {2018},\n  pages = {1272-1276},\n  abstract = {Massive multiple input multiple output (MIMO) is considered one of the promising technologies to significantly improve the spectral efficiency of fifth generation (5G) networks. In this paper, we analyze the performance of uplink massive MIMO systems over a Rician fading channel with imperfect channel state information (CSI) at the base station (BS). Major Rician fading channel parameters including path-loss, shadowing and multipath fading are considered. Minimum mean square error (MMSE) based channel estimation is done at the BS. Assuming a zero-forcing (ZF) detector, a closed-form expression for the uplink achievable rate is derived and expressed as a function of system and propagation parameters. The impact of the system and propagation parameters on the achievable rate is investigated. Numerical results show that, when the Rician K-factor grows, the uplink achievable sum rate improves. 
Specifically, when both the number of BS antennas and the Rician K-factor become very large, channel estimation becomes more robust and the interference can be averaged out; thus, the uplink sum rate improves significantly.},\n  keywords = {5G mobile communication;antenna arrays;channel estimation;least mean squares methods;MIMO communication;multipath channels;Rician channels;performance analysis;uplink massive MIMO system;massive multiple input multiple output;fifth generation networks;imperfect channel state information;BS;major Rician fading channel parameters;multipath fading;square error based channel estimation;uplink achievable rate;propagation parameters;Rician K-factor;uplink achievable sum rate;Uplink;Channel estimation;Rician channels;Detectors;Fading channels;Antennas;Massive MIMO;Spectral Efficiency;Zero Forcing Detector},\n  doi = {10.23919/EUSIPCO.2018.8553192},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436633.pdf},\n}\n\n
\n
\n\n\n
\n Massive multiple input multiple output (MIMO) is considered one of the promising technologies to significantly improve the spectral efficiency of fifth generation (5G) networks. In this paper, we analyze the performance of uplink massive MIMO systems over a Rician fading channel with imperfect channel state information (CSI) at the base station (BS). Major Rician fading channel parameters including path-loss, shadowing and multipath fading are considered. Minimum mean square error (MMSE) based channel estimation is done at the BS. Assuming a zero-forcing (ZF) detector, a closed-form expression for the uplink achievable rate is derived and expressed as a function of system and propagation parameters. The impact of the system and propagation parameters on the achievable rate is investigated. Numerical results show that, when the Rician K-factor grows, the uplink achievable sum rate improves. Specifically, when both the number of BS antennas and the Rician K-factor become very large, channel estimation becomes more robust and the interference can be averaged out; thus, the uplink sum rate improves significantly.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modeling the Visual Pathway for Stimulus Optimization in Brain-Computer Interfaces.\n \n \n \n \n\n\n \n Sobreira, F.; Tremmel, C.; and Krusienski, D. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1672-1675, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ModelingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553194,\n  author = {F. Sobreira and C. Tremmel and D. J. Krusienski},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Modeling the Visual Pathway for Stimulus Optimization in Brain-Computer Interfaces},\n  year = {2018},\n  pages = {1672-1675},\n  abstract = {Common brain-computer interface (BCI) paradigms such as P300 Speller and steady-state visual evoked potential (SSVEP)-based interfaces use brain responses to visual stimuli to identify the user's intended target to achieve device control. Many BCI paradigms and decoding approaches do not directly consider the underlying physiology of the sensory pathway and brain responses. By accurately modeling the sensory pathway, it is possible to design new spatial and temporal stimulus patterns to enhance brain response characteristics. This study presents a combined model of the human retina with an artificial neural network (ANN), trained on actual EEG data, in order to estimate electroencephalographic (EEG) brain activity. Based on this new visual pathway model, techniques can be developed to create and validate improved stimulus sequences for BCIs and other neurotechnologies.},\n  keywords = {brain;brain-computer interfaces;electroencephalography;eye;neural nets;neurophysiology;visual evoked potentials;brain responses;visual stimuli;device control;BCI paradigms;decoding approaches;sensory pathway;spatial stimulus patterns;temporal stimulus patterns;brain response characteristics;combined model;electroencephalographic brain activity;visual pathway model;improved stimulus sequences;stimulus optimization;brain-computer interface paradigms;Brain modeling;Visualization;Retina;Electroencephalography;Correlation;Electrodes;Physiology},\n  doi = {10.23919/EUSIPCO.2018.8553194},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570430374.pdf},\n}\n\n
\n
\n\n\n
\n Common brain-computer interface (BCI) paradigms such as P300 Speller and steady-state visual evoked potential (SSVEP)-based interfaces use brain responses to visual stimuli to identify the user's intended target to achieve device control. Many BCI paradigms and decoding approaches do not directly consider the underlying physiology of the sensory pathway and brain responses. By accurately modeling the sensory pathway, it is possible to design new spatial and temporal stimulus patterns to enhance brain response characteristics. This study presents a combined model of the human retina with an artificial neural network (ANN), trained on actual EEG data, in order to estimate electroencephalographic (EEG) brain activity. Based on this new visual pathway model, techniques can be developed to create and validate improved stimulus sequences for BCIs and other neurotechnologies.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Extrapolated Projection Methods for PAPR Reduction.\n \n \n \n \n\n\n \n Fink, J.; Cavalcante, R. L. G.; Jung, P.; and Stanczak, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1810-1814, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ExtrapolatedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553195,\n  author = {J. Fink and R. L. G. Cavalcante and P. Jung and S. Stanczak},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Extrapolated Projection Methods for PAPR Reduction},\n  year = {2018},\n  pages = {1810-1814},\n  abstract = {For more than a decade, there has been a significant research effort to reduce the peak-to-average power ratio (PAPR) in orthogonal frequency-division multiplexing (OFDM) systems. This effort has been mainly driven by the need for enhancing the efficiency of power amplifiers. In this paper, we formulate the PAPR reduction problem as a feasibility problem in a real Hilbert space, and provide algorithmic solutions based on extrapolated projection methods with suitably constructed constraint sets. This set-theoretic approach provides a high flexibility and includes various existing PAPR reduction techniques as special cases. In particular, it allows for balancing between the spectral efficiency and signal distortion on a symbol-to-symbol basis, while supporting arbitrary combinations of quadrature amplitude modulation (QAM) constellations. Moreover, we extend the proposed approach to reuse the phase of pilot subcarriers that are simultaneously used for channel estimation. 
Simulations show remarkable performance gains resulting from extrapolation, which makes it possible to achieve a considerable PAPR reduction in just a few iterations with low computational cost.},\n  keywords = {channel estimation;extrapolation;Hilbert spaces;OFDM modulation;power amplifiers;quadrature amplitude modulation;extrapolated projection methods;peak-to-average power ratio;power amplifiers;PAPR reduction problem;Hilbert space;set-theoretic approach;spectral efficiency;symbol-to-symbol basis;quadrature amplitude modulation constellations;orthogonal frequency-division multiplexing;OFDM;signal distortion;QAM constellations;pilot subcarriers;channel estimation;Peak to average power ratio;Hilbert space;Frequency-domain analysis;Time-domain analysis;Distortion;Quadrature amplitude modulation;Extrapolated projection methods;OFDM;set-theoretic PAPR reduction},\n  doi = {10.23919/EUSIPCO.2018.8553195},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439087.pdf},\n}\n\n
\n
\n\n\n
\n For more than a decade, there has been a significant research effort to reduce the peak-to-average power ratio (PAPR) in orthogonal frequency-division multiplexing (OFDM) systems. This effort has been mainly driven by the need for enhancing the efficiency of power amplifiers. In this paper, we formulate the PAPR reduction problem as a feasibility problem in a real Hilbert space, and provide algorithmic solutions based on extrapolated projection methods with suitably constructed constraint sets. This set-theoretic approach provides a high flexibility and includes various existing PAPR reduction techniques as special cases. In particular, it allows for balancing between the spectral efficiency and signal distortion on a symbol-to-symbol basis, while supporting arbitrary combinations of quadrature amplitude modulation (QAM) constellations. Moreover, we extend the proposed approach to reuse the phase of pilot subcarriers that are simultaneously used for channel estimation. Simulations show remarkable performance gains resulting from extrapolation, which makes it possible to achieve a considerable PAPR reduction in just a few iterations with low computational cost.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Machine Learning for User Traffic Classification in Wireless Systems.\n \n \n \n \n\n\n \n Testi, E.; Favarelli, E.; and Giorgetti, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2040-2044, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"MachinePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553196,\n  author = {E. Testi and E. Favarelli and A. Giorgetti},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Machine Learning for User Traffic Classification in Wireless Systems},\n  year = {2018},\n  pages = {2040-2044},\n  abstract = {The ability to answer all important questions about the radio-frequency (RF) scene is essential for cognitive radios (CRs) to be effective. In this paper, we propose an RF-based automatic traffic recognizer that, observing the radio spectrum emitted by a communication link and exploiting machine learning (ML) techniques, is able to distinguish between two types of data streams. Numerical results based on real waveforms collected by an RF sensor demonstrate that over-the-air user traffic classification is possible with an accuracy of 97% at high signal-to-noise ratios (SNRs). Moreover, we show that using a neural network (NN), very good classification performance can also be achieved at low SNRs (around 2 dB). Finally, the impact of the observed RF bandwidth and the acquisition time window on the classification accuracy is analyzed in detail.},\n  keywords = {cognitive radio;learning (artificial intelligence);neural nets;radio spectrum management;telecommunication computing;telecommunication traffic;wireless sensor networks;RF sensor;over-the-air user traffic classification;signal-to-noise ratios;observed RF bandwidth;wireless systems;cognitive radios;communication link;data streaming;neural network;SNR;machine learning techniques;radiofrequency scene;CR;RF-based automatic traffic recognizer;radio spectrum emission;ML techniques;NN;time window acquisition;Radio frequency;Neural networks;Signal processing algorithms;Support vector machines;Signal to noise ratio;Feature extraction;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553196},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437051.pdf},\n}\n\n
\n
\n\n\n
\n The ability to answer all important questions about the radio-frequency (RF) scene is essential for cognitive radios (CRs) to be effective. In this paper, we propose an RF-based automatic traffic recognizer that, observing the radio spectrum emitted by a communication link and exploiting machine learning (ML) techniques, is able to distinguish between two types of data streams. Numerical results based on real waveforms collected by an RF sensor demonstrate that over-the-air user traffic classification is possible with an accuracy of 97% at high signal-to-noise ratios (SNRs). Moreover, we show that using a neural network (NN), very good classification performance can also be achieved at low SNRs (around 2 dB). Finally, the impact of the observed RF bandwidth and the acquisition time window on the classification accuracy is analyzed in detail.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Compressed Encoding Scheme for Approximate Tdoa Estimation.\n \n \n \n \n\n\n \n Vargas, E.; Hopgood, J. R.; Brown, K.; and Subr, K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 346-350, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553197,\n  author = {E. Vargas and J. R. Hopgood and K. Brown and K. Subr},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Compressed Encoding Scheme for Approximate Tdoa Estimation},\n  year = {2018},\n  pages = {346-350},\n  abstract = {Accurate estimation of Time-Difference of Arrivals (TDOAs) is necessary to perform accurate sound source localization. The problem has traditionally been solved by using methods such as Generalized Cross-Correlation, which uses the entire signal to accurately estimate TDOAs. However, this could pose a problem in distributed sensor networks in which the amount of data that can be transmitted from each sensor to a fusion center is limited, such as in underwater scenarios or other challenging environments. Inspired by approaches from computer vision, in this paper we identify Scale-Invariant Feature Transform (SIFT) keypoints in the signal spectrogram. We perform cross-correlation on the signal using only the information available at those extracted keypoints. We test our algorithm in scenarios featuring different noise and reverberation conditions, and using different speech signals and source locations. 
We show that our algorithm can estimate Time-Difference of Arrivals (TDOAs) and the source location within an acceptable error range at a compression ratio of 40:1.},\n  keywords = {acoustic radiators;acoustic signal processing;correlation methods;data compression;direction-of-arrival estimation;distributed sensors;feature extraction;reverberation;time-of-arrival estimation;transforms;compressed encoding scheme;accurate sound source localization;distributed sensor networks;fusion center;underwater scenarios;computer vision;signal spectrogram;extracted keypoints;reverberation conditions;source location;compression ratio;TDOA;generalized cross-correlation;scale-invariant feature transform;speech signals;Spectrogram;Reverberation;Microphones;Sensor fusion;Time-frequency analysis;Signal to noise ratio;microphone arrays;time difference estimation;signal compressed encoding},\n  doi = {10.23919/EUSIPCO.2018.8553197},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437125.pdf},\n}\n\n
\n
\n\n\n
\n Accurate estimation of Time-Difference of Arrivals (TDOAs) is necessary to perform accurate sound source localization. The problem has traditionally been solved by using methods such as Generalized Cross-Correlation, which uses the entire signal to accurately estimate TDOAs. However, this could pose a problem in distributed sensor networks in which the amount of data that can be transmitted from each sensor to a fusion center is limited, such as in underwater scenarios or other challenging environments. Inspired by approaches from computer vision, in this paper we identify Scale-Invariant Feature Transform (SIFT) keypoints in the signal spectrogram. We perform cross-correlation on the signal using only the information available at those extracted keypoints. We test our algorithm in scenarios featuring different noise and reverberation conditions, and using different speech signals and source locations. We show that our algorithm can estimate Time-Difference of Arrivals (TDOAs) and the source location within an acceptable error range at a compression ratio of 40:1.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Capsule Routing for Sound Event Detection.\n \n \n \n \n\n\n \n Iqbal, T.; Xu, Y.; Kong, Q.; and Wang, W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2255-2259, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CapsulePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553198,\n  author = {T. Iqbal and Y. Xu and Q. Kong and W. Wang},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Capsule Routing for Sound Event Detection},\n  year = {2018},\n  pages = {2255-2259},\n  abstract = {The detection of acoustic scenes is a challenging problem in which environmental sound events must be detected from a given audio signal. This includes classifying the events as well as estimating their onset and offset times. We approach this problem with a neural network architecture that uses the recently-proposed capsule routing mechanism. A capsule is a group of activation units representing a set of properties for an entity of interest, and the purpose of routing is to identify part-whole relationships between capsules. That is, a capsule in one layer is assumed to belong to a capsule in the layer above in terms of the entity being represented. Using capsule routing, we wish to train a network that can learn global coherence implicitly, thereby improving generalization performance. Our proposed method is evaluated on Task 4 of the DCASE 2017 challenge. Results show that classification performance is state-of-the-art, achieving an F-score of 58.6%. In addition, overfitting is reduced considerably compared to other architectures.},\n  keywords = {acoustic signal detection;audio signal processing;neural nets;capsule routing;sound event detection;environmental sound events;neural network architecture;audio signal;offset times;onset times;activation units;Routing;Neural networks;Feature extraction;Task analysis;Convolution;Event detection},\n  doi = {10.23919/EUSIPCO.2018.8553198},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437131.pdf},\n}\n\n
\n
\n\n\n
\n The detection of acoustic scenes is a challenging problem in which environmental sound events must be detected from a given audio signal. This includes classifying the events as well as estimating their onset and offset times. We approach this problem with a neural network architecture that uses the recently-proposed capsule routing mechanism. A capsule is a group of activation units representing a set of properties for an entity of interest, and the purpose of routing is to identify part-whole relationships between capsules. That is, a capsule in one layer is assumed to belong to a capsule in the layer above in terms of the entity being represented. Using capsule routing, we wish to train a network that can learn global coherence implicitly, thereby improving generalization performance. Our proposed method is evaluated on Task 4 of the DCASE 2017 challenge. Results show that classification performance is state-of-the-art, achieving an F-score of 58.6%. In addition, overfitting is reduced considerably compared to other architectures.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Image Analysis Based Fish Tail Beat Frequency Estimation for Fishway Efficiency.\n \n \n \n \n\n\n \n Yildirym, Y.; Töreyin, B. U.; Küçükali, S.; Verep, B.; Turan, D.; and Alp, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1790-1794, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ImagePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553199,\n  author = {Y. Yildirym and B. U. Töreyin and S. Küçükali and B. Verep and D. Turan and A. Alp},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Image Analysis Based Fish Tail Beat Frequency Estimation for Fishway Efficiency},\n  year = {2018},\n  pages = {1790-1794},\n  abstract = {In this paper, we propose image analysis based methods for estimating fish tail beat frequency, which is an indicator of fish energy consumption at fish passage structures. For this purpose, average magnitude difference and autocorrelation function based periodicity detection techniques are utilized. Actual fish images are acquired using a visible range camera installed in a brush type fish pass in Ikizdere River, near Rize, Turkey, which is very rich in biodiversity. Results show that image analysis based periodicity detection methods can be used for fishway efficiency evaluation purposes. To the best of the authors' knowledge, this is the first study that automatically estimates fish tail beat frequency using image analysis. The findings of this study are expected to have implications for fish monitoring and fishway design.},\n  keywords = {aquaculture;cameras;correlation methods;frequency estimation;image processing;rivers;zoology;fish tail beat frequency estimation;fish energy consumption;fish passage structures;brush type fish pass;fish monitoring;fishway design;fish images;fishway efficiency;image analysis based fish tail beat frequency estimation;Videos;Correlation;Fish;Cameras;Rivers;Image analysis;fish detection;fishway;frequency detection;tail beat frequency;environmental monitoring},\n  doi = {10.23919/EUSIPCO.2018.8553199},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437333.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose image analysis based methods for estimating fish tail beat frequency, which is an indicator of fish energy consumption at fish passage structures. For this purpose, average magnitude difference and autocorrelation function based periodicity detection techniques are utilized. Actual fish images are acquired using a visible range camera installed in a brush type fish pass in Ikizdere River, near Rize, Turkey, which is very rich in biodiversity. Results show that image analysis based periodicity detection methods can be used for fishway efficiency evaluation purposes. To the best of the authors' knowledge, this is the first study that automatically estimates fish tail beat frequency using image analysis. The findings of this study are expected to have implications for fish monitoring and fishway design.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Riemannian joint dimensionality reduction and dictionary learning on symmetric positive definite manifolds.\n \n \n \n \n\n\n \n Kasai, H.; and Mishra, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2010-2014, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RiemannianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553200,\n  author = {H. Kasai and B. Mishra},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Riemannian joint dimensionality reduction and dictionary learning on symmetric positive definite manifolds},\n  year = {2018},\n  pages = {2010-2014},\n  abstract = {Dictionary learning (DL) and dimensionality reduction (DR) are powerful tools to analyze high-dimensional noisy signals. This paper proposes a novel Riemannian joint dimensionality reduction and dictionary learning (R-JDRDL) on symmetric positive definite (SPD) manifolds for classification tasks. The joint learning considers the interaction between dimensionality reduction and dictionary learning procedures by connecting them into a unified framework. We exploit a Riemannian optimization framework for solving DL and DR problems jointly. Finally, we demonstrate that the proposed R-JDRDL outperforms existing state-of-the-art algorithms when used for image classification tasks.},\n  keywords = {covariance analysis;image classification;learning (artificial intelligence);optimisation;symmetric positive definite manifolds;joint learning;dictionary learning procedures;Riemannian optimization framework;R-JDRDL;dictionary learning;high-dimensional noisy signals;novel Riemannian joint dimensionality reduction;Manifolds;Dictionaries;Task analysis;Sparse matrices;Measurement;Signal processing algorithms;Optimization;dictionary learning;dimensionality reduction;SPD matrix;Riemannian manifold},\n  doi = {10.23919/EUSIPCO.2018.8553200},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437476.pdf},\n}\n\n
\n
\n\n\n
\n Dictionary learning (DL) and dimensionality reduction (DR) are powerful tools to analyze high-dimensional noisy signals. This paper proposes a novel Riemannian joint dimensionality reduction and dictionary learning (R-JDRDL) on symmetric positive definite (SPD) manifolds for classification tasks. The joint learning considers the interaction between dimensionality reduction and dictionary learning procedures by connecting them into a unified framework. We exploit a Riemannian optimization framework for solving DL and DR problems jointly. Finally, we demonstrate that the proposed R-JDRDL outperforms existing state-of-the-art algorithms when used for image classification tasks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Unsupervised frame Selection Technique for Robust Emotion Recognition in Noisy Speech.\n \n \n \n \n\n\n \n Pandharipande, M.; Chakraborty, R.; Panda, A.; and Kopparapu, S. K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2055-2059, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553202,\n  author = {M. Pandharipande and R. Chakraborty and A. Panda and S. K. Kopparapu},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An Unsupervised frame Selection Technique for Robust Emotion Recognition in Noisy Speech},\n  year = {2018},\n  pages = {2055-2059},\n  abstract = {Automatic emotion recognition with good accuracy has been demonstrated for clean speech, but the performance deteriorates quickly when speech is contaminated with noise. In this paper, we propose a front-end voice activity detector (VAD)-based unsupervised method to select the frames with a relatively better signal to noise ratio (SNR) in the spoken utterances. Then we extract a large number of statistical features from low-level audio descriptors for the purpose of emotion recognition by using state-of-the-art classifiers. Extensive experimentation on two standard databases contaminated with 5 types of noise (Babble, F-16, Factory, Volvo, and HF-channel) from the Noisex-92 noise database at 5 different SNR levels (0, 5, 10, 15, 20dB) has been carried out. While performing all experiments to classify emotions both at the categorical and the dimensional spaces, the proposed technique outperforms a Recurrent Neural Network (RNN)-based VAD across all 5 types and levels of noises, and for both the databases.},\n  keywords = {emotion recognition;recurrent neural nets;signal classification;speech recognition;clean speech;front-end voice activity detector-based unsupervised method;frames;relatively better signal;noise ratio;spoken utterances;low-level audio descriptors;extensive experimentation;standard databases;Noisex-92 noise database;emotions;Recurrent Neural Network-based VAD;unsupervised frame selection technique;robust emotion recognition;noisy speech;automatic emotion recognition;SNR levels;noise figure 20.0 dB;noise figure 0 dB;noise figure 5 dB;noise figure 10 dB;noise figure 15 dB;Emotion recognition;Noise measurement;Speech recognition;Databases;Acoustics;Feature extraction;Signal to noise ratio;Speech emotion;Noisy speech;Voice activity detector;Emotion recognition},\n  doi = {10.23919/EUSIPCO.2018.8553202},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438510.pdf},\n}\n\n
\n
\n\n\n
\n Automatic emotion recognition with good accuracy has been demonstrated for clean speech, but the performance deteriorates quickly when speech is contaminated with noise. In this paper, we propose a front-end voice activity detector (VAD)-based unsupervised method to select the frames with a relatively better signal to noise ratio (SNR) in the spoken utterances. Then we extract a large number of statistical features from low-level audio descriptors for the purpose of emotion recognition by using state-of-the-art classifiers. Extensive experimentation on two standard databases contaminated with 5 types of noise (Babble, F-16, Factory, Volvo, and HF-channel) from the Noisex-92 noise database at 5 different SNR levels (0, 5, 10, 15, 20dB) has been carried out. While performing all experiments to classify emotions both at the categorical and the dimensional spaces, the proposed technique outperforms a Recurrent Neural Network (RNN)-based VAD across all 5 types and levels of noises, and for both the databases.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n The Matched Window Reassignment.\n \n \n \n \n\n\n \n Sandsten, M.; Brynolfsson, J.; and Reinhold, I.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2340-2344, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ThePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553204,\n  author = {M. Sandsten and J. Brynolfsson and I. Reinhold},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {The Matched Window Reassignment},\n  year = {2018},\n  pages = {2340-2344},\n  abstract = {In this paper we calculate reassigned spectrograms using the envelope of an arbitrary signal as matched window. We show that the matched window then gives the perfectly localized reassignment for any time-translated and frequency-modulated transient signal with corresponding envelope. The general expressions of the corresponding scaled reassignment vectors are derived and the matched window reassignment is evaluated for time-frequency localization as well as for classification. The results show that the accuracy in time- and frequency location is high even when the signal envelope deviates from the matched window and when the SNR is reasonably large. The classification performance based on the matched window reassignment and the Rényi entropy is robust to signal envelope deviations as well as to disturbing noise.},\n  keywords = {entropy;signal classification;signal envelope;scaled reassignment vectors;time-frequency localization;perfectly localized reassignment;matched window reassignment;Time-frequency analysis;Spectrogram;Signal to noise ratio;Entropy;Microsoft Windows;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553204},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437026.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we calculate reassigned spectrograms using the envelope of an arbitrary signal as matched window. We show that the matched window then gives the perfectly localized reassignment for any time-translated and frequency-modulated transient signal with corresponding envelope. The general expressions of the corresponding scaled reassignment vectors are derived and the matched window reassignment is evaluated for time-frequency localization as well as for classification. The results show that the accuracy in time- and frequency location is high even when the signal envelope deviates from the matched window and when the SNR is reasonably large. The classification performance based on the matched window reassignment and the Rényi entropy is robust to signal envelope deviations as well as to disturbing noise.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Graph-Based Inpainting of Disocclusion Holes for Zooming in 3D Scenes.\n \n \n \n \n\n\n \n Akyazi, P.; and Frossard, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 867-871, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Graph-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553205,\n  author = {P. Akyazi and P. Frossard},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Graph-Based Inpainting of Disocclusion Holes for Zooming in 3D Scenes},\n  year = {2018},\n  pages = {867-871},\n  abstract = {Color plus depth format allows building 3D representations of scenes within which the users can freely navigate by changing their viewpoints. In this paper we present a framework for view synthesis when the user requests an arbitrary viewpoint that is closer to the 3D scene than the reference image. The requested view constructed via depth-image-based-rendering (DIBR) on the target image plane has missing information due to the expansion of objects and disoccluded areas. Building on our previous work on expansion hole filling, we propose a novel method that adopts a graph-based representation of the target view in order to inpaint the disocclusion holes under sparsity priors. Experimental results indicate that the reconstructed views have PSNR and SSIM quality values that are comparable to those of the state of the art inpainting methods. Visual results show that we are able to preserve details better without introducing blur and reduce artifacts on boundaries between objects on different layers.},\n  keywords = {graph theory;image reconstruction;image representation;image restoration;image texture;rendering (computer graphics);target view;disocclusion holes;reconstructed views;art inpainting methods;graph-based inpainting;color plus depth format;view synthesis;user requests;arbitrary viewpoint;reference image;depth-image-based-rendering;target image plane;disoccluded areas;expansion hole;graph-based representation;Three-dimensional displays;Signal processing;Navigation;Image reconstruction;Dictionaries;Interpolation;Matching pursuit algorithms;Graph signal processing (GSP);depth-image-based-rendering (DIBR);free viewpoint navigation;inpainting},\n  doi = {10.23919/EUSIPCO.2018.8553205},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435683.pdf},\n}\n\n
\n
\n\n\n
\n Color plus depth format allows building 3D representations of scenes within which the users can freely navigate by changing their viewpoints. In this paper we present a framework for view synthesis when the user requests an arbitrary viewpoint that is closer to the 3D scene than the reference image. The requested view constructed via depth-image-based-rendering (DIBR) on the target image plane has missing information due to the expansion of objects and disoccluded areas. Building on our previous work on expansion hole filling, we propose a novel method that adopts a graph-based representation of the target view in order to inpaint the disocclusion holes under sparsity priors. Experimental results indicate that the reconstructed views have PSNR and SSIM quality values that are comparable to those of the state of the art inpainting methods. Visual results show that we are able to preserve details better without introducing blur and reduce artifacts on boundaries between objects on different layers.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An acoustic image-source characterisation of surface profiles.\n \n \n \n \n\n\n \n Dawson, P. J.; De Sena, E.; and Naylor, P. A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2130-2134, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553206,\n  author = {P. J. Dawson and E. {De Sena} and P. A. Naylor},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An acoustic image-source characterisation of surface profiles},\n  year = {2018},\n  pages = {2130-2134},\n  abstract = {The image-source method models the specular reflection from a plane by means of a secondary source positioned at the source's reflected image. The method has been widely used in acoustics to model the reverberant field of rectangular rooms, but can also be used for general-shaped rooms and non-flat reflectors. This paper explores the relationship between the physical properties of a non-flat reflector and the statistical properties of the associated cloud of image-sources. It is shown here that the standard deviation of the image-sources is strongly correlated with the ratio between depth and width of the reflector's spatial features.},\n  keywords = {acoustic imaging;acoustic radiators;acoustic signal processing;acoustic wave reflection;architectural acoustics;reverberation;source reflected image;reverberant field;statistical properties;standard deviation;surface profiles;acoustic image-source characterisation;nonflat reflector;general-shaped rooms;rectangular rooms;secondary source;specular reflection;image-source method models;Image edge detection;Optical surface waves;Sea surface;Flip chip solder joints;Acoustics;Rough surfaces;Surface roughness},\n  doi = {10.23919/EUSIPCO.2018.8553206},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439432.pdf},\n}\n\n
\n
\n\n\n
\n The image-source method models the specular reflection from a plane by means of a secondary source positioned at the source's reflected image. The method has been widely used in acoustics to model the reverberant field of rectangular rooms, but can also be used for general-shaped rooms and non-flat reflectors. This paper explores the relationship between the physical properties of a non-flat reflector and the statistical properties of the associated cloud of image-sources. It is shown here that the standard deviation of the image-sources is strongly correlated with the ratio between depth and width of the reflector's spatial features.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robotic Mobility Diversity Algorithm with Continuous Search Space.\n \n \n \n \n\n\n \n Licea, D. B.; McLernon, D.; Ghogho, M.; Nurellari, E.; and Raza Zaidi, S. A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 702-706, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RoboticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553208,\n  author = {D. B. Licea and D. McLernon and M. Ghogho and E. Nurellari and S. A. {Raza Zaidi}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Robotic Mobility Diversity Algorithm with Continuous Search Space},\n  year = {2018},\n  pages = {702-706},\n  abstract = {Small scale fading makes the wireless channel gain vary significantly over small distances and in the context of classical communication systems it can be detrimental to performance. But in the context of mobile robot (MR) wireless communications, we can take advantage of the fading using a mobility diversity algorithm (MDA) to deliberately locate the MR at a point where the channel gain is high. There are two classes of MDAs. In the first class, the MR explores various points, stops at each one to collect channel measurements and then locates the best position to establish communications. In the second class the MR moves, without stopping, along a continuous path while collecting channel measurements and then stops at the end of the path. It determines the best point to establish communications. Until now, the shape of the continuous path for such MDAs has been arbitrarily selected and currently there is no method to optimize it. In this paper, we propose a method to optimize such a path. Simulation results show that such optimized paths provide the MDAs with an increased performance, enabling them to experience higher channel gains while using less mechanical energy for the MR motion.},\n  keywords = {fading channels;mobile robots;wireless channels;mobile robot;channel measurements;continuous path;robotic mobility diversity algorithm;continuous search space;wireless channel gain;Fading channels;Wireless communication;Space exploration;Shape;Interpolation;Correlation;Europe;Rayleigh fading;correlated channels;spatial statistics;diversity;robotics},\n  doi = {10.23919/EUSIPCO.2018.8553208},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436711.pdf},\n}\n\n
\n
\n\n\n
\n Small scale fading makes the wireless channel gain vary significantly over small distances and in the context of classical communication systems it can be detrimental to performance. But in the context of mobile robot (MR) wireless communications, we can take advantage of the fading using a mobility diversity algorithm (MDA) to deliberately locate the MR at a point where the channel gain is high. There are two classes of MDAs. In the first class, the MR explores various points, stops at each one to collect channel measurements and then locates the best position to establish communications. In the second class the MR moves, without stopping, along a continuous path while collecting channel measurements and then stops at the end of the path. It determines the best point to establish communications. Until now, the shape of the continuous path for such MDAs has been arbitrarily selected and currently there is no method to optimize it. In this paper, we propose a method to optimize such a path. Simulation results show that such optimized paths provide the MDAs with an increased performance, enabling them to experience higher channel gains while using less mechanical energy for the MR motion.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparsity Based Framework for Spatial Sound Reproduction in Spherical Harmonic Domain.\n \n \n \n \n\n\n \n Routray, G.; and Hegde, R. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2190-2194, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SparsityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553209,\n  author = {G. Routray and R. M. Hegde},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sparsity Based Framework for Spatial Sound Reproduction in Spherical Harmonic Domain},\n  year = {2018},\n  pages = {2190-2194},\n  abstract = {In this paper, a novel sparsity based framework is proposed for accurate spatial sound field reproduction in spherical harmonic domain. The proposed framework can effectively reduce the number of loudspeakers required to reproduce the desired sound field using higher order ambisonics (HOA) over a fixed listening area. Although HOA provides accurate reproduction of spatial sound, it has a disadvantage in terms of the restriction on the area of sound reproduction. This area can be increased with the increase in the number of loudspeakers during reproduction. In order to limit the use of a large number of loudspeakers, the sparse nature of the weight vector in the HOA signal model is utilized in this work. The problem of obtaining the weight vector is first formulated as a constrained optimization problem which is difficult to solve due to the orthogonality property of the spherical harmonic matrix. This problem is therefore reformulated to exploit the sparse nature of the weight vector. The solution is then obtained by using the Bregman iteration method. Experiments on sound field reproduction in free space using the proposed sparsity based method are conducted using loudspeaker arrays. Performance improvements are noted when compared to least squares and compressed sensing methods in terms of sound field reproduction accuracy, subjective, and objective evaluations.},\n  keywords = {acoustic field;acoustic signal processing;iterative methods;loudspeakers;optimisation;sound reproduction;vectors;weight vector;HOA signal model;spherical harmonic matrix;loudspeaker arrays;sound field reproduction accuracy;spatial sound reproduction;spherical harmonic domain;higher order ambisonics;HOA;spatial sound field reproduction;sparsity based framework;orthogonality property;Bregman iteration method;Loudspeakers;Harmonic analysis;Sparse matrices;Rendering (computer graphics);Feeds;Iterative methods;Compressed sensing},\n  doi = {10.23919/EUSIPCO.2018.8553209},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437981.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a novel sparsity based framework is proposed for accurate spatial sound field reproduction in spherical harmonic domain. The proposed framework can effectively reduce the number of loudspeakers required to reproduce the desired sound field using higher order ambisonics (HOA) over a fixed listening area. Although HOA provides accurate reproduction of spatial sound, it has a disadvantage in terms of the restriction on the area of sound reproduction. This area can be increased with the increase in the number of loudspeakers during reproduction. In order to limit the use of a large number of loudspeakers, the sparse nature of the weight vector in the HOA signal model is utilized in this work. The problem of obtaining the weight vector is first formulated as a constrained optimization problem which is difficult to solve due to the orthogonality property of the spherical harmonic matrix. This problem is therefore reformulated to exploit the sparse nature of the weight vector. The solution is then obtained by using the Bregman iteration method. Experiments on sound field reproduction in free space using the proposed sparsity based method are conducted using loudspeaker arrays. Performance improvements are noted when compared to least squares and compressed sensing methods in terms of sound field reproduction accuracy, subjective, and objective evaluations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Impacts of Viewing Conditions on HDR-VDP2.\n \n \n \n \n\n\n \n Rousselot, M.; Auffret, É.; Ducloux, X.; Meur, O. L.; and Cozot, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1442-1446, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ImpactsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553212,\n  author = {M. Rousselot and É. Auffret and X. Ducloux and O. L. Meur and R. Cozot},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Impacts of Viewing Conditions on HDR-VDP2},\n  year = {2018},\n  pages = {1442-1446},\n  abstract = {HDR (High Dynamic Range) and WCG (Wide Color Gamut) significantly increase the quality of the viewing experience by rendering impressive images and videos. Automatically assessing the quality of these HDR WCG images is one crucial objective in the broadcast process. Full-reference HDR metrics have been designed in recent years to achieve this objective: HDR-VDP2, HDR-VQM, PU-encoding metrics. Recent studies have pointed out that HDR-VDP2 is one of the best metrics. Unfortunately, HDR-VDP2 is quite complex to use due to numerous and sometimes hard-to-know parameters such as display emission spectrum, surround luminance and angular resolution. In this paper, we show that HDR-VDP2 does not require an accurate knowledge of the viewing condition parameters. To that end, we not only test the impact of these parameters on existing image databases of subjective quality scores, but also propose a new and complementary image database made with a different HDR display.},\n  keywords = {display instrumentation;image colour analysis;rendering (computer graphics);HDR display;wide color gamut;high dynamic range;image rendering;video rendering;HDR WCG imaging;full-reference HDR metric design;display emission spectrum;luminance;HDR-VQM;HDR-VDP2;Measurement;Distortion;Transform coding;Image color analysis;Image databases;Image coding;Image quality;image database;Dynamic range},\n  doi = {10.23919/EUSIPCO.2018.8553212},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437237.pdf},\n}\n\n
\n
\n\n\n
\n HDR (High Dynamic Range) and WCG (Wide Color Gamut) significantly increase the quality of the viewing experience by rendering impressive images and videos. Automatically assessing the quality of these HDR WCG images is one crucial objective in the broadcast process. Full-reference HDR metrics have been designed in recent years to achieve this objective: HDR-VDP2, HDR-VQM, PU-encoding metrics. Recent studies have pointed out that HDR-VDP2 is one of the best metrics. Unfortunately, HDR-VDP2 is quite complex to use due to numerous and sometimes hard-to-know parameters such as display emission spectrum, surround luminance and angular resolution. In this paper, we show that HDR-VDP2 does not require an accurate knowledge of the viewing condition parameters. To that end, we not only test the impact of these parameters on existing image databases of subjective quality scores, but also propose a new and complementary image database made with a different HDR display.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables.\n \n \n \n \n\n\n \n Kolosnjaji, B.; Demontis, A.; Biggio, B.; Maiorca, D.; Giacinto, G.; Eckert, C.; and Roli, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 533-537, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AdversarialPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553214,\n  author = {B. Kolosnjaji and A. Demontis and B. Biggio and D. Maiorca and G. Giacinto and C. Eckert and F. Roli},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables},\n  year = {2018},\n  pages = {533-537},\n  abstract = {Machine learning has already been exploited as a useful tool for detecting malicious executable files. Data retrieved from malware samples, such as header fields, instruction sequences, or even raw bytes, is leveraged to learn models that discriminate between benign and malicious software. However, it has also been shown that machine learning and deep neural networks can be fooled by evasion attacks (also known as adversarial examples), i.e., small changes to the input data that cause misclassification at test time. In this work, we investigate the vulnerability of malware detection methods that use deep networks to learn from raw bytes. We propose a gradient-based attack that is capable of evading a recently-proposed deep network suited to this purpose by only changing a few specific bytes at the end of each malware sample, while preserving its intrusive functionality. Promising results show that our adversarial malware binaries evade the targeted network with high probability, even though less than 1% of their bytes are modified.},\n  keywords = {invasive software;learning (artificial intelligence);neural nets;malware samples;malicious executable files;adversarial malware binaries;gradient-based attack;deep network;malware detection methods;adversarial examples;evasion attacks;deep neural networks;machine learning;malicious software;instruction sequences;Malware;Machine learning;Neural networks;Feature extraction;Signal processing algorithms;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553214},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570440156.pdf},\n}\n
\n
\n\n\n
\n Machine learning has already been exploited as a useful tool for detecting malicious executable files. Data retrieved from malware samples, such as header fields, instruction sequences, or even raw bytes, is leveraged to learn models that discriminate between benign and malicious software. However, it has also been shown that machine learning and deep neural networks can be fooled by evasion attacks (also known as adversarial examples), i.e., small changes to the input data that cause misclassification at test time. In this work, we investigate the vulnerability of malware detection methods that use deep networks to learn from raw bytes. We propose a gradient-based attack that is capable of evading a recently-proposed deep network suited to this purpose by only changing a few specific bytes at the end of each malware sample, while preserving its intrusive functionality. Promising results show that our adversarial malware binaries evade the targeted network with high probability, even though less than 1% of their bytes are modified.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Performance Bounds Analysis for Semi-Blind Channel Estimation with Pilot Contamination in Massive MIMO-OFDM Systems.\n \n \n \n \n\n\n \n Rekik, O.; Ladaycia, A.; Abed-Meraim, K.; and Mokraoui, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1267-1271, Sep. 2018. \n \n\n\n\n
@InProceedings{8553215,
  author = {O. Rekik and A. Ladaycia and K. Abed-Meraim and A. Mokraoui},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Performance Bounds Analysis for Semi-Blind Channel Estimation with Pilot Contamination in Massive MIMO-OFDM Systems},
  year = {2018},
  pages = {1267-1271},
  abstract = {Pilot contamination, in a massive Multiple-Input Multiple-Output (MIMO) system, is an undeniable challenging issue severely affecting the performance of the system by including channel estimation errors. The aim of this paper is to investigate the effectiveness of semi-blind channel estimation approaches in massive MIMO-OFDM (Orthogonal-Frequency Division-Multiplexing) systems. For an estimator-independent study, the performance analysis is carried out using the Cramer Rao Bound (CRB) derivation for pilot-based and semi-blind channel estimation strategies. This analysis demonstrates in particular that: (i) when considering the finite alphabet nature of communication signals, it is possible to efficiently solve the pilot contamination problem with semi-blind channel estimation approach; and (ii) the Second Order Statistics (SOS) only are not sufficient to address the full channel identifiability even if the semi-blind approach is considered.},
  keywords = {channel estimation;MIMO communication;OFDM modulation;statistical analysis;wireless channels;massive MIMO-OFDM systems;estimator-independent study;semiblind channel estimation strategies;pilot contamination problem;channel identifiability;massive multiple-input multiple-output system;performance bound analysis;semiblind channel estimation error approach;orthogonal-frequency division-multiplexing system;Cramer Rao bound derivation;CRB derivation;pilot-based channel estimation error approach;finite alphabet nature;second order statistics;SOS;Channel estimation;Contamination;MIMO communication;OFDM;Antennas;Europe;Signal processing},
  doi = {10.23919/EUSIPCO.2018.8553215},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435027.pdf},
}
Coverage-Improvement of V2I Communication Through Car-Relays in Microcellular Urban Networks. Elbal, B. R.; Müller, M. K.; Schwarz, S.; and Rupp, M. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1522-1526, Sep. 2018.
@InProceedings{8553216,
  author = {B. R. Elbal and M. K. Müller and S. Schwarz and M. Rupp},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Coverage-Improvement of V2I Communication Through Car-Relays in Microcellular Urban Networks},
  year = {2018},
  pages = {1522-1526},
  abstract = {In this paper, we consider a microcellular urban network, where micro-cells provide a wireless connection to vehicular users. We focus on vehicular relaying to enhance the quality of the V2I link and therefore to improve the overall coverage in the network. To quantify the coverage gain, we compare the performance of direct communication between microcell and vehicular user with the relay-aided communication. In the examined scenario, we create a Manhattan grid of streets where in each street micro-cell base stations and vehicular users are randomly placed according to a Poisson point process. The employed pathloss distinguishes whether the transmitter is in the same street as the receiver, in a crossing or in a parallel street. We investigate our model analytically by leveraging tools from stochastic geometry as well as by Monte Carlo system level simulations. We compare results for the relay-assisted link and the direct link depending on the user density in terms of pathloss, SINR and coverage.},
  keywords = {microcellular radio;relay networks (telecommunication);vehicular ad hoc networks;user density;direct link;relay-assisted link;parallel street;street microcell base stations;relay-aided communication;coverage gain;vehicular relaying;vehicular user;wireless connection;microcellular urban network;car-relays;coverage-improvement;Relays;Transmitters;Interference;Uplink;Receivers;Signal to noise ratio;vehicle-to-vehicle communications;stochastic geometry;system level simulation;coverage improvement},
  doi = {10.23919/EUSIPCO.2018.8553216},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437317.pdf},
}
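The entry above evaluates coverage by dropping base stations and users along streets according to a Poisson point process and comparing links in terms of pathloss and SINR. The toy Monte Carlo below sketches that style of evaluation for a single street with a log-distance pathloss; the density, pathloss exponent, and threshold are invented for illustration, and the paper's Manhattan-grid, relay-assisted model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def ppp_1d(rate, length, rng):
    # 1-D Poisson point process on [0, length]: Poisson count, uniform positions
    n = rng.poisson(rate * length)
    return rng.uniform(0.0, length, size=n)

def sinr_direct(user_x, bs_x, alpha=3.5, noise=1e-9):
    # Log-distance pathloss; the strongest cell serves, all others interfere
    d = np.maximum(np.abs(bs_x - user_x), 1.0)   # 1 m minimum distance
    rx = d ** (-alpha)
    s = rx.max()
    return s / (rx.sum() - s + noise)

# Coverage probability P[SINR > 0 dB] for the direct link, by Monte Carlo
trials, street, lam, thr = 2000, 1000.0, 0.01, 1.0   # 0.01 micro-cells per metre
hits = valid = 0
for _ in range(trials):
    bs = ppp_1d(lam, street, rng)
    if bs.size == 0:
        continue                                  # no cell on this street drop
    valid += 1
    hits += sinr_direct(rng.uniform(0.0, street), bs) > thr
coverage = hits / valid
print(f"direct-link coverage at 0 dB: {coverage:.2f}")
```

Relay-assisted coverage would be estimated the same way, replacing the direct SINR with the bottleneck of the two hops.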
Sparse Autoencoders Using Non-smooth Regularization. Amini, S.; and Ghaemmaghami, S. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2000-2004, Sep. 2018.
@InProceedings{8553217,
  author = {S. Amini and S. Ghaemmaghami},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Sparse Autoencoders Using Non-smooth Regularization},
  year = {2018},
  pages = {2000-2004},
  abstract = {Autoencoder, at the heart of a deep learning structure, plays an important role in extracting abstract representation of a set of input training patterns. Abstract representation contains informative features to demonstrate a large set of data patterns in an optimal way in certain applications. It is shown that through sparse regularization of outputs of the hidden units (codes) in an autoencoder, the quality of codes can be enhanced that leads to a higher learning performance in applications like classification. Almost all methods trying to achieve code sparsity in an autoencoder use a smooth approximation of l1 norm, as the best convex approximation of pseudo l0 norm. In this paper, we incorporate sparsity to autoencoder training optimization process using non-smooth convex l1 norm and propose an efficient algorithm to train the structure. The non-smooth l1 regularization have shown its efficiency in imposing sparsity in various applications including feature selection via lasso and sparse representation using basis pursuit. Our experimental results on three benchmark datasets show superiority of this term in training a sparse autoencoder over previously proposed ones. As a byproduct of the proposed method, it can also be used to apply different types of non-smooth regularizers to autoencoder training problem.},
  keywords = {approximation theory;encoding;feature extraction;image classification;image representation;learning (artificial intelligence);optimisation;sparse matrices;convex approximation;autoencoder training optimization process;sparse representation;sparse autoencoder;nonsmooth regularization;deep learning structure;abstract representation;input training patterns;informative features;data patterns;code sparsity;smooth approximation;feature selection;nonsmooth convex norm;lasso representation;Training;Decoding;Encoding;Cost function;Gradient methods;Europe},
  doi = {10.23919/EUSIPCO.2018.8553217},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433458.pdf},
}
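The entry above imposes sparsity on an autoencoder's hidden codes through a non-smooth l1 penalty. As a rough illustration of that objective (not the authors' training algorithm, which uses a dedicated solver), the toy numpy autoencoder below minimises reconstruction error plus an l1 term on the codes with plain subgradient descent; the layer sizes, penalty weight, and step size are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 16))            # toy training patterns

h, lam, lr = 32, 0.1, 0.05                # code size, l1 weight, step size
W1 = rng.normal(scale=0.1, size=(16, h))  # encoder weights
W2 = rng.normal(scale=0.1, size=(h, 16))  # decoder weights

losses = []
for _ in range(300):
    Z = np.maximum(X @ W1, 0.0)           # ReLU hidden codes
    R = Z @ W2 - X                        # reconstruction residual
    # objective: mean squared reconstruction error + non-smooth l1 code penalty
    losses.append((R ** 2).mean() + lam * np.abs(Z).mean())
    gR = 2.0 * R / R.size
    gW2 = Z.T @ gR
    gZ = gR @ W2.T + lam * np.sign(Z) / Z.size   # subgradient of the l1 term
    gZ *= (Z > 0)                         # ReLU gate
    gW1 = X.T @ gZ
    W1 -= lr * gW1
    W2 -= lr * gW2

print(f"objective: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The `np.sign` term is the standard subgradient of the l1 norm; a proximal (soft-thresholding) step on the codes would be the more common way to exploit the non-smooth structure.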
Low-Rank Channel and Interference Estimation in mm-Wave Massive Antenna Arrays. Soatti, G.; Murtada, A.; Nicoli, M.; Gambini, J.; and Spagnolini, U. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 922-926, Sep. 2018.
@InProceedings{8553218,
  author = {G. Soatti and A. Murtada and M. Nicoli and J. Gambini and U. Spagnolini},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Low-Rank Channel and Interference Estimation in mm-Wave Massive Antenna Arrays},
  year = {2018},
  pages = {922-926},
  abstract = {Millimeter wave (mm-Wave) communications are characterized by wideband channels with few directional paths, mostly in line-of-sight. Antenna arrays are mandatory to cope with severe path-loss, and the resulting channel response is sparse in the space-time (ST) domain. This paper addresses the sparsity by proposing a channel estimation method that exploits the algebraic structure of channel and interference, without requiring complex antenna-array calibration procedures. The method relies on the recognition that the ST channel is low-rank and exhibits slowly and fast-varying features (angles/delays of arrival and fading amplitudes, respectively) and, accordingly, that the interference has a slowly-varying spatial covariance with fast-varying amplitudes. The accuracy of the estimation of quasi-stationary components is increased by introducing averaging mechanisms over multiple sequences. Numerical results show that: i) rank-1 is an effective channel-interference representation in mm-Wave setting with severe interference; ii) fundamental limits (derived in closed form) prove the remarkable performance gains in terms of signal-to-interference ratio; iii) circular array arrangement with directive elements is preferable compared to square or triangular configurations.},
  keywords = {calibration;channel estimation;millimetre wave antenna arrays;millimetre wave communication;channel response;channel-interference representation;quasistationary components;fading amplitudes;ST channel;complex antenna-array calibration procedures;algebraic structure;channel estimation method;space-time domain;antenna arrays;directional paths;wideband channels;millimeter wave communications;interference estimation;low-rank channel;Channel estimation;Antenna arrays;Interference;Maximum likelihood estimation;Array signal processing;mm-Wave;space-time channel estimation;subspace methods;antenna array},
  doi = {10.23919/EUSIPCO.2018.8553218},
  issn = {2076-1465},
  month = {Sep.},
}
Graph Structured Dictionary for Regression. Kumar, K.; Chandra, M. G.; Bapna, A.; and Kumar, A. A. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1072-1076, Sep. 2018.
@InProceedings{8553219,
  author = {K. Kumar and M. G. Chandra and A. Bapna and A. A. Kumar},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Graph Structured Dictionary for Regression},
  year = {2018},
  pages = {1072-1076},
  abstract = {Transformations for signals defined on graphs are playing significant role in applying the emerging graph signal processing techniques to different tasks. In this paper we focus on utilizing graph signal dictionary, a data-driven transformation, for regression. Apart from spelling out the joint optimization formulation, as well as the associated iteration steps to arrive at the dictionary and the regression coefficients, the paper provides some initial results to bring out the usefulness of the proposed approach.},
  keywords = {graph theory;iterative methods;optimisation;regression analysis;signal processing;emerging graph signal;graph signal dictionary;data-driven transformation;joint optimization formulation;associated iteration steps;regression coefficients;Dictionaries;Machine learning;Laplace equations;Task analysis;Optimization;Kernel;Signal processing},
  doi = {10.23919/EUSIPCO.2018.8553219},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437921.pdf},
}
Poisson Image Denoising Using Best Linear Prediction: a Post-Processing Framework. Niknejad, M.; and Figueiredo, M. A. T. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2230-2234, Sep. 2018.
@InProceedings{8553220,
  author = {M. Niknejad and M. A. T. Figueiredo},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Poisson Image Denoising Using Best Linear Prediction: a Post-Processing Framework},
  year = {2018},
  pages = {2230-2234},
  abstract = {In this paper, we address the problem of denoising images degraded by Poisson noise. We propose a new patch-based approach based on best linear prediction to estimate the underlying clean image. A simplified prediction formula is derived for Poisson observations, which requires the covariance matrix of the underlying clean patch. We use the assumption that similar patches in a neighborhood share the same covariance matrix and we use off-the-shelf Poisson denoising methods in order to obtain an initial estimate of these covariance matrices. Our method can be seen as a post-processing step for other Poisson denoising methods and the results show that it improves upon them by relevant margins.},
  keywords = {covariance matrices;image denoising;stochastic processes;off-the-shelf Poisson denoising methods;underlying clean patch;covariance matrix;Poisson observations;simplified prediction formula;underlying clean image;patch-based approach;Poisson noise;post-processing framework;linear prediction;Poisson image denoising;covariance matrices;Noise reduction;Covariance matrices;Noise measurement;Signal processing algorithms;Europe;Additives;Signal processing},
  doi = {10.23919/EUSIPCO.2018.8553220},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436781.pdf},
}
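The entry above estimates each clean patch by best linear prediction under Poisson noise, with the patch covariance shared among similar patches. The sketch below illustrates the predictor in an oracle setting where the prior mean and covariance are known exactly (the paper instead estimates them from an initial off-the-shelf denoiser, and its simplified formula may differ); the Poisson noise covariance is approximated by diag(mu), since a Poisson variate's variance equals its mean. All sizes and statistics are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Known second-order statistics of the clean 8-pixel "patch"
dim, n = 8, 5000
mu = np.full(dim, 20.0)
A = rng.normal(size=(dim, dim))
C = 4.0 * A @ A.T / dim                   # ground-truth patch covariance

# Clean patches (clipped so Poisson rates stay positive) and noisy observations
X = np.clip(rng.multivariate_normal(mu, C, size=n), 0.1, None)
Y = rng.poisson(X).astype(float)

# Best linear predictor: x_hat = mu + C (C + diag(mu))^{-1} (y - mu),
# with diag(mu) standing in for the Poisson noise covariance
G = C @ np.linalg.inv(C + np.diag(mu))
Xhat = mu + (Y - mu) @ G.T

mse_noisy = ((Y - X) ** 2).mean()
mse_blp = ((Xhat - X) ** 2).mean()
print(f"MSE noisy: {mse_noisy:.2f}, MSE after BLP: {mse_blp:.2f}")
```

When the noise variance (here ~20 per pixel) dominates the signal variance, the gain matrix G shrinks the observation strongly towards the prior mean, which is where the MSE reduction comes from.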
Optimized Small Cell Range Expansion in Mobile Communication Networks using Multi-Class Support Vector Machines. Bahlke, F.; and Pesavento, M. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 430-434, Sep. 2018.
@InProceedings{8553221,
  author = {F. Bahlke and M. Pesavento},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Optimized Small Cell Range Expansion in Mobile Communication Networks using Multi-Class Support Vector Machines},
  year = {2018},
  pages = {430-434},
  abstract = {Heterogeneous cellular architectures are a promising technology direction for upcoming generations of wireless communication networks. Increasing performance requirements are fulfilled by utilizing a dense deployment of low-power small cells in addition to existing macro cells. In such dense cellular networks it is critical to prevent performance losses from increasing interferences and uneconomic operating costs caused by high power consumptions. These changes in the network architecture create the need for effective control mechanisms specifically designed for heterogeneous networks. Range expansion for small cells has been proposed and extensively researched to achieve load balancing between macro cells and small cells. In this work, we propose a decentralized approach for cell range expansion in small cell networks that in operation only requires very limited local interaction between neighboring cells. We use multiclass support vector machines as a classifier to select suitable parameters for each small cell. Experimental results show that the proposed decentralized approach achieves close to optimal load balancing performance.},
  keywords = {cellular radio;resource allocation;support vector machines;wireless channels;multiclass support vector machines;optimal load balancing performance;cell range expansion;mobile communication networks;heterogeneous cellular architectures;wireless communication networks;dense cellular networks;power consumptions;optimized small cell range expansion;low-power small cells;Computer architecture;Microprocessors;Resource management;Support vector machines;Wireless communication;Europe;Signal processing},
  doi = {10.23919/EUSIPCO.2018.8553221},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437850.pdf},
}
Parameters Estimation of Ultrasonics Echoes using the Cuckoo Search and Adaptive Cuckoo Search Algorithms. Chibane, F.; Benammar, A.; and Drai, R. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2415-2418, Sep. 2018.
@InProceedings{8553222,
  author = {F. Chibane and A. Benammar and R. Drai},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Parameters Estimation of Ultrasonics Echoes using the Cuckoo Search and Adaptive Cuckoo Search Algorithms},
  year = {2018},
  pages = {2415-2418},
  abstract = {In this study we present a novel approach to estimate ultrasonic echo pattern using the two algorithms: Cuckoo Search (CS) and Adaptive Cuckoo Search (ACS). We model ultrasonic backscattered echoes in terms of superimposed Gaussian echoes corrupted by noise. Each Gaussian echo in the model is a non linear function of a set of parameters: echo bandwidth, arrival time, center frequency, amplitude and phase. The estimation of parameters is formulated as a nonlinear optimisation problem. Simulations are carried out to assess the performance of the proposed algorithms. Finally the algorithms were applied on experimental data for thickness measurement. The CS algorithm converges to best solution with less time than ACS. However, ACS algorithm outperforms CS.},
  keywords = {acoustic noise;backscatter;echo;Gaussian noise;optimisation;parameter estimation;search problems;thickness measurement;ultrasonic materials testing;ultrasonic scattering;ultrasonic echo pattern;superimposed Gaussian echoes;nonlinear function;CS algorithm;ACS algorithm;Adaptive Cuckoo Search algorithms;ultrasonic backscattered echoes;parameter estimation;noise;echo bandwidth;arrival time;nonlinear optimisation problem;thickness measurement;Signal processing algorithms;Acoustics;Delamination;Parameter estimation;Birds;Optimization;cuckoo search algorithm;echo parameter estimation;ultrasonic signal;thickness measurement},
  doi = {10.23919/EUSIPCO.2018.8553222},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439314.pdf},
}
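The entry above fits a Gaussian echo model, a nonlinear function of bandwidth, arrival time, centre frequency, amplitude, and phase, by cuckoo search. The sketch below pairs that model with a heavily simplified cuckoo-search loop (Lévy-flight perturbations around the best nest plus random abandonment); the nest count, step sizes, and bounds are invented and would need tuning for real data, and the paper's adaptive variant is not reproduced.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def echo(t, th):
    # Gaussian echo model: bandwidth a, arrival time tau, centre frequency f,
    # amplitude b, phase p
    a, tau, f, b, p = th
    return b * np.exp(-a * (t - tau) ** 2) * np.cos(2 * np.pi * f * (t - tau) + p)

t = np.linspace(0.0, 1.0, 400)
true_th = np.array([50.0, 0.45, 12.0, 1.0, 0.3])
y = echo(t, true_th) + 0.05 * rng.normal(size=t.size)

lo = np.array([10.0, 0.0, 8.0, 0.2, -np.pi])     # search bounds per parameter
hi = np.array([200.0, 1.0, 16.0, 2.0, np.pi])

def cost(th):
    return ((y - echo(t, th)) ** 2).sum()

def levy(size, rng, beta=1.5):
    # Mantegna's algorithm for Levy-flight step lengths
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    u = rng.normal(0.0, (num / den) ** (1 / beta), size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

n_nests, iters, pa = 15, 300, 0.25
nests = rng.uniform(lo, hi, size=(n_nests, 5))
fit = np.array([cost(n) for n in nests])
init_best = fit.min()
best = nests[fit.argmin()].copy()
for _ in range(iters):
    for i in range(n_nests):
        # Levy flight biased towards the current best nest; greedy acceptance
        cand = np.clip(nests[i] + 0.01 * levy(5, rng) * (nests[i] - best), lo, hi)
        if (c := cost(cand)) < fit[i]:
            nests[i], fit[i] = cand, c
    for i in range(n_nests):
        # abandon nests with probability pa and try a random local move
        if rng.random() < pa:
            cand = np.clip(nests[i] + rng.uniform(-0.1, 0.1, 5) * (hi - lo), lo, hi)
            if (c := cost(cand)) < fit[i]:
                nests[i], fit[i] = cand, c
    best = nests[fit.argmin()].copy()

print(f"estimated arrival time: {best[1]:.3f} (true {true_th[1]})")
```

Because acceptance is greedy, the best cost is non-increasing; superimposed echoes would simply concatenate several parameter blocks into one search vector.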
Superpixel construction for hyperspectral unmixing. Li, Z.; Chen, J.; and Rahardja, S. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 647-651, Sep. 2018.
@InProceedings{8553223,
  author = {Z. Li and J. Chen and S. Rahardja},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Superpixel construction for hyperspectral unmixing},
  year = {2018},
  pages = {647-651},
  abstract = {Spectral unmixing aims to determine the component materials and their associated abundances from mixed pixels in a hyperspectral image. Instead of performing unmixing independently on each pixel, investigating spatial and spectral correlations among pixels can be beneficial to enhance the unmixing performance. However linking pixels across an entire image for such a purpose can be computationally cumbersome and physically unreasonable. In order to address this issue, we propose to construct superpixels for hyperspectral data unmixing. Using an SLIC-based (Simple Linear Iterative Clustering) superpixel constructing process, adjacent pixels are clustered into several blocks with similar spectral signatures. After this preprocessing, unmixing is then performed with a graph-based total variation regularization to benefit from the heterogeneity within each superpixel. Experimental results on synthetic data and real hyperspectral data illustrate advantages of the proposed scheme.},
  keywords = {graph theory;hyperspectral imaging;image segmentation;iterative methods;spatial correlations;spectral correlations;hyperspectral data unmixing;adjacent pixels;graph-based total variation regularization;superpixel construction;hyperspectral unmixing;simple linear iterative clustering;spectral signatures;SLIC-based superpixel constructing process;Hyperspectral imaging;Signal processing algorithms;Estimation;Europe;Signal processing;Correlation;Hyperspectral images;spectral unmixing;super-pixel analysis;graph regularization},
  doi = {10.23919/EUSIPCO.2018.8553223},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438996.pdf},
}
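The entry above clusters adjacent pixels into superpixels with an SLIC-style (Simple Linear Iterative Clustering) procedure before unmixing. The sketch below implements only that clustering step, using a combined spectral + spatial distance on a synthetic two-material image; the compactness weight m, grid size, and data are invented, and the paper's graph-regularised unmixing stage is not included.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 20x20 "hyperspectral" image, 5 bands, two spectrally distinct halves
H, W, B = 20, 20, 5
img = np.empty((H, W, B))
img[:, : W // 2] = rng.normal(0.2, 0.02, size=(H, W // 2, B))
img[:, W // 2 :] = rng.normal(0.8, 0.02, size=(H, W // 2, B))

def slic(img, k=16, m=0.1, iters=10):
    # SLIC-style clustering: k-means with a combined spectral + spatial distance,
    # seeded on a regular grid
    H, W, _ = img.shape
    step = int(np.sqrt(H * W / k))
    ys, xs = np.meshgrid(np.arange(step // 2, H, step),
                         np.arange(step // 2, W, step), indexing="ij")
    cy = ys.ravel().astype(float)                   # centre rows
    cx = xs.ravel().astype(float)                   # centre columns
    cf = img[ys.ravel(), xs.ravel()].astype(float)  # centre spectra
    yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    for _ in range(iters):
        d_spec = ((img[None] - cf[:, None, None]) ** 2).sum(-1)
        d_spat = ((yy[None] - cy[:, None, None]) ** 2 +
                  (xx[None] - cx[:, None, None]) ** 2) / step ** 2
        label = (d_spec + m * d_spat).argmin(0)     # assign to nearest centre
        for c in range(cy.size):                    # recompute centres
            mask = label == c
            if mask.any():
                cy[c], cx[c] = yy[mask].mean(), xx[mask].mean()
                cf[c] = img[mask].mean(0)
    return label

label = slic(img)
print(f"{len(np.unique(label))} superpixels")
```

With the spectral term dominating, no superpixel straddles the sharp material boundary, which is exactly the homogeneity the downstream unmixing relies on.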
Modelling Data with both Sparsity and a Gaussian Random Field: Application to Dark Matter Mass Mapping in Cosmology. Themelis, K. E.; Lanusse, F.; Jeffrey, N.; Peel, A.; Starck, J.; and Abdalla, F. B. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 376-379, Sep. 2018.
@InProceedings{8553224,
  author = {K. E. Themelis and F. Lanusse and N. Jeffrey and A. Peel and J. Starck and F. B. Abdalla},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Modelling Data with both Sparsity and a Gaussian Random Field: Application to Dark Matter Mass Mapping in Cosmology},
  year = {2018},
  pages = {376-379},
  abstract = {In this paper we present a novel method for dark matter mass mapping reconstruction from weak gravitational lensing measurements. The crux of the proposed method lies in a new modelling of the matter density field in the Universe as a mixture of two components: a) a sparsity-based component that captures the non-Gaussian structure of the field, such as peaks or halos at different spatial scales; and b) a Gaussian random field, which is known to well represent the linear component of the field. This new model represents the distribution of matter in the universe much better than previously proposed models. We have developed a new algorithm that also takes into account the experimental problems we meet in practice, such as a non-diagonal covariance matrix of the noise or missing data. Experimental results on simulated data show that the proposed method exhibits improved estimation accuracy compared to state-of-the-art methods.},
  keywords = {cosmology;covariance matrices;dark matter;Gaussian processes;gravitational lenses;modelling data;Gaussian random field;dark matter mass mapping reconstruction;weak gravitational lensing measurements;matter density field;universe;sparsity-based component;nonGaussian structure;linear component;nondiagonal covariance matrix;spatial scales;Convergence;Signal processing algorithms;Covariance matrices;Shape;Optimization;Noise measurement;Data models},
  doi = {10.23919/EUSIPCO.2018.8553224},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437969.pdf},
}
\n
\n\n\n
\n In this paper we present a novel method for dark matter mass mapping reconstruction from weak gravitational lensing measurements. The crux of the proposed method lies in a new modelling of the matter density field in the Universe as a mixture of two components: a) a sparsity-based component that captures the non-Gaussian structure of the field, such as peaks or halos at different spatial scales; and b) a Gaussian random field, which is known to well represent the linear component of the field. This new model represents the distribution of matter in the universe much better than previously proposed models. We have developed a new algorithm that also takes into account the experimental problems we meet in practice, such as a non-diagonal covariance matrix of the noise or missing data. Experimental results on simulated data show that the proposed method exhibits improved estimation accuracy compared to state-of-the-art methods.\n
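The two-component model described in this abstract (a sparse, non-Gaussian part capturing peaks, plus a Gaussian random field for the linear part) can be illustrated with a minimal 1-D synthetic sketch. The sizes, kernel width, and amplitudes below are arbitrary illustrative assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

# a) sparse component: a few large "peaks" at random positions
sparse = np.zeros(n)
idx = rng.choice(n, size=8, replace=False)
sparse[idx] = rng.normal(0.0, 5.0, size=8)

# b) Gaussian random field proxy: white noise smoothed by a Gaussian kernel
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
kernel /= kernel.sum()
grf = np.convolve(rng.standard_normal(n), kernel, mode="same")

# the modelled field is the sum of the two components
field = sparse + grf
```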
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Fusion of Deep Convolutional Generative Adversarial Networks and Sequence to Sequence Autoencoders for Acoustic Scene Classification.\n \n \n \n \n\n\n \n Amiriparian, S.; Freitag, M.; Cummins, N.; Gerczuk, M.; Pugachevskiy, S.; and Schuller, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 977-981, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553225,\n  author = {S. Amiriparian and M. Freitag and N. Cummins and M. Gerczuk and S. Pugachevskiy and B. Schuller},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Fusion of Deep Convolutional Generative Adversarial Networks and Sequence to Sequence Autoencoders for Acoustic Scene Classification},\n  year = {2018},\n  pages = {977-981},\n  abstract = {Unsupervised representation learning shows high promise for generating robust features for acoustic scene analysis. In this regard, we propose and investigate a novel combination of features learnt using both a deep convolutional generative adversarial network (DCGAN) and a recurrent sequence to sequence autoencoder (S2SAE). Each of the representation learning algorithms is trained individually on spectral features extracted from audio instances. The learnt representations are: (i) the activations of the discriminator in case of the DCGAN, and (ii) the activations of a fully connected layer between the decoder and encoder units in case of the S2SAE. We then train two multilayer perceptron neural networks on the DCGAN and S2SAE feature vectors to predict the class labels. The individual predicted labels are combined in a weighted decision-level fusion to achieve the final prediction. The system is evaluated on the development partition of the acoustic scene classification data set of the IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE 2017). In comparison to the baseline, the accuracy is increased from 74.8 % to 86.4 % using only the DCGAN, to 88.5 % on the development set using only the S2SAE, and to 91.1 % after fusion of the individual predictions.},\n  keywords = {feature extraction;image classification;image representation;learning (artificial intelligence);multilayer perceptrons;pattern classification;signal classification;S2SAE feature vectors;weighted decision-level fusion;acoustic scene classification data;DCGAN;deep convolutional generative adversarial network;sequence autoencoder;unsupervised representation learning;robust features;acoustic scene analysis;recurrent sequence;representation learning algorithms;spectral features;learnt representations;multilayer perceptron neural networks;Acoustics;Generators;Task analysis;Convolution;Training;Spectrogram;Feature extraction;unsupervised feature learning;generative adversarial networks;sequence to sequence autoencoders;acoustic scene classification},\n  doi = {10.23919/EUSIPCO.2018.8553225},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438044.pdf},\n}\n\n
\n
\n\n\n
\n Unsupervised representation learning shows high promise for generating robust features for acoustic scene analysis. In this regard, we propose and investigate a novel combination of features learnt using both a deep convolutional generative adversarial network (DCGAN) and a recurrent sequence to sequence autoencoder (S2SAE). Each of the representation learning algorithms is trained individually on spectral features extracted from audio instances. The learnt representations are: (i) the activations of the discriminator in case of the DCGAN, and (ii) the activations of a fully connected layer between the decoder and encoder units in case of the S2SAE. We then train two multilayer perceptron neural networks on the DCGAN and S2SAE feature vectors to predict the class labels. The individual predicted labels are combined in a weighted decision-level fusion to achieve the final prediction. The system is evaluated on the development partition of the acoustic scene classification data set of the IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE 2017). In comparison to the baseline, the accuracy is increased from 74.8 % to 86.4 % using only the DCGAN, to 88.5 % on the development set using only the S2SAE, and to 91.1 % after fusion of the individual predictions.\n
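The weighted decision-level fusion step described above can be sketched generically: the two classifiers' class-probability vectors are combined with a weight and the fused maximum is taken. The function name `fuse` and the weight value are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def fuse(p_a, p_b, w=0.5):
    """Weighted decision-level fusion of two classifiers'
    class-probability vectors; w weights the first classifier."""
    p = w * np.asarray(p_a, dtype=float) + (1.0 - w) * np.asarray(p_b, dtype=float)
    return int(np.argmax(p))  # index of the fused winning class
```

For example, `fuse([0.6, 0.4], [0.2, 0.8], w=0.3)` picks class 1, since the fused scores are 0.32 and 0.68.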
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fundamental Limits for Joint Relative Position and Orientation Estimation with Generic Antennas.\n \n \n \n \n\n\n \n Pohlmann, R.; Zhang, S.; Dammann, A.; and Hoeher, P. A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 692-696, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FundamentalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553226,\n  author = {R. Pohlmann and S. Zhang and A. Dammann and P. A. Hoeher},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Fundamental Limits for Joint Relative Position and Orientation Estimation with Generic Antennas},\n  year = {2018},\n  pages = {692-696},\n  abstract = {Automated multi-agent robotic systems are a promising technology for extra-terrestrial exploration. Autonomous control of such systems requires position as well as orientation awareness. Having a multi-port antenna system like an antenna array or a multi-mode antenna (MMA) installed on each agent, position and orientation can be estimated when the agents cooperatively communicate via radio signals. The aim of this paper is to derive the fundamental limits for relative, i.e. anchor-free, joint position and orientation estimation. Using wavefield modeling and manifold separation, the inclusion of calibration data from real multi-port antenna systems is possible. The Cramér-Rao bound (CRB) is derived directly on the received signal, thus representing a fundamental limit. We then use the derived bound to analyze the position and orientation estimation capabilities of a multi-agent system employing MMAs.},\n  keywords = {antenna arrays;calibration;mobile robots;multi-agent systems;multi-robot systems;position control;fundamental limit;joint relative position;generic antennas;automated multiagent robotic systems;extra-terrestrial exploration;orientation awareness;antenna array;multimode antenna;Cramér-Rao bound;wavefield modeling;manifold separation;calibration data;real multiport antenna system;radio signals;Estimation;Antenna arrays;OFDM;Europe;Signal processing;Global navigation satellite system},\n  doi = {10.23919/EUSIPCO.2018.8553226},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436201.pdf},\n}\n\n
\n
\n\n\n
\n Automated multi-agent robotic systems are a promising technology for extra-terrestrial exploration. Autonomous control of such systems requires position as well as orientation awareness. Having a multi-port antenna system like an antenna array or a multi-mode antenna (MMA) installed on each agent, position and orientation can be estimated when the agents cooperatively communicate via radio signals. The aim of this paper is to derive the fundamental limits for relative, i.e. anchor-free, joint position and orientation estimation. Using wavefield modeling and manifold separation, the inclusion of calibration data from real multi-port antenna systems is possible. The Cramér-Rao bound (CRB) is derived directly on the received signal, thus representing a fundamental limit. We then use the derived bound to analyze the position and orientation estimation capabilities of a multi-agent system employing MMAs.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Uncertainty Quantification in Imaging: When Convex Optimization Meets Bayesian Analysis.\n \n \n \n\n\n \n Repetti, A.; Pereyra, M.; and Wiaux, Y.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2668-2672, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553227,\n  author = {A. Repetti and M. Pereyra and Y. Wiaux},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Uncertainty Quantification in Imaging: When Convex Optimization Meets Bayesian Analysis},\n  year = {2018},\n  pages = {2668-2672},\n  abstract = {We propose to perform Bayesian uncertainty quantification via convex optimization tools (BUQO), in the context of high dimensional inverse problems. We quantify the uncertainty associated with particular structures appearing in the maximum a posteriori estimate, obtained from a log-concave Bayesian model. A hypothesis test is defined, where the null hypothesis represents the non-existence of the structure of interest in the true image. To determine if this null hypothesis is rejected, we use the data and prior knowledge. Computing such a test in the context of an imaging problem is often intractable due to the high dimensionality involved. In this work, we propose to leverage probability concentration phenomena and the underlying convex geometry to formulate the Bayesian hypothesis test as a convex minimization problem. This problem is subsequently solved using a proximal primal-dual algorithm. The proposed method is applied to astronomical radio-interferometric imaging.},\n  keywords = {astronomical image processing;Bayes methods;convex programming;geometry;inverse problems;minimisation;probability;posteriori estimate;log-concave Bayesian model;null hypothesis;imaging problem;high dimensionality;leverage probability concentration phenomena;Bayesian hypothesis test;convex minimization problem;astronomical radio-interferometric imaging;Bayesian analysis;Bayesian uncertainty quantification;convex optimization tools;convex geometry;high dimensional inverse problems;BUQO;Uncertainty;Bayes methods;Imaging;Signal processing algorithms;Inverse problems;Minimization;Europe;Bayesian uncertainty quantification;hypothesis testing;astronomical imaging;inverse problem;proximal primal-dual algorithm},\n  doi = {10.23919/EUSIPCO.2018.8553227},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n We propose to perform Bayesian uncertainty quantification via convex optimization tools (BUQO), in the context of high dimensional inverse problems. We quantify the uncertainty associated with particular structures appearing in the maximum a posteriori estimate, obtained from a log-concave Bayesian model. A hypothesis test is defined, where the null hypothesis represents the non-existence of the structure of interest in the true image. To determine if this null hypothesis is rejected, we use the data and prior knowledge. Computing such a test in the context of an imaging problem is often intractable due to the high dimensionality involved. In this work, we propose to leverage probability concentration phenomena and the underlying convex geometry to formulate the Bayesian hypothesis test as a convex minimization problem. This problem is subsequently solved using a proximal primal-dual algorithm. The proposed method is applied to astronomical radio-interferometric imaging.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Octonion Spectrum of 3D Octonion-Valued Signals - Properties and Possible Applications.\n \n \n \n\n\n \n Błaszczyk, Ł.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 509-513, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553228,\n  author = {Ł. Błaszczyk},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Octonion Spectrum of 3D Octonion-Valued Signals - Properties and Possible Applications},\n  year = {2018},\n  pages = {509-513},\n  abstract = {The aim of this paper is to investigate the properties of octonion Fourier transform (OFT) of octonion-valued functions of three variables and its potential applications in signal processing. This work has been inspired by the original papers by Hahn and Snopek concerning octonion Fourier transform definition and its applications in the analysis of the hypercomplex analytic signals. First, the generalization of the OFT definition to the octonion-valued functions is provided, and then the octonion analogues of classical Fourier transform properties are derived, e.g. argument scaling, modulation and shift theorem. Finally, the results are illustrated with some examples that indicate possible applications.},\n  keywords = {Fourier transforms;signal processing;octonion spectrum;3D octonion;octonion Fourier transform definition;argument scaling;modulation;shift theorem;classical Fourier transform properties;octonion analogues;OFT definition;hypercomplex analytic signals;signal processing;octonion-valued functions;Fourier transforms;Algebra;Quaternions;Modulation;Europe;Three-dimensional displays},\n  doi = {10.23919/EUSIPCO.2018.8553228},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The aim of this paper is to investigate the properties of octonion Fourier transform (OFT) of octonion-valued functions of three variables and its potential applications in signal processing. This work has been inspired by the original papers by Hahn and Snopek concerning octonion Fourier transform definition and its applications in the analysis of the hypercomplex analytic signals. First, the generalization of the OFT definition to the octonion-valued functions is provided, and then the octonion analogues of classical Fourier transform properties are derived, e.g. argument scaling, modulation and shift theorem. Finally, the results are illustrated with some examples that indicate possible applications.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Blind Identification of Invertible Graph Filters with Multiple Sparse Inputs.\n \n \n \n \n\n\n \n Ye, C.; Shafipour, R.; and Mateos, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 121-125, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BlindPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553229,\n  author = {C. Ye and R. Shafipour and G. Mateos},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Blind Identification of Invertible Graph Filters with Multiple Sparse Inputs},\n  year = {2018},\n  pages = {121-125},\n  abstract = {This paper deals with the problem of blind identification of a graph filter and its sparse input signal, thus broadening the scope of classical blind deconvolution of temporal and spatial signals to irregular graph domains. While the observations are bilinear functions of the unknowns, a mild requirement on invertibility of the filter enables an efficient convex formulation, without relying on matrix lifting that can hinder applicability to large graphs. On top of scaling, it is argued that (non-cyclic) permutation ambiguities may arise with some particular graphs. Deterministic sufficient conditions under which the proposed convex relaxation can exactly recover the unknowns are stated, along with those guaranteeing identifiability under the Bernoulli-Gaussian model for the inputs. Numerical tests with synthetic and real-world networks illustrate the merits of the proposed algorithm, as well as the benefits of leveraging multiple signals to aid the (blind) localization of sources of diffusion.},\n  keywords = {blind source separation;convex programming;deconvolution;filtering theory;Gaussian processes;graph theory;matrix algebra;multiple sparse inputs;blind identification;graph filter;sparse input signal;temporal signals;spatial signals;bilinear functions;convex relaxation;invertible graph filters;blind deconvolution;convex formulation;Bernoulli-Gaussian model;Signal processing;Deconvolution;Mathematical model;Signal processing algorithms;Sparse matrices;Minimization;Graph signal processing;network diffusion;bilinear equations;blind deconvolution;convex optimization},\n  doi = {10.23919/EUSIPCO.2018.8553229},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439437.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the problem of blind identification of a graph filter and its sparse input signal, thus broadening the scope of classical blind deconvolution of temporal and spatial signals to irregular graph domains. While the observations are bilinear functions of the unknowns, a mild requirement on invertibility of the filter enables an efficient convex formulation, without relying on matrix lifting that can hinder applicability to large graphs. On top of scaling, it is argued that (non-cyclic) permutation ambiguities may arise with some particular graphs. Deterministic sufficient conditions under which the proposed convex relaxation can exactly recover the unknowns are stated, along with those guaranteeing identifiability under the Bernoulli-Gaussian model for the inputs. Numerical tests with synthetic and real-world networks illustrate the merits of the proposed algorithm, as well as the benefits of leveraging multiple signals to aid the (blind) localization of sources of diffusion.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-Shot Single Sensor Light Field Camera Using a Color Coded Mask.\n \n \n \n \n\n\n \n Miandji, E.; Unger, J.; and Guillemot, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 226-230, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-ShotPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553230,\n  author = {E. Miandji and J. Unger and C. Guillemot},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-Shot Single Sensor Light Field Camera Using a Color Coded Mask},\n  year = {2018},\n  pages = {226-230},\n  abstract = {We present a compressed sensing framework for reconstructing the full light field of a scene captured using a single-sensor consumer camera. To achieve this, we use a color coded mask in front of the camera sensor. To further enhance the reconstruction quality, we propose to utilize multiple shots by moving the mask or the sensor randomly. The compressed sensing framework relies on a training based dictionary over a light field data set. Numerical simulations show significant improvements in reconstruction quality over a similar coded aperture system for light field capture.},\n  keywords = {cameras;compressed sensing;image capture;image coding;image colour analysis;image reconstruction;image sensors;multishot single sensor light field camera;color coded mask;compressed sensing framework;single-sensor consumer camera;light field capture;light field dataset;coded aperture system;quality reconstruction;Apertures;Cameras;Sensors;Image color analysis;Dictionaries;Image reconstruction;Reconstruction algorithms;light field camera;coded aperture},\n  doi = {10.23919/EUSIPCO.2018.8553230},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439722.pdf},\n}\n\n
\n
\n\n\n
\n We present a compressed sensing framework for reconstructing the full light field of a scene captured using a single-sensor consumer camera. To achieve this, we use a color coded mask in front of the camera sensor. To further enhance the reconstruction quality, we propose to utilize multiple shots by moving the mask or the sensor randomly. The compressed sensing framework relies on a training based dictionary over a light field data set. Numerical simulations show significant improvements in reconstruction quality over a similar coded aperture system for light field capture.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Ultrasonic Based Proximity Detection for Handsets.\n \n \n \n \n\n\n \n Parada, P. P.; and Saeidi, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2419-2423, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"UltrasonicPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553231,\n  author = {P. P. Parada and R. Saeidi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Ultrasonic Based Proximity Detection for Handsets},\n  year = {2018},\n  pages = {2419-2423},\n  abstract = {A novel approach for proximity detection on mobile handsets which does not require any additional transducers is presented. The method is based on transmitting a chirp and processing the received signal by applying Least Mean Square (LMS), where the desired signal is the transmitted chirp. The envelope of three signals (estimated filter taps, estimated output and error signal) are characterized with a set of 12 features which are used to classify a given frame into one of two classes: proximity active or proximity inactive. The classifier employed is based on Support Vector Machine (SVM) with linear kernel. The results show that over 13 minutes of recorded data, the accuracy achieved is 95.28% using 10-fold cross-validation. Furthermore, the feature importance analysis performed on the database indicates that the most relevant feature is based on the estimated filter taps.},\n  keywords = {filtering theory;least mean squares methods;mobile handsets;signal classification;support vector machines;ultrasonic based proximity detection;mobile handsets;Least Mean Square;transmitted chirp;estimated filter taps;error signal;classifier;feature importance analysis;support vector machine;estimated output signal;Ultrasonic imaging;Feature extraction;Chirp;Microphones;Telephone sets;Acoustics;Support vector machines},\n  doi = {10.23919/EUSIPCO.2018.8553231},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570430518.pdf},\n}\n\n
\n
\n\n\n
\n A novel approach for proximity detection on mobile handsets which does not require any additional transducers is presented. The method is based on transmitting a chirp and processing the received signal by applying Least Mean Square (LMS), where the desired signal is the transmitted chirp. The envelope of three signals (estimated filter taps, estimated output and error signal) are characterized with a set of 12 features which are used to classify a given frame into one of two classes: proximity active or proximity inactive. The classifier employed is based on Support Vector Machine (SVM) with linear kernel. The results show that over 13 minutes of recorded data, the accuracy achieved is 95.28% using 10-fold cross-validation. Furthermore, the feature importance analysis performed on the database indicates that the most relevant feature is based on the estimated filter taps.\n
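The LMS step at the core of this approach is a standard adaptive filter. The sketch below shows a generic LMS filter verified on a noiseless system-identification toy problem; the tap count, step size, and test setup are illustrative assumptions, not the paper's configuration (where the desired signal is the transmitted chirp and the input is the received signal):

```python
import numpy as np

def lms(x, d, n_taps=8, mu=0.1):
    """Least Mean Square adaptive filter.
    x: input signal, d: desired signal.
    Returns final taps w, output y, and error signal e."""
    w = np.zeros(n_taps)
    x_pad = np.concatenate([np.zeros(n_taps - 1), x])  # zero prehistory
    y = np.zeros(len(d))
    e = np.zeros(len(d))
    for n in range(len(d)):
        u = x_pad[n:n + n_taps][::-1]   # regressor, most recent sample first
        y[n] = w @ u                    # filter output
        e[n] = d[n] - y[n]              # error vs. desired signal
        w += mu * e[n] * u              # stochastic-gradient tap update
    return w, y, e

# noiseless system identification: taps should converge to the true FIR response
rng = np.random.default_rng(1)
h = np.array([0.5, -0.3, 0.1])
x = rng.standard_normal(2000)
d = np.convolve(x, h)[:len(x)]
w, y, e = lms(x, d, n_taps=3, mu=0.1)
```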
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Putting the PRNU Model in Reverse Gear: Findings with Synthetic Signals.\n \n \n \n \n\n\n \n Masciopinto, M.; and Perez-Gonzalez, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1352-1356, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PuttingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553232,\n  author = {M. Masciopinto and F. Perez-Gonzalez},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Putting the PRNU Model in Reverse Gear: Findings with Synthetic Signals},\n  year = {2018},\n  pages = {1352-1356},\n  abstract = {The prevalent model for the photoresponse nonuniformity (PRNU) of digital cameras is (negatively) tested. The PRNU serves as a fingerprint of the underlying device that has proven its usefulness in image and video forensics. When such a model is applied to derive analytical expressions characterizing the PRNU detection statistic, the predictions do not conform with reality. However, those expressions are thoroughly validated here through extensive experimentation using synthetic signals with parameters taken from real images. As a consequence, the current multiplicative PRNU model must be revised. This opens new avenues for performance improvements in both the extraction and detection of device fingerprints.},\n  keywords = {cameras;feature extraction;fingerprint identification;gears;image forensics;image sensors;statistical analysis;video signal processing;reverse gear;synthetic signals;prevalent model;photoresponse nonuniformity;digital cameras;fingerprint;video forensics;PRNU detection statistic;extensive experimentation;current multiplicative PRNU model;device fingerprints;image forensics;Correlation;Cameras;Noise reduction;Mathematical model;Image sensors;Detectors},\n  doi = {10.23919/EUSIPCO.2018.8553232},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436100.pdf},\n}\n\n
\n
\n\n\n
\n The prevalent model for the photoresponse nonuniformity (PRNU) of digital cameras is (negatively) tested. The PRNU serves as a fingerprint of the underlying device that has proven its usefulness in image and video forensics. When such a model is applied to derive analytical expressions characterizing the PRNU detection statistic, the predictions do not conform with reality. However, those expressions are thoroughly validated here through extensive experimentation using synthetic signals with parameters taken from real images. As a consequence, the current multiplicative PRNU model must be revised. This opens new avenues for performance improvements in both the extraction and detection of device fingerprints.\n
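For context, the PRNU detection statistic under the standard multiplicative model y = x(1 + k) + noise is commonly a normalized correlation between a noise residual and the fingerprint modulated by the image content. The simulation below is a generic textbook sketch with made-up sizes and noise levels, not the paper's experiments or analytical expressions:

```python
import numpy as np

def prnu_stat(residual, k, img):
    """Normalized correlation of a noise residual with the
    expected fingerprint term k * img (multiplicative PRNU model)."""
    s = k * img
    return float((residual * s).sum()
                 / np.sqrt((residual ** 2).sum() * (s ** 2).sum()))

rng = np.random.default_rng(0)
n = 10_000                              # number of pixels (flattened)
k = 0.01 * rng.standard_normal(n)       # synthetic PRNU fingerprint
img = rng.uniform(50, 200, n)           # synthetic image content
noise = 2.0 * rng.standard_normal(n)    # residual modeling noise

stat_h1 = prnu_stat(img * k + noise, k, img)   # fingerprint present
stat_h0 = prnu_stat(noise, k, img)             # fingerprint absent
```

Under the fingerprint-present hypothesis the statistic is clearly bounded away from zero, while under the absent hypothesis it hovers near zero; a threshold between the two decides the detection.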
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Physical Layer Communications System Design Over-the-Air Using Adversarial Networks.\n \n \n \n \n\n\n \n O'Shea, T. J.; Roy, T.; West, N.; and Hilburn, B. C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 529-532, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PhysicalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553233,\n  author = {T. J. O'Shea and T. Roy and N. West and B. C. Hilburn},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Physical Layer Communications System Design Over-the-Air Using Adversarial Networks},\n  year = {2018},\n  pages = {529-532},\n  abstract = {This paper presents a novel method for synthesizing new physical layer modulation and coding schemes for communications systems using a learning-based approach which does not require an analytic model of the impairments in the channel. It extends prior work published on the channel autoencoder to consider the case where the stochastic channel response is not known or cannot be easily modeled in a closed form analytic expression. By adopting an adversarial approach for learning a channel response approximation and information encoding, we jointly learn a solution to both tasks applicable over a wide range of channel environments. We describe the operation of the proposed adversarial system, share results for its training and validation over-the-air, and discuss implications and future work in the area.},\n  keywords = {encoding;learning (artificial intelligence);queueing theory;radio networks;stochastic processes;telecommunication computing;telecommunication network topology;telecommunication security;wireless channels;validation over-the-air;training;adversarial system;channel environments;information encoding;channel response approximation;closed form analytic expression;stochastic channel response;analytic model;learning-based approach;communications systems;coding schemes;physical layer modulation;adversarial networks;physical layer communications system design over-the-air;Training;Communication systems;Encoding;Decoding;Receivers;Gallium nitride;Modulation;machine learning;deep learning;neural networks;autoencoders;generative adversarial networks;modulation;software radio},\n  doi = {10.23919/EUSIPCO.2018.8553233},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439360.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a novel method for synthesizing new physical layer modulation and coding schemes for communications systems using a learning-based approach which does not require an analytic model of the impairments in the channel. It extends prior work published on the channel autoencoder to consider the case where the stochastic channel response is not known or cannot be easily modeled in a closed form analytic expression. By adopting an adversarial approach for learning a channel response approximation and information encoding, we jointly learn a solution to both tasks applicable over a wide range of channel environments. We describe the operation of the proposed adversarial system, share results for its training and validation over-the-air, and discuss implications and future work in the area.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Arbitrary Length Perfect Integer Sequences Using Geometric Series.\n \n \n \n \n\n\n \n Pei, S.; and Chang, K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1257-1261, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ArbitraryPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553234,\n  author = {S. Pei and K. Chang},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Arbitrary Length Perfect Integer Sequences Using Geometric Series},\n  year = {2018},\n  pages = {1257-1261},\n  abstract = {A novel method to construct perfect integer sequences based on geometric series is proposed. The method can be applied to arbitrary signal length. A closed form construction has been derived for a given ratio. Moreover, perfect Gaussian integer sequences can also be constructed by this method. The idea can be further generalized to obtain other perfect integer sequences from a given one by the Extended Euclidean algorithm. To the authors' knowledge, these sequences cannot be found by any previous work. Concrete examples are illustrated.},\n  keywords = {correlation methods;geometry;sequences;series (mathematics);extended Euclidean algorithm;perfect Gaussian integer sequences;closed form construction;arbitrary signal length;geometric series;arbitrary length perfect integer sequences;Discrete Fourier transforms;Correlation;Mathematical model;Europe;Signal processing;Indexes;Multiaccess communication;Discrete Fourier transform;geometric series;zero autocorrelation;perfect integer sequences},\n  doi = {10.23919/EUSIPCO.2018.8553234},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434251.pdf},\n}\n\n
\n
\n\n\n
\n A novel method to construct perfect integer sequences based on geometric series is proposed. The method can be applied to arbitrary signal length. A closed form construction has been derived for a given ratio. Moreover, perfect Gaussian integer sequences can also be constructed by this method. The idea can be further generalized to obtain other perfect integer sequences from a given one by the Extended Euclidean algorithm. To the authors' knowledge, these sequences cannot be found by any previous work. Concrete examples are illustrated.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n CycleGAN-VC: Non-parallel Voice Conversion Using Cycle-Consistent Adversarial Networks.\n \n \n \n \n\n\n \n Kaneko, T.; and Kameoka, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2100-2104, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CycleGAN-VC:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553236,\n  author = {T. Kaneko and H. Kameoka},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {CycleGAN-VC: Non-parallel Voice Conversion Using Cycle-Consistent Adversarial Networks},\n  year = {2018},\n  pages = {2100-2104},\n  abstract = {We propose a non-parallel voice-conversion (VC) method that can learn a mapping from source to target speech without relying on parallel data. The proposed method is particularly noteworthy in that it is general purpose and high quality and works without any extra data, modules, or alignment procedure. Our method, called CycleGAN-VC, uses a cycle-consistent adversarial network (CycleGAN) with gated convolutional neural networks (CNNs) and an identity-mapping loss. A CycleGAN learns forward and inverse mappings simultaneously using adversarial and cycle-consistency losses. This makes it possible to find an optimal pseudo pair from non-parallel data. Furthermore, the adversarial loss can bring the converted speech close to the target one on the basis of indistinguishability without explicit density estimation. This allows us to avoid over-smoothing caused by statistical averaging, which occurs in many conventional statistical model-based VC methods that represent data distribution explicitly. We configure a CycleGAN with gated CNNs and train it with an identity-mapping loss. This allows the mapping function to capture sequential and hierarchical structures while preserving linguistic information. We evaluated our method on a non-parallel VC task. An objective evaluation showed that the converted feature sequence was near natural in terms of global variance and modulation spectra, which are structural indicators highly correlated with subjective evaluation. A subjective evaluation showed that the quality of the converted speech was comparable to that obtained with a Gaussian mixture model-based parallel VC method even though CycleGAN-VC is trained under disadvantageous conditions (non-parallel and half the amount of data).},\n  keywords = {Gaussian processes;learning (artificial intelligence);mixture models;neural nets;speech processing;statistical analysis;conventional statistical model-based VC methods;data distribution;gated CNNs;identity-mapping loss;mapping function;VC task;subjective evaluation;converted speech;Gaussian mixture model-based parallel VC method;voice conversion;cycle-consistent adversarial network;voice-conversion method;parallel data;gated convolutional neural networks;adversarial cycle-consistency losses;nonparallel data;adversarial loss;CycleGAN-VC method;Logic gates;Task analysis;Training;Generators;Data models;Linguistics;Modulation;voice conversion;non-parallel conversion;generative adversarial networks;CycleGAN;gated CNN},\n  doi = {10.23919/EUSIPCO.2018.8553236},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438014.pdf},\n}\n\n
\n
\n\n\n
\n We propose a non-parallel voice-conversion (VC) method that can learn a mapping from source to target speech without relying on parallel data. The proposed method is particularly noteworthy in that it is general purpose and high quality and works without any extra data, modules, or alignment procedure. Our method, called CycleGAN-VC, uses a cycle-consistent adversarial network (CycleGAN) with gated convolutional neural networks (CNNs) and an identity-mapping loss. A CycleGAN learns forward and inverse mappings simultaneously using adversarial and cycle-consistency losses. This makes it possible to find an optimal pseudo pair from non-parallel data. Furthermore, the adversarial loss can bring the converted speech close to the target one on the basis of indistinguishability without explicit density estimation. This allows us to avoid over-smoothing caused by statistical averaging, which occurs in many conventional statistical model-based VC methods that represent data distribution explicitly. We configure a CycleGAN with gated CNNs and train it with an identity-mapping loss. This allows the mapping function to capture sequential and hierarchical structures while preserving linguistic information. We evaluated our method on a non-parallel VC task. An objective evaluation showed that the converted feature sequence was near natural in terms of global variance and modulation spectra, which are structural indicators highly correlated with subjective evaluation. A subjective evaluation showed that the quality of the converted speech was comparable to that obtained with a Gaussian mixture model-based parallel VC method even though CycleGAN-VC is trained under disadvantageous conditions (non-parallel and half the amount of data).\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Computationally Efficient Image Super Resolution from Totally Aliased Low Resolution Images.\n \n \n \n \n\n\n \n Kumar, A. A.; Narendra, N.; Balamuralidhar, P.; and Chandra, M. G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2245-2249, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ComputationallyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553237,\n  author = {A. A. Kumar and N. Narendra and P. Balamuralidhar and M. G. Chandra},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Computationally Efficient Image Super Resolution from Totally Aliased Low Resolution Images},\n  year = {2018},\n  pages = {2245-2249},\n  abstract = {This paper considers the problem of super-resolution (SR) image reconstruction from a set of totally aliased low resolution (LR) images with different unknown sub-pixel offsets. By assuming the translational motion model, a linear compact representation between the LR image spectra and the SR image spectrum, based on multi-coset sampling, is provided. Based on this model, we formulate the joint estimation of the unknown shifts and the SR image spectrum as a dictionary learning problem, and an alternating minimization approach is employed to solve this joint estimation. Two different approaches for obtaining the SR image are described: one based on the estimated shifts and another based on the estimated SR spectrum. The significant advantage of the proposed approach is the smaller matrix sizes to be handled during the computation, typically on the order of the number of images and enhancement factors, which are completely independent of the actual dimensions of the LR and SR images, hence requiring significantly fewer resources than the current state of the art approaches. Brief simulation results are also provided to demonstrate the efficacy of this approach.},\n  keywords = {image enhancement;image reconstruction;image resolution;matrix algebra;minimisation;signal sampling;SR image spectrum;dictionary learning problem;super-resolution image reconstruction;translational motion model;LR image spectrums;low resolution images;pixel offsets;linear compact representation;minimization approach;matrix sizes;multicoset sampling;unknown shifts estimation;enhancement factors;Image reconstruction;Estimation;Image resolution;Image restoration;Signal resolution;Frequency-domain analysis;Signal processing algorithms},\n  doi = {10.23919/EUSIPCO.2018.8553237},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437724.pdf},\n}\n\n
\n
\n\n\n
\n This paper considers the problem of super-resolution (SR) image reconstruction from a set of totally aliased low resolution (LR) images with different unknown sub-pixel offsets. By assuming the translational motion model, a linear compact representation between the LR image spectra and the SR image spectrum, based on multi-coset sampling, is provided. Based on this model, we formulate the joint estimation of the unknown shifts and the SR image spectrum as a dictionary learning problem, and an alternating minimization approach is employed to solve this joint estimation. Two different approaches for obtaining the SR image are described: one based on the estimated shifts and another based on the estimated SR spectrum. The significant advantage of the proposed approach is the smaller matrix sizes to be handled during the computation, typically on the order of the number of images and enhancement factors, which are completely independent of the actual dimensions of the LR and SR images, hence requiring significantly fewer resources than the current state of the art approaches. Brief simulation results are also provided to demonstrate the efficacy of this approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Heuristics for Efficient Sparse Blind Source Separation.\n \n \n \n \n\n\n \n Kervazo, C.; Bobin, J.; and Chenot, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 489-493, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"HeuristicsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553238,\n  author = {C. Kervazo and J. Bobin and C. Chenot},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Heuristics for Efficient Sparse Blind Source Separation},\n  year = {2018},\n  pages = {489-493},\n  abstract = {Sparse Blind Source Separation (sparse BSS) is a key method to analyze multichannel data in fields ranging from medical imaging to astrophysics. However, since it relies on seeking the solution of a non-convex penalized matrix factorization problem, its performance largely depends on the optimization strategy. In this context, Proximal Alternating Linearized Minimization (PALM) has become a standard algorithm which, despite its theoretical grounding, generally provides poor practical separation results. In this work, we first investigate the origins of these limitations, which are shown to take their roots in the sensitivity to both the initialization and the regularization parameter choice. As an alternative, we propose a novel strategy that combines a heuristic approach with PALM. We show its relevance on realistic astrophysical data.},\n  keywords = {blind source separation;matrix decomposition;minimisation;sparse BSS;Sparse Blind Source Separation;realistic astrophysical data;Proximal Alternating Linearized Minimization;optimization strategy;nonconvex penalized matrix factorization problem;medical imaging;multichannel data;heuristic approach;Signal processing algorithms;Sparse matrices;Optimization;Standards;Robustness;Blind source separation;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553238},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437220.pdf},\n}\n\n
\n
\n\n\n
\n Sparse Blind Source Separation (sparse BSS) is a key method to analyze multichannel data in fields ranging from medical imaging to astrophysics. However, since it relies on seeking the solution of a non-convex penalized matrix factorization problem, its performance largely depends on the optimization strategy. In this context, Proximal Alternating Linearized Minimization (PALM) has become a standard algorithm which, despite its theoretical grounding, generally provides poor practical separation results. In this work, we first investigate the origins of these limitations, which are shown to take their roots in the sensitivity to both the initialization and the regularization parameter choice. As an alternative, we propose a novel strategy that combines a heuristic approach with PALM. We show its relevance on realistic astrophysical data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Novel Algorithm for Incremental L1-Norm Principal-Component Analysis.\n \n \n \n \n\n\n \n Dhanaraj, M.; and Markopoulos, P. P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2020-2024, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"NovelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553239,\n  author = {M. Dhanaraj and P. P. Markopoulos},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Novel Algorithm for Incremental L1-Norm Principal-Component Analysis},\n  year = {2018},\n  pages = {2020-2024},\n  abstract = {L1-norm Principal-Component Analysis (L1-PCA) has been shown to exhibit sturdy resistance against outliers among the processed data. In this work, we propose L1-IPCA: an algorithm for incremental L1-PCA, appropriate for big-data and streaming-data applications. The proposed algorithm updates the calculated L1-norm principal components as new data points arrive, conducting a sequence of computationally efficient bit-flipping iterations. Our experimental studies on subspace estimation, image conditioning, and video foreground extraction illustrate that the proposed algorithm attains remarkable outlier resistance at low computational cost.},\n  keywords = {Big Data;image processing;principal component analysis;video signal processing;data applications;L1-norm principal-component analysis;bit-flipping iterations;subspace estimation;image conditioning;video foreground extraction;big-data;Signal processing algorithms;Principal component analysis;Approximation algorithms;Reliability;Resistance;Measurement;Signal processing;Image/video processing;incremental PCA;L1-norm PCA;outliers;online learning},\n  doi = {10.23919/EUSIPCO.2018.8553239},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439387.pdf},\n}\n\n
\n
\n\n\n
\n L1-norm Principal-Component Analysis (L1-PCA) has been shown to exhibit sturdy resistance against outliers among the processed data. In this work, we propose L1-IPCA: an algorithm for incremental L1-PCA, appropriate for big-data and streaming-data applications. The proposed algorithm updates the calculated L1-norm principal components as new data points arrive, conducting a sequence of computationally efficient bit-flipping iterations. Our experimental studies on subspace estimation, image conditioning, and video foreground extraction illustrate that the proposed algorithm attains remarkable outlier resistance at low computational cost.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cross-Layer Optimization in Terminals.\n \n \n \n \n\n\n \n Frascolla, V.; Ah Sue, J.; Mudussir Ayub, M.; Miesniak, K.; Hasholzner, R.; Englisch, J.; and Ben-Ameur, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 802-806, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Cross-LayerPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553240,\n  author = {V. Frascolla and J. {Ah Sue} and M. {Mudussir Ayub} and K. Miesniak and R. Hasholzner and J. Englisch and A. Ben-Ameur},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Cross-Layer Optimization in Terminals},\n  year = {2018},\n  pages = {802-806},\n  abstract = {The innovation pace of the wireless communication world is breathtaking, not only due to the fierce competition, but also due to the yearly cadence with which standards bodies deliver a new set of functionalities and services to be supported. In this very dynamic context, optimizing products and differentiating from the competitors is key for all those who want to be successful in a make-or-break market. This paper therefore describes some key cross-layer optimization techniques of mobile phones, focusing on cellular protocol stack access stratum enhancements, power optimizations to the memory system, and finally the cross-layer impact on tools for SW development.},\n  keywords = {3G mobile communication;cellular radio;mobile computing;mobile handsets;optimisation;protocols;retail data processing;terminals;innovation pace;wireless communication world;fierce competition;yearly cadence;standards bodies;dynamic context;key cross-layer optimization techniques;cellular protocol stack access stratum enhancements;power optimizations;cross-layer impact;make-or-break market;Cross-Iayer optimization;Cellular protocol stack;Memory subsystem;Power optimization;Power modeling;Tooling},\n  doi = {10.23919/EUSIPCO.2018.8553240},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436494.pdf},\n}\n\n
\n
\n\n\n
\n The innovation pace of the wireless communication world is breathtaking, not only due to the fierce competition, but also due to the yearly cadence with which standards bodies deliver a new set of functionalities and services to be supported. In this very dynamic context, optimizing products and differentiating from the competitors is key for all those who want to be successful in a make-or-break market. This paper therefore describes some key cross-layer optimization techniques of mobile phones, focusing on cellular protocol stack access stratum enhancements, power optimizations to the memory system, and finally the cross-layer impact on tools for SW development.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Frequency-Domain Band-MMSE Equalizer for Continuous Phase Modulation over Frequency-Selective Time-Varying Channels.\n \n \n \n \n\n\n \n Chayot, R.; Thomas, N.; Poulliat, C.; Boucheret, M. -.; Lesthievent, G.; and Van Wambeke, N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1287-1291, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553241,\n  author = {R. Chayot and N. Thomas and C. Poulliat and M. -. Boucheret and G. Lesthievent and N. {Van Wambeke}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Frequency-Domain Band-MMSE Equalizer for Continuous Phase Modulation over Frequency-Selective Time-Varying Channels},\n  year = {2018},\n  pages = {1287-1291},\n  abstract = {In this paper, we consider single carrier continuous phase modulations (CPM) over frequency selective time-varying channels. In this context, we propose a new low-complexity frequency-domain equalizer based on the minimum mean square error (MMSE) criterion exploiting efficiently the band structure of the associated channel matrix in the frequency domain. Simulations show that this band-MMSE equalizer exhibits a good performance complexity trade-off compared to existing solutions.},\n  keywords = {equalisers;frequency-domain analysis;least mean squares methods;mean square error methods;phase modulation;time-varying channels;frequency domain band-MMSE equalizer;band structure;associated channel matrix;minimum mean square error criterion;low-complexity frequency-domain equalizer;frequency selective time-varying channels;single carrier continuous phase modulations;frequency-selective time-varying channels;continuous phase modulation;frequency-domain band-MMSE equalizer;Equalizers;Complexity theory;Frequency-domain analysis;TV;Correlation;Matrices;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553241},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437312.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we consider single carrier continuous phase modulations (CPM) over frequency selective time-varying channels. In this context, we propose a new low-complexity frequency-domain equalizer based on the minimum mean square error (MMSE) criterion exploiting efficiently the band structure of the associated channel matrix in the frequency domain. Simulations show that this band-MMSE equalizer exhibits a good performance complexity trade-off compared to existing solutions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Trust-Region Minimization Algorithm for Training Responses (TRMinATR): The Rise of Machine Learning Techniques.\n \n \n \n \n\n\n \n Rafati, J.; DeGuchy, O.; and Marcia, R. F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2015-2019, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Trust-RegionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553243,\n  author = {J. Rafati and O. DeGuchy and R. F. Marcia},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Trust-Region Minimization Algorithm for Training Responses (TRMinATR): The Rise of Machine Learning Techniques},\n  year = {2018},\n  pages = {2015-2019},\n  abstract = {Deep learning is a highly effective machine learning technique for large-scale problems. The optimization of nonconvex functions in deep learning literature is typically restricted to the class of first-order algorithms. These methods rely on gradient information because of the computational complexity associated with the second derivative Hessian matrix inversion and the memory storage required in large scale data problems. The reward for using second derivative information is that the methods can result in improved convergence properties for problems typically found in a non-convex setting such as saddle points and local minima. In this paper we introduce TRMinATR - an algorithm based on the limited memory BFGS quasi-Newton method using trust region - as an alternative to gradient descent methods. TRMinATR bridges the disparity between first order methods and second order methods by continuing to use gradient information to calculate Hessian approximations. 
We provide empirical results on the classification task of the MNIST dataset and show robust convergence with preferred generalization characteristics.},\n  keywords = {approximation theory;computational complexity;gradient methods;Hessian matrices;learning (artificial intelligence);matrix inversion;Newton method;optimisation;storage management;gradient descent methods;Hessian approximations;trust-region minimization algorithm;training responses;nonconvex functions;computational complexity;memory storage;machine learning;deep learning;TRMinATR;Hessian matrix inversion;limited memory BFGS quasiNewton method;Neural networks;Training;Signal processing algorithms;Computer architecture;Optimization;Europe;Quasi-Newton methods;Limited-memory BFGS;Trust-region methods;Line-search methods;Deep learning},\n  doi = {10.23919/EUSIPCO.2018.8553243},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438819.pdf},\n}\n\n
\n
\n\n\n
\n Deep learning is a highly effective machine learning technique for large-scale problems. The optimization of nonconvex functions in deep learning literature is typically restricted to the class of first-order algorithms. These methods rely on gradient information because of the computational complexity associated with the second derivative Hessian matrix inversion and the memory storage required in large scale data problems. The reward for using second derivative information is that the methods can result in improved convergence properties for problems typically found in a non-convex setting such as saddle points and local minima. In this paper we introduce TRMinATR - an algorithm based on the limited memory BFGS quasi-Newton method using trust region - as an alternative to gradient descent methods. TRMinATR bridges the disparity between first order methods and second order methods by continuing to use gradient information to calculate Hessian approximations. We provide empirical results on the classification task of the MNIST dataset and show robust convergence with preferred generalization characteristics.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Hierarchical Latent Mixture Model for Polyphonic Music Analysis.\n \n \n \n \n\n\n \n O'Brien, C.; and Plumbley, M. D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1910-1914, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553244,\n  author = {C. O'Brien and M. D. Plumbley},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Hierarchical Latent Mixture Model for Polyphonic Music Analysis},\n  year = {2018},\n  pages = {1910-1914},\n  abstract = {Polyphonic music transcription is a challenging problem, requiring the identification of a collection of latent pitches which can explain an observed music signal. Many state-of-the-art methods are based on the Non-negative Matrix Factorization (NMF) framework, which itself can be cast as a latent variable model. However, the basic NMF algorithm fails to consider many important aspects of music signals such as low-rank or hierarchical structure and temporal continuity. In this work we propose a probabilistic model to address some of the shortcomings of NMF. Probabilistic Latent Component Analysis (PLCA) provides a probabilistic interpretation of NMF and has been widely applied to problems in audio signal processing. Based on PLCA, we propose an algorithm which represents signals using a collection of low-rank dictionaries built from a base pitch dictionary. This allows each dictionary to specialize to a given chord or interval template which will be used to represent collections of similar frames. 
Experiments on a standard music transcription data set show that our method can successfully decompose signals into a hierarchical and smooth structure, improving the quality of the transcription.},\n  keywords = {audio signal processing;matrix decomposition;music;probability;polyphonic music transcription;latent pitches;latent variable model;basic NMF algorithm;music signals;hierarchical structure;temporal continuity;PLCA;audio signal processing;low-rank dictionaries;base pitch dictionary;standard music transcription data;smooth structure;polyphonic music analysis;nonnegative matrix factorization framework;probabilistic latent component analysis;hierarchical latent mixture model;Multiple signal classification;Dictionaries;Hidden Markov models;Time-frequency analysis;Signal processing algorithms;Signal processing;Music},\n  doi = {10.23919/EUSIPCO.2018.8553244},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437846.pdf},\n}\n\n
\n
\n\n\n
\n Polyphonic music transcription is a challenging problem, requiring the identification of a collection of latent pitches which can explain an observed music signal. Many state-of-the-art methods are based on the Non-negative Matrix Factorization (NMF) framework, which itself can be cast as a latent variable model. However, the basic NMF algorithm fails to consider many important aspects of music signals such as low-rank or hierarchical structure and temporal continuity. In this work we propose a probabilistic model to address some of the shortcomings of NMF. Probabilistic Latent Component Analysis (PLCA) provides a probabilistic interpretation of NMF and has been widely applied to problems in audio signal processing. Based on PLCA, we propose an algorithm which represents signals using a collection of low-rank dictionaries built from a base pitch dictionary. This allows each dictionary to specialize to a given chord or interval template which will be used to represent collections of similar frames. Experiments on a standard music transcription data set show that our method can successfully decompose signals into a hierarchical and smooth structure, improving the quality of the transcription.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n From L1 Minimization to Entropy Minimization: A Novel Approach for Sparse Signal Recovery in Compressive Sensing.\n \n \n \n \n\n\n \n Conde, M. H.; and Loffeld, O.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 568-572, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FromPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553245,\n  author = {M. H. Conde and O. Loffeld},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {From L1 Minimization to Entropy Minimization: A Novel Approach for Sparse Signal Recovery in Compressive Sensing},\n  year = {2018},\n  pages = {568-572},\n  abstract = {The groundbreaking theory of compressive sensing (CS) enables reconstructing many common classes of real-world signals from a number of samples that is well below that prescribed by the Shannon sampling theorem, which exclusively relates to the bandwidth of the signal. In contrast, CS exploits the sparsity or compressibility of the signals in an appropriate basis to reconstruct them from few measurements. A large number of algorithms exist for solving the sparse recovery problem, which can be roughly classified into greedy pursuits and l1 minimization algorithms. Chambolle and Pock's (C&P) primal-dual l1 minimization algorithm has been shown to deliver state-of-the-art results with optimal convergence rate. In this work we present an algorithm for l1 minimization that operates in the null space of the measurement matrix and follows a Nesterov-accelerated gradient descent structure. Restriction to the null space allows the algorithm to operate in a minimal-dimension subspace. A further novelty lies in the fact that the cost function is no longer the l1 norm of the temporal solution, but a weighted sum of its entropy and its l1 norm. The inclusion of the entropy pushes the l1 minimization towards a de facto quasi-l0 minimization, while the l1 norm term avoids divergence. Our algorithm globally outperforms C&P and other recent approaches for l1 minimization in terms of l2 reconstruction error, including a different entropy-based method.},\n  keywords = {compressed sensing;convergence of numerical methods;entropy;gradient methods;greedy algorithms;matrix algebra;minimisation;signal reconstruction;signal sampling;entropy minimization;sparse signal recovery;compressive sensing;CS;sparse recovery problem;minimal-dimension subspace;Shannon sampling theorem;Chambolle and Pock's primal-dual L1 minimization algorithm;C and P;optimal convergence rate;Nesterov-accelerated gradient descent structure;greedy pursuits;Minimization;Entropy;Signal processing algorithms;Kalman filters;Null space;Sensors;Matching pursuit algorithms},\n  doi = {10.23919/EUSIPCO.2018.8553245},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437258.pdf},\n}\n\n
\n
\n\n\n
\n The groundbreaking theory of compressive sensing (CS) enables reconstructing many common classes of real-world signals from a number of samples that is well below that prescribed by the Shannon sampling theorem, which relates exclusively to the bandwidth of the signal. Instead, CS exploits the sparsity or compressibility of the signals in an appropriate basis to reconstruct them from few measurements. A large number of algorithms exist for solving the sparse recovery problem, which can be roughly classified into greedy pursuits and l1 minimization algorithms. Chambolle and Pock's (C&P) primal-dual l1 minimization algorithm has been shown to deliver state-of-the-art results with an optimal convergence rate. In this work we present an algorithm for l1 minimization that operates in the null space of the measurement matrix and follows a Nesterov-accelerated gradient descent structure. Restriction to the null space allows the algorithm to operate in a minimal-dimension subspace. A further novelty lies in the fact that the cost function is no longer the l1 norm of the temporal solution, but a weighted sum of its entropy and its l1 norm. The inclusion of the entropy pushes the l1 minimization towards a de facto quasi-l0 minimization, while the l1 norm term avoids divergence. Our algorithm globally outperforms C&P and other recent approaches for l1 minimization in terms of l2 reconstruction error, including a different entropy-based method.\n
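The weighted entropy-plus-l1 cost described in this abstract can be sketched numerically. This is a minimal illustration only: the normalized-magnitude entropy and the weight `lam` are assumptions for the sketch, not values or definitions taken from the paper.

```python
import numpy as np

def entropy_l1_cost(x, lam=0.5, eps=1e-12):
    """Weighted sum of the entropy of the normalized magnitudes and the
    l1 norm. `lam` (the entropy weight) is an illustrative choice."""
    a = np.abs(x)
    l1 = a.sum()
    p = a / (l1 + eps)                      # normalized magnitude distribution
    entropy = -np.sum(p * np.log(p + eps))  # low entropy <=> concentrated support
    return lam * entropy + l1

# Two vectors with the same l1 norm: the sparser one has lower entropy,
# so the combined cost pushes the solution towards quasi-l0 sparsity.
dense = np.full(8, 1.0)
sparse = np.array([4.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
assert entropy_l1_cost(sparse) < entropy_l1_cost(dense)
```

Because both vectors share the same l1 norm, only the entropy term separates them, which is the mechanism the abstract describes.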
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Independent Deeply Learned Matrix Analysis for Multichannel Audio Source Separation.\n \n \n \n \n\n\n \n Mogami, S.; Sumino, H.; Kitamura, D.; Takamune, N.; Takamichi, S.; Saruwatari, H.; and Ono, N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1557-1561, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"IndependentPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553246,\n  author = {S. Mogami and H. Sumino and D. Kitamura and N. Takamune and S. Takamichi and H. Saruwatari and N. Ono},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Independent Deeply Learned Matrix Analysis for Multichannel Audio Source Separation},\n  year = {2018},\n  pages = {1557-1561},\n  abstract = {In this paper, we address a multichannel audio source separation task and propose a new efficient method called independent deeply learned matrix analysis (IDLMA). IDLMA estimates the demixing matrix in a blind manner and updates the time-frequency structures of each source using a pretrained deep neural network (DNN). Also, we introduce a complex Student's t-distribution as a generalized source generative model including both complex Gaussian and Cauchy distributions. Experiments are conducted using music signals with a training dataset, and the results show the validity of the proposed method in terms of separation accuracy and computational cost.},\n  keywords = {audio signal processing;Gaussian distribution;learning (artificial intelligence);matrix algebra;neural nets;source separation;time-frequency analysis;independent deeply learned matrix analysis;multichannel audio source separation task;IDLMA;pretrained deep neural network;generalized source generative model;demixing matrix estimation;time-frequency structures;DNN;complex Student's t-distribution;Cauchy distributions;complex Gaussian distributions;Spectrogram;Source separation;Time-frequency analysis;Optimization;Covariance matrices;Computational modeling;Estimation;multichannel audio source separation;independent component analysis;deep neural networks},\n  doi = {10.23919/EUSIPCO.2018.8553246},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436568.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we address a multichannel audio source separation task and propose a new efficient method called independent deeply learned matrix analysis (IDLMA). IDLMA estimates the demixing matrix in a blind manner and updates the time-frequency structures of each source using a pretrained deep neural network (DNN). Also, we introduce a complex Student's t-distribution as a generalized source generative model including both complex Gaussian and Cauchy distributions. Experiments are conducted using music signals with a training dataset, and the results show the validity of the proposed method in terms of separation accuracy and computational cost.\n
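The generalization claimed above holds for the Student's t family in general: it contains the Cauchy distribution and, in the limit, the Gaussian. For illustration, the real-valued univariate density and its two limiting cases (the complex-valued form used in the paper is not reproduced here):

```latex
f(x;\nu) = \frac{\Gamma\!\left(\frac{\nu+1}{2}\right)}{\sqrt{\nu\pi}\,\Gamma\!\left(\frac{\nu}{2}\right)}
\left(1+\frac{x^2}{\nu}\right)^{-\frac{\nu+1}{2}},
\qquad
f(x;1)=\frac{1}{\pi\,(1+x^2)} \;\text{(Cauchy)},
\qquad
\lim_{\nu\to\infty} f(x;\nu)=\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2} \;\text{(Gaussian)}.
```

The degrees-of-freedom parameter ν thus interpolates between the heavy-tailed Cauchy model (ν = 1) and the Gaussian model (ν → ∞), which is what makes it a useful generalized source model.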
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Convolutional Recurrent Neural Networks for Urban Sound Classification Using Raw Waveforms.\n \n \n \n \n\n\n \n Sang, J.; Park, S.; and Lee, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2444-2448, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ConvolutionalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553247,\n  author = {J. Sang and S. Park and J. Lee},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Convolutional Recurrent Neural Networks for Urban Sound Classification Using Raw Waveforms},\n  year = {2018},\n  pages = {2444-2448},\n  abstract = {Recent studies have demonstrated that deep learning approaches operating directly on raw data have been used successfully for images and text. This approach has been applied to audio signals as well, but has not yet been fully explored. In this work, we propose a convolutional recurrent neural network that directly uses time-domain waveforms as input in the domain of urban sound classification. The convolutional recurrent neural network is a combined model that uses convolutional neural networks to extract sound features and recurrent neural networks to temporally aggregate the extracted features. The method was evaluated using the UrbanSound8K dataset, the largest public dataset of urban environmental sound sources available for research. The results show how the convolutional recurrent neural network with raw waveforms improves the accuracy of urban sound classification, and demonstrate the effectiveness of its structure with respect to the number of parameters.},\n  keywords = {acoustic signal processing;audio signal processing;feature extraction;feedforward neural nets;learning (artificial intelligence);recurrent neural nets;waveform analysis;raw waveforms;urban sound classification;convolutional recurrent neural network;recurrent neural networks;audio signals;time-domain waveforms;temporal aggregation;feature extraction;Convolution;Feature extraction;Recurrent neural networks;Computer architecture;Task analysis;Training},\n  doi = {10.23919/EUSIPCO.2018.8553247},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435095.pdf},\n}\n\n
\n
\n\n\n
\n Recent studies have demonstrated that deep learning approaches operating directly on raw data have been used successfully for images and text. This approach has been applied to audio signals as well, but has not yet been fully explored. In this work, we propose a convolutional recurrent neural network that directly uses time-domain waveforms as input in the domain of urban sound classification. The convolutional recurrent neural network is a combined model that uses convolutional neural networks to extract sound features and recurrent neural networks to temporally aggregate the extracted features. The method was evaluated using the UrbanSound8K dataset, the largest public dataset of urban environmental sound sources available for research. The results show how the convolutional recurrent neural network with raw waveforms improves the accuracy of urban sound classification, and demonstrate the effectiveness of its structure with respect to the number of parameters.\n
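The pipeline summarized above (raw waveform → convolutional feature extraction → temporal aggregation) can be sketched schematically in NumPy. The kernel, stride, and the mean-pooling stand-in for the recurrent stage are illustrative assumptions, not the paper's trained architecture:

```python
import numpy as np

def conv1d(x, kernel, stride=1):
    """Valid 1-D strided convolution: a stand-in for one learned
    convolutional layer applied to the raw waveform."""
    k = len(kernel)
    n = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], kernel)
                     for i in range(n)])

# Toy pipeline: the conv layer shortens the time axis into frame features,
# then a temporal stage aggregates them (a mean here, as a placeholder for
# the recurrent network described in the abstract).
waveform = np.random.default_rng(0).standard_normal(16000)  # ~1 s at 16 kHz
features = conv1d(waveform, kernel=np.ones(64) / 64, stride=32)
clip_embedding = features.mean()  # temporal aggregation over 499 frames
```

A real model would stack several such layers with learned kernels and feed the frame sequence to an RNN; the sketch only shows how the time axis shrinks from samples to frames to a clip-level representation.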
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Detection and Estimation of Unmodeled Chirps.\n \n \n \n \n\n\n \n Mohanty, S. D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2643-2647, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DetectionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553248,\n  author = {S. D. Mohanty},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Detection and Estimation of Unmodeled Chirps},\n  year = {2018},\n  pages = {2643-2647},\n  abstract = {The detection and estimation of transient chirp signals with unmodeled amplitude envelope and instantaneous frequency evolution is a significant challenge in gravitational wave (GW) data analysis. We review a recently introduced method that addresses this challenge using a spline-based approach. The applicability of this method to the important problem of removing non-transient chirps in GW data, namely narrowband noise of instrumental and terrestrial origin, is investigated.},\n  keywords = {data analysis;gravitational waves;reviews;splines (mathematics);unmodeled chirps;unmodeled amplitude envelope;instantaneous frequency evolution;gravitational wave data analysis;spline-based approach;nontransient chirps;GW data;transient chirp signal estimation;transient chirp signal detection;review;narrowband noise;Chirp;Splines (mathematics);Time-frequency analysis;Estimation;Transient analysis;Signal to noise ratio;Frequency estimation},\n  doi = {10.23919/EUSIPCO.2018.8553248},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437579.pdf},\n}\n\n
\n
\n\n\n
\n The detection and estimation of transient chirp signals with unmodeled amplitude envelope and instantaneous frequency evolution is a significant challenge in gravitational wave (GW) data analysis. We review a recently introduced method that addresses this challenge using a spline-based approach. The applicability of this method to the important problem of removing non-transient chirps in GW data, namely narrowband noise of instrumental and terrestrial origin, is investigated.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Generalized Conditional Maximum Likelihood Estimators in the Large Sample Regime.\n \n \n \n \n\n\n \n Chaumette, E.; Vincent, F.; Renaux, A.; and Galy, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 271-275, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"GeneralizedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553249,\n  author = {E. Chaumette and F. Vincent and A. Renaux and J. Galy},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Generalized Conditional Maximum Likelihood Estimators in the Large Sample Regime},\n  year = {2018},\n  pages = {271-275},\n  abstract = {In modern array processing or spectral analysis, two different signal models are mostly considered: the conditional signal model (CSM) and the unconditional signal model. Both signal models are Gaussian, and the signal source parameters are connected either with the expectation value in the conditional case or with the covariance matrix in the unconditional one. We focus on the CSM resulting from several observations of partially coherent signal sources whose amplitudes undergo a Gaussian random walk between observations. In the proposed generalized CSM, the signal source parameters become connected with both the expectation value and the covariance matrix. Even though an analytical expression of the associated generalized conditional maximum likelihood estimators (GCMLEs) can be easily exhibited, it does not allow computation of GCMLEs in the large sample regime. As a main contribution, we introduce a recursive form of the GCMLEs which allows their computation for any number of combined observations. This recursive form paves the way to assessing the effect of partially coherent amplitudes on the GCMLEs' mean-squared error in the large sample regime. Interestingly, we exhibit nonconsistent GCMLEs in the large sample regime.},\n  keywords = {array signal processing;covariance matrices;Gaussian processes;maximum likelihood estimation;mean square error methods;random processes;signal sources;spectral analysis;modern array processing;spectral analysis;unconditional signal model;signal sources parameters;expectation value;conditional case;covariance matrix;mean-squared error;large sample regime;partially coherent signal sources;signal models;partially coherent amplitudes;GCMLEs;associated generalized conditional maximum likelihood estimators;generalized CSM;Gaussian random walk;Covariance matrices;Computational modeling;Mathematical model;Europe;Signal processing;Arrays;Analytical models},\n  doi = {10.23919/EUSIPCO.2018.8553249},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434964.pdf},\n}\n\n
\n
\n\n\n
\n In modern array processing or spectral analysis, two different signal models are mostly considered: the conditional signal model (CSM) and the unconditional signal model. Both signal models are Gaussian, and the signal source parameters are connected either with the expectation value in the conditional case or with the covariance matrix in the unconditional one. We focus on the CSM resulting from several observations of partially coherent signal sources whose amplitudes undergo a Gaussian random walk between observations. In the proposed generalized CSM, the signal source parameters become connected with both the expectation value and the covariance matrix. Even though an analytical expression of the associated generalized conditional maximum likelihood estimators (GCMLEs) can be easily exhibited, it does not allow computation of GCMLEs in the large sample regime. As a main contribution, we introduce a recursive form of the GCMLEs which allows their computation for any number of combined observations. This recursive form paves the way to assessing the effect of partially coherent amplitudes on the GCMLEs' mean-squared error in the large sample regime. Interestingly, we exhibit nonconsistent GCMLEs in the large sample regime.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Prior Influence on Weiss-Weinstein Bounds for Multiple Change-Point Estimation.\n \n \n \n\n\n \n Bacharach, L.; Renaux, A.; and El Korso, M. N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1627-1631, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553250,\n  author = {L. Bacharach and A. Renaux and M. N. {El Korso}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Prior Influence on Weiss-Weinstein Bounds for Multiple Change-Point Estimation},\n  year = {2018},\n  pages = {1627-1631},\n  abstract = {In this paper, we study the influence of the prior distribution on the Weiss-Weinstein bound (WWB) in the ubiquitous multiple change-point estimation problem. As a by-product, we derive closed-form expressions of the WWB for several commonly used priors. Numerical results reveal some insightful properties and corroborate the proposed theoretical analysis.},\n  keywords = {Bayes methods;maximum likelihood estimation;mean square error methods;parameter estimation;closed form expressions;ubiquitous multiple change-point estimation problem;WWB;Weiss-Weinstein bound;Estimation;Bayes methods;Europe;Mean square error methods;Signal to noise ratio;Time series analysis},\n  doi = {10.23919/EUSIPCO.2018.8553250},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n In this paper, we study the influence of the prior distribution on the Weiss-Weinstein bound (WWB) in the ubiquitous multiple change-point estimation problem. As a by-product, we derive closed-form expressions of the WWB for several commonly used priors. Numerical results reveal some insightful properties and corroborate the proposed theoretical analysis.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Visual Localization in the Presence of Appearance Changes Using the Partial Order Kernel.\n \n \n \n \n\n\n \n Abdollahyan, M.; Cascianelli, S.; Bellocchio, E.; Costante, G.; Ciarfuglia, T. A.; Bianconi, F.; Smeraldi, F.; and Fravolini, M. L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 697-701, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"VisualPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553252,\n  author = {M. Abdollahyan and S. Cascianelli and E. Bellocchio and G. Costante and T. A. Ciarfuglia and F. Bianconi and F. Smeraldi and M. L. Fravolini},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Visual Localization in the Presence of Appearance Changes Using the Partial Order Kernel},\n  year = {2018},\n  pages = {697-701},\n  abstract = {Visual localization across seasons and under varying weather and lighting conditions is a challenging task in robotics. In this paper, we present a new sequence-based approach to visual localization using the Partial Order Kernel (POKer), a convolution kernel for string comparison, that is able to handle appearance changes and is robust to speed variations. We use multiple sequence alignment to construct directed acyclic graph representations of the database image sequences, where sequences of images of the same place acquired at different times are represented as alternative paths in a graph. We then use the POKer to compute the pairwise similarities between these graphs and the query image sequences obtained in a subsequent traversal of the environment, and match the corresponding locations. We evaluated our approach on a dataset which features extreme appearance variations due to seasonal changes. 
The results demonstrate the effectiveness of our approach, where it achieves higher precision and recall than two state-of-the-art baseline methods.},\n  keywords = {directed graphs;graph theory;image sequences;object tracking;visual databases;database image sequences;POKer;query image sequences;extreme appearance variations;seasonal changes;visual localization;appearance changes;Partial Order Kernel;weather;lighting conditions;sequence-based approach;convolution kernel;multiple sequence alignment;graph representations;Image sequences;Databases;Kernel;Europe;Signal processing;Visualization;Lighting;visual localization;partial order graphs;kernel methods},\n  doi = {10.23919/EUSIPCO.2018.8553252},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436699.pdf},\n}\n\n
\n
\n\n\n
\n Visual localization across seasons and under varying weather and lighting conditions is a challenging task in robotics. In this paper, we present a new sequence-based approach to visual localization using the Partial Order Kernel (POKer), a convolution kernel for string comparison, that is able to handle appearance changes and is robust to speed variations. We use multiple sequence alignment to construct directed acyclic graph representations of the database image sequences, where sequences of images of the same place acquired at different times are represented as alternative paths in a graph. We then use the POKer to compute the pairwise similarities between these graphs and the query image sequences obtained in a subsequent traversal of the environment, and match the corresponding locations. We evaluated our approach on a dataset which features extreme appearance variations due to seasonal changes. The results demonstrate the effectiveness of our approach, where it achieves higher precision and recall than two state-of-the-art baseline methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Ensemble Learning for Detection of Short Episodes of Atrial Fibrillation.\n \n \n \n \n\n\n \n Peimankar, A.; and Puthusserypady, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 66-70, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EnsemblePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553253,\n  author = {A. Peimankar and S. Puthusserypady},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Ensemble Learning for Detection of Short Episodes of Atrial Fibrillation},\n  year = {2018},\n  pages = {66-70},\n  abstract = {Early detection of atrial fibrillation (AF) is of great importance to cardiologists in order to help patients who suffer from chronic cardiac arrhythmias. This paper proposes a novel algorithm to detect short episodes of AF using an ensemble framework. Several features are extracted from long-term electrocardiogram (ECG) signals based on heart rate variability (HRV). The most significant subset of features is selected as input to four classifiers. The outputs of these classifiers are then combined for the final detection of AF episodes. Results from an extensive analysis of the proposed algorithm show high classification accuracy (around 85%) and sensitivity (around 92%) for classifying very short episodes of AF (10 beats per segment, which is approximately 6 seconds). The accuracy and sensitivity of the proposed algorithm improve significantly, to 96.46% and 94%, respectively, for slightly longer episodes (60 beats per segment) of AF. Compared to state-of-the-art algorithms, the proposed method shows the potential to be extended to real-time AF detection applications.},\n  keywords = {electrocardiography;learning (artificial intelligence);medical signal processing;signal classification;classifiers;real-time AF detection applications;atrial fibrillation;chronic cardiac arrhythmias;heart rate variability;ensemble learning;long term electrocardiogram signals;ECG signals;Rail to rail inputs;Signal processing algorithms;Electrocardiography;Training;Feature extraction;Heart rate variability;Classification algorithms;Electrocardiogram (ECG);Ensemble learning;Atrial fibrillation;Feature selection;Classification},\n  doi = {10.23919/EUSIPCO.2018.8553253},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437934.pdf},\n}\n\n
\n
\n\n\n
\n Early detection of atrial fibrillation (AF) is of great importance to cardiologists in order to help patients who suffer from chronic cardiac arrhythmias. This paper proposes a novel algorithm to detect short episodes of AF using an ensemble framework. Several features are extracted from long-term electrocardiogram (ECG) signals based on heart rate variability (HRV). The most significant subset of features is selected as input to four classifiers. The outputs of these classifiers are then combined for the final detection of AF episodes. Results from an extensive analysis of the proposed algorithm show high classification accuracy (around 85%) and sensitivity (around 92%) for classifying very short episodes of AF (10 beats per segment, which is approximately 6 seconds). The accuracy and sensitivity of the proposed algorithm improve significantly, to 96.46% and 94%, respectively, for slightly longer episodes (60 beats per segment) of AF. Compared to state-of-the-art algorithms, the proposed method shows the potential to be extended to real-time AF detection applications.\n
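The combination of classifier outputs mentioned above can be illustrated with a simple majority vote. The abstract does not specify the actual combination rule, so majority voting here is an assumption for illustration only:

```python
from collections import Counter

def majority_vote(votes):
    """Combine per-segment labels from several classifiers by majority;
    a stand-in for the paper's (unspecified) combination rule."""
    return Counter(votes).most_common(1)[0][0]

# Four classifiers label one 10-beat ECG segment:
assert majority_vote(["AF", "AF", "normal", "AF"]) == "AF"
```

In practice the four classifiers would each see the selected HRV feature subset; only the final fusion step is sketched here.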
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparse Method for Tip-Timing Signals Analysis with Non Stationary Engine Rotation Frequency.\n \n \n \n \n\n\n \n Bouchain, A.; Vercoutter, A.; Picheral, J.; Talon, A.; and Lahalle, E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1730-1734, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SparsePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553254,\n  author = {A. Bouchain and A. Vercoutter and J. Picheral and A. Talon and E. Lahalle},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse Method for Tip-Timing Signals Analysis with Non Stationary Engine Rotation Frequency},\n  year = {2018},\n  pages = {1730-1734},\n  abstract = {Blade vibrations must be measured in operation to validate blade designs. Tip-timing is one of the classical measurement methods, but its main drawback is the generation of sub-sampled and non-uniformly sampled signals. This paper presents a new sparse method for tip-timing spectral analysis that makes use of engine rotation variations. Assuming that blade vibration signals yield line spectra, a sparse signal model is introduced as a linear system. The solution to the problem is obtained by ADMM (Alternating Direction Method of Multipliers) with an l1-regularization. Results for simulated and real signals are given to illustrate the efficiency of this method. The main advantages of the proposed method are that it provides a fast solution and takes into account the variations of the rotation speed. Results show that this approach reduces the frequency aliasing caused by the low sampling frequency of the measured signals.},\n  keywords = {blades;compressed sensing;engines;signal reconstruction;signal sampling;spectral analysis;vibrational signal processing;tip-timing signal analysis;blade vibration signals;frequency aliasings;sparse signal model;nonstationary engine rotation frequency;Blades;Vibrations;Probes;Engines;Frequency measurement;Frequency modulation;Estimation},\n  doi = {10.23919/EUSIPCO.2018.8553254},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437292.pdf},\n}\n\n
\n
\n\n\n
\n Blade vibrations must be measured in operation to validate blade designs. Tip-timing is one of the classical measurement methods, but its main drawback is the generation of sub-sampled and non-uniformly sampled signals. This paper presents a new sparse method for tip-timing spectral analysis that makes use of engine rotation variations. Assuming that blade vibration signals yield line spectra, a sparse signal model is introduced as a linear system. The solution to the problem is obtained by ADMM (Alternating Direction Method of Multipliers) with an l1-regularization. Results for simulated and real signals are given to illustrate the efficiency of this method. The main advantages of the proposed method are that it provides a fast solution and takes into account the variations of the rotation speed. Results show that this approach reduces the frequency aliasing caused by the low sampling frequency of the measured signals.\n
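An ADMM iteration for a generic l1-regularized least-squares problem of the kind mentioned above can be sketched as follows. This is a standard lasso solver with hypothetical dimensions and data, not the paper's tip-timing system matrix:

```python
import numpy as np

def admm_l1(A, b, lam=0.01, rho=1.0, iters=300):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with ADMM.
    The z-update is the soft-thresholding (proximal) step of the l1 norm."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))  # cached for all iterations
    Atb = A.T @ b
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))             # quadratic subproblem
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # shrinkage
        u += x - z                                # dual update
    return z

# Illustrative data: a sparse spectrum (two lines) observed through a
# random, non-uniform-like measurement matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 12))
x_true = np.zeros(12)
x_true[[2, 7]] = [1.0, -2.0]
x_hat = admm_l1(A, A @ x_true)
```

The sparsity-promoting shrinkage step is what suppresses the spurious aliased lines that a plain least-squares fit of sub-sampled tip-timing data would produce.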
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Time Modulated Array Controlled by Periodic Pulsed Signals.\n \n \n \n \n\n\n \n Maneiro-Catoira, R.; Brégains, J.; García-Naya, J. A.; and Castedo, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 637-641, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"TimePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553255,\n  author = {R. Maneiro-Catoira and J. Brégains and J. A. García-Naya and L. Castedo},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Time Modulated Array Controlled by Periodic Pulsed Signals},\n  year = {2018},\n  pages = {637-641},\n  abstract = {In recent years, time-modulated array (TMA) architectures have considered different switching network designs, focusing primarily on radiation pattern synthesis. Unfortunately, switching networks exhibit a noticeable energy leakage due to the time the array elements remain off, yielding a strong degradation of the signal-to-noise ratio. To improve the corresponding network efficiency, configurations with multiple switches per antenna element, or even with single-pole multiple-throw switches, have been proposed. However, network switching efficiency improvements sacrifice beamforming flexibility. To overcome this limitation, recent works have proposed replacing the switches with variable-gain amplifiers controlled by time-variant pulses based exclusively on sinusoids. In this work, we describe a TMA architecture based on ultra-wideband analog multipliers which is suitable for all kinds of periodic pulses.},\n  keywords = {array signal processing;phase shifters;switches;switching networks;periodic pulsed signals;time-modulated array architectures;radiation pattern synthesis;switching networks;array elements;signal-to-noise ratio;multiple switches;antenna element;single-pole multiple-throw switches;efficiency improvements;time-variant pulses;TMA architecture;periodic pulses;energy leakage;network efficiency;switching network designs;Harmonic analysis;Array signal processing;Antenna radiation patterns;Switches;Modulation;Europe;Antenna arrays;time-modulated arrays;beamforming},\n  doi = {10.23919/EUSIPCO.2018.8553255},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437925.pdf},\n}\n\n
\n
\n\n\n
\n In recent years, time-modulated array (TMA) architectures have considered different switching network designs, focusing primarily on radiation pattern synthesis. Unfortunately, switching networks exhibit a noticeable energy leakage due to the time the array elements remain off, yielding a strong degradation of the signal-to-noise ratio. To improve the corresponding network efficiency, configurations with multiple switches per antenna element, or even with single-pole multiple-throw switches, have been proposed. However, network switching efficiency improvements sacrifice beamforming flexibility. To overcome this limitation, recent works have proposed replacing the switches with variable-gain amplifiers controlled by time-variant pulses based exclusively on sinusoids. In this work, we describe a TMA architecture based on ultra-wideband analog multipliers which is suitable for all kinds of periodic pulses.\n
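The energy-leakage discussion above comes down to the harmonic content of periodic switching pulses. A quick numerical check with an on-off pulse (period length and 25% duty cycle chosen arbitrarily for illustration) shows that the DC Fourier coefficient equals the duty cycle, while the remaining signal power leaks into harmonics:

```python
import numpy as np

# One period of an on-off switching pulse with 25% duty cycle.
N = 1024
g = np.zeros(N)
g[: N // 4] = 1.0

# Complex Fourier coefficients c_k = (1/N) * sum_n g[n] e^{-j 2 pi k n / N}.
c = np.fft.fft(g) / N

# The DC term equals the duty cycle; by Parseval, the rest of the
# pulse power is spread over the harmonics (the leaked energy).
assert abs(c[0] - 0.25) < 1e-12
harmonic_power = np.sum(np.abs(c[1:]) ** 2)  # total power is 0.25 here
```

A purely sinusoidal gain profile would concentrate its power in a single harmonic pair instead, which is the motivation for the pulse-shaped, multiplier-based control the abstract describes.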
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n DQLC Optimization for Joint Source Channel Coding of Correlated Sources over Fading MAC.\n \n \n \n \n\n\n \n Suárez-Casal, P.; Fresnedo, O.; and Castedo, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1292-1296, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DQLCPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553256,\n  author = {P. Suárez-Casal and O. Fresnedo and L. Castedo},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {DQLC Optimization for Joint Source Channel Coding of Correlated Sources over Fading MAC},\n  year = {2018},\n  pages = {1292-1296},\n  abstract = {Distributed Quantizer Linear Coding (DQLC) is a joint source-channel coding scheme that encodes and transmits distributed Gaussian sources over a MAC under severe delay constraints, providing significant gains when compared to uncoded transmissions. DQLC, however, relies on the appropriate optimization of its parameters depending on source correlation, channel state and noise variance. In this work, we propose a parameter optimization strategy that relies on the lattice structure of the mapping, reduces the number of parameters to estimate, and exhibits lower computational complexity.},\n  keywords = {combined source-channel coding;computational complexity;fading channels;Gaussian distribution;multi-access systems;multiuser channels;optimisation;source correlation;parameter optimization strategy;DQLC optimization;fading MAC;joint source-channel coding scheme;joint source channel coding;distributed quantizer linear coding optimization;distributed Gaussian source encoding;distributed Gaussian source transmission;computational complexity;fading multiple access channel;Decoding;Optimization;Encoding;Fading channels;Correlation;Lattices;Covariance matrices},\n  doi = {10.23919/EUSIPCO.2018.8553256},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437672.pdf},\n}\n\n
\n
\n\n\n
\n Distributed Quantizer Linear Coding (DQLC) is a joint source-channel coding scheme that encodes and transmits distributed Gaussian sources over a MAC under severe delay constraints, providing significant gains when compared to uncoded transmissions. DQLC, however, relies on the appropriate optimization of its parameters depending on source correlation, channel state and noise variance. In this work, we propose a parameter optimization strategy that relies on the lattice structure of the mapping, reduces the number of parameters to estimate, and exhibits lower computational complexity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Speaking Rate Changes Affect Phone Durations Differently for Neutral and Emotional Speech.\n \n \n \n \n\n\n \n Gao, Y.; and Birkholz, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2070-2074, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SpeakingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553257,\n  author = {Y. Gao and P. Birkholz},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Speaking Rate Changes Affect Phone Durations Differently for Neutral and Emotional Speech},\n  year = {2018},\n  pages = {2070-2074},\n  abstract = {In this study we examined whether phone durations are affected differently when the speaking rate changes in neutral and emotional speech. To that end, we analyzed two sets of sentences: In the first set, each sentence was spoken with explicitly different speaking rates (slow, normal, fast) with a neutral emotion. In the second set, each sentence was spoken with seven emotions with implicitly different speaking rates. For each spoken sentence, the mean value and the standard deviation of phone durations were related to those of the corresponding normal or neutral counterparts. We found that the normalized relative standard deviation (NRSD) did not solely depend on the speaking rate, but also on the emotion. Based on these findings, we analyzed the listening effort and the naturalness of synthetic utterances, where the mean and the standard deviation of phone durations were modified independently of each other. For fast speech, listening effort and naturalness improved significantly, when the standard deviation of phone durations was reduced less than the mean phone durations. 
These results can be applied to time-compression strategies for synthetic speech, voice morphing, and realistic synthesis of emotional speech.},\n  keywords = {emotion recognition;speech synthesis;speaking rate changes;emotional speech;neutral emotion;spoken sentence;normalized relative standard deviation;mean phone durations;synthetic speech;synthetic utterances;voice morphing;time-compression strategies;neutral speech;Standards;Speech;Databases;Timing;Europe;Speech processing;speech synthesis;time compression and expansion;variability of duration;listening effort;naturalness},\n  doi = {10.23919/EUSIPCO.2018.8553257},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570430385.pdf},\n}\n\n
\n
\n\n\n
\n In this study we examined whether phone durations are affected differently when the speaking rate changes in neutral and emotional speech. To that end, we analyzed two sets of sentences: In the first set, each sentence was spoken with explicitly different speaking rates (slow, normal, fast) with a neutral emotion. In the second set, each sentence was spoken with seven emotions with implicitly different speaking rates. For each spoken sentence, the mean value and the standard deviation of phone durations were related to those of the corresponding normal or neutral counterparts. We found that the normalized relative standard deviation (NRSD) did not solely depend on the speaking rate, but also on the emotion. Based on these findings, we analyzed the listening effort and the naturalness of synthetic utterances, where the mean and the standard deviation of phone durations were modified independently of each other. For fast speech, listening effort and naturalness improved significantly, when the standard deviation of phone durations was reduced less than the mean phone durations. These results can be applied to time-compression strategies for synthetic speech, voice morphing, and realistic synthesis of emotional speech.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Minimum Length Solution for One-Dimensional Discrete Phase Retrieval Problem.\n \n \n \n \n\n\n \n Rusu, C.; and Astola, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 470-474, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"MinimumPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553258,\n  author = {C. Rusu and J. Astola},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Minimum Length Solution for One-Dimensional Discrete Phase Retrieval Problem},\n  year = {2018},\n  pages = {470-474},\n  abstract = {Recently it has been shown that the one-dimensional discrete phase retrieval problem may not have always a causal solution for certain input magnitude data, but it has been proved that the extended form of the one-dimensional discrete phase retrieval problem has always a causal solution within the same conditions. In this work we are looking for the minimum length solution for one-dimensional discrete phase retrieval problem. The Non-uniform Discrete Fourier Transform based approach is introduced and experimental results are also presented.},\n  keywords = {discrete Fourier transforms;signal restoration;one-dimensional discrete phase retrieval problem;minimum length solution;causal solution;nonuniform discrete Fourier transform;Discrete Fourier transforms;Signal processing;Frequency conversion;Europe;Iterative algorithms;Correlation},\n  doi = {10.23919/EUSIPCO.2018.8553258},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437595.pdf},\n}\n\n
\n
\n\n\n
\n Recently it has been shown that the one-dimensional discrete phase retrieval problem may not always have a causal solution for certain input magnitude data, but it has been proved that the extended form of the problem always has a causal solution under the same conditions. In this work we look for the minimum length solution of the one-dimensional discrete phase retrieval problem. A Non-uniform Discrete Fourier Transform based approach is introduced, and experimental results are presented.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Anti-Forensics of JPEG Compression Using Generative Adversarial Networks.\n \n \n \n \n\n\n \n Luo, Y.; Zi, H.; Zhang, Q.; and Kang, X.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 952-956, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Anti-ForensicsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553259,\n  author = {Y. Luo and H. Zi and Q. Zhang and X. Kang},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Anti-Forensics of JPEG Compression Using Generative Adversarial Networks},\n  year = {2018},\n  pages = {952-956},\n  abstract = {JPEG compression is one of the most popular image compression methods. The manipulation history of JPEG compression provides evidence of operational information about the device or software utilized to generate an image. Besides, JPEG compression introduces complex compression artifacts, particularly the blocking artifacts, ringing effects and blurring, which lower the image visual quality. Therefore, erasing the traces left by JPEG compression is of great importance. To solve this problem, we present a JPEG compression anti-forensic method adopting the framework of generative adversarial networks (GANs). This architecture consists of a generator and a discriminator, where the generator can automatically learn how to hide the traces left by JPEG compression during the optimization process against the discriminator. Through extensive experiments, it is demonstrated that the anti-forensically modified images generated by our method can deceive the existing JPEG compression detectors and have very good visual quality.},\n  keywords = {data compression;image coding;generative adversarial networks;complex compression artifacts;JPEG compression anti-forensic method;image compression methods;JPEG compression detectors;visual quality;Image coding;Transform coding;Generators;Training;Generative adversarial networks;Detectors;Convolution;JPEG compression;anti-forensics;generative adversarial networks},\n  doi = {10.23919/EUSIPCO.2018.8553259},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437133.pdf},\n}\n\n
\n
\n\n\n
\n JPEG compression is one of the most popular image compression methods. The manipulation history of JPEG compression provides evidence of operational information about the device or software utilized to generate an image. Besides, JPEG compression introduces complex compression artifacts, particularly the blocking artifacts, ringing effects and blurring, which lower the image visual quality. Therefore, erasing the traces left by JPEG compression is of great importance. To solve this problem, we present a JPEG compression anti-forensic method adopting the framework of generative adversarial networks (GANs). This architecture consists of a generator and a discriminator, where the generator can automatically learn how to hide the traces left by JPEG compression during the optimization process against the discriminator. Through extensive experiments, it is demonstrated that the anti-forensically modified images generated by our method can deceive the existing JPEG compression detectors and have very good visual quality.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Sequence-Filter Joint Optimization.\n \n \n \n \n\n\n \n Tan, U.; Rabaste, O.; Adnet, C.; and Ovarlez, J. -.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2335-2339, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553260,\n  author = {U. Tan and O. Rabaste and C. Adnet and J. -. Ovarlez},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Sequence-Filter Joint Optimization},\n  year = {2018},\n  pages = {2335-2339},\n  abstract = {This paper deals with the optimisation of a sequence and its associated mismatched filter. This question has already been addressed in the literature, by alternatively solving two minimisation problem, one per sequence, meaning that the optimisation is never performed on both sequences at the same time. So, this article introduces some methods in order to optimise jointly (i.e. simultaneously) a sequence and its filter. First, a gradient descent is basically applied on both sequences. Second, another algorithm studies an objective function that includes the optimal mismatched filter that minimises the Integrated Sidelobe Level (or the Peak-to-Sidelobe Level Ratio). Simulations show promising results in terms of sidelobes: as expected, a joint optimisation seems to perform better than a separate one. As both methods behave differently, their choice will depend on the applications.},\n  keywords = {filtering theory;gradient methods;minimisation;Peak-to-Sidelobe Level Ratio;sequence-filter joint optimization;associated mismatched filter;minimisation problem;optimal mismatched filter;Integrated Sidelobe Level;objective function;gradient descent;Cost function;Signal to noise ratio;Europe;Signal processing algorithms;Linear programming;Gradient descent;mismatched filter;optimisation methods;waveform design},\n  doi = {10.23919/EUSIPCO.2018.8553260},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437915.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the optimisation of a sequence and its associated mismatched filter. This question has already been addressed in the literature by alternately solving two minimisation problems, one per sequence, so that the optimisation is never performed on both sequences at the same time. This article therefore introduces methods to optimise a sequence and its filter jointly (i.e. simultaneously). First, gradient descent is applied directly to both sequences. Second, another algorithm optimises an objective function that includes the optimal mismatched filter minimising the Integrated Sidelobe Level (or the Peak-to-Sidelobe Level Ratio). Simulations show promising results in terms of sidelobes: as expected, joint optimisation seems to perform better than separate optimisation. As the two methods behave differently, the choice between them will depend on the application.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n How Many Channels are Enough? Evaluation of Tonic Cranial Muscle Artefact Reduction Using ICA with Different Numbers of EEG Channels.\n \n \n \n \n\n\n \n Janani, A. S.; Grummett, T. S.; Bakhshayesh, H.; Lewis, T. W.; Willoughby, J. O.; and Pope, K. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 101-105, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"HowPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553261,\n  author = {A. S. Janani and T. S. Grummett and H. Bakhshayesh and T. W. Lewis and J. O. Willoughby and K. J. Pope},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {How Many Channels are Enough? Evaluation of Tonic Cranial Muscle Artefact Reduction Using ICA with Different Numbers of EEG Channels},\n  year = {2018},\n  pages = {101-105},\n  abstract = {Scalp electrical recordings, or electroencephalograms (EEG), are heavily contaminated by cranial and cervical muscle activity from as low as 20 hertz, even in relaxed conditions. It is therefore necessary to reduce or remove this contamination to enable reliable exploration of brain neurophysiological responses. Scalp measurements record activity from many sources, including neural and muscular. Independent Component Analysis (ICA) produces components ideally corresponding to separate sources, but the number of components is limited by the number of EEG channels. In practice, at most 30% of components are cleanly separate sources. Increasing the number of channels results in more separate components, but with a significant increase in costs of data collection and computation. Here we present results to assist in selecting an appropriate number of channels. Our unique database of pharmacologically paralysed subjects provides a way to objectively compare different approaches to achieving an ideal, muscle free EEG recording. We evaluated an automatic muscle-removing approach, based on ICA, with different numbers of EEG channels: 21, 32, 64, and 115. 
Our results show that, for a fixed length of data, 21 channels is insufficient to reduce tonic muscle artefact, and that increasing the number of channels to 115 does result in better tonic muscle artefact reduction.},\n  keywords = {electroencephalography;independent component analysis;medical signal processing;muscle;neurophysiology;source separation;tonic cranial muscle artefact reduction;ICA;EEG channels;scalp electrical recordings;cranial muscle activity;cervical muscle activity;relaxed conditions;contamination;brain neurophysiological responses;Independent Component Analysis;data collection;computation;muscle free EEG recording;automatic muscle-removing approach;tonic muscle artefact reduction;pharmacologically paralysed subjects;electroencephalograms;scalp measurements;frequency 20.0 Hz;Muscles;Electroencephalography;Task analysis;Electromyography;Cranial;Australia;Pollution measurement;Independent Component Analysis;muscle reduction;number of channels;electroencephalogram;electromyogram},\n  doi = {10.23919/EUSIPCO.2018.8553261},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436901.pdf},\n}\n\n
\n
\n\n\n
\n Scalp electrical recordings, or electroencephalograms (EEG), are heavily contaminated by cranial and cervical muscle activity from as low as 20 hertz, even in relaxed conditions. It is therefore necessary to reduce or remove this contamination to enable reliable exploration of brain neurophysiological responses. Scalp measurements record activity from many sources, including neural and muscular. Independent Component Analysis (ICA) produces components ideally corresponding to separate sources, but the number of components is limited by the number of EEG channels. In practice, at most 30% of components are cleanly separate sources. Increasing the number of channels results in more separate components, but with a significant increase in costs of data collection and computation. Here we present results to assist in selecting an appropriate number of channels. Our unique database of pharmacologically paralysed subjects provides a way to objectively compare different approaches to achieving an ideal, muscle free EEG recording. We evaluated an automatic muscle-removing approach, based on ICA, with different numbers of EEG channels: 21, 32, 64, and 115. Our results show that, for a fixed length of data, 21 channels is insufficient to reduce tonic muscle artefact, and that increasing the number of channels to 115 does result in better tonic muscle artefact reduction.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast Multi-Lane Detection and Modeling for Embedded Platforms.\n \n \n \n \n\n\n \n Nieto, M.; Garcia, L.; Senderos, O.; and Otaegui, O.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1032-1036, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FastPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553262,\n  author = {M. Nieto and L. Garcia and O. Senderos and O. Otaegui},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Fast Multi-Lane Detection and Modeling for Embedded Platforms},\n  year = {2018},\n  pages = {1032-1036},\n  abstract = {Most Advanced Driver Assistance Systems (ADAS) or Autonomous Driving (AD) functions require the ability to perceive the road and its elements around the ego-vehicle. The precise localization of other road participants (e.g. vehicles, pedestrians, traffic signs) is demanded at lane level, to enable higher semantic analysis of the scene. This requires a lane detection and modeling stage able to provide the number of existing lanes, and their precise local geometry. The current trend in computer vision is to use the full power of GPU technology with deep learning-based detection methods, which requires costly high-end platforms, and difficult the co-existence with other heavy-processing functions (e.g. vehicle detection), specially critical when considering a single platform processing multiple cameras and Laser scanners. In this paper we propose an efficient lane detection and modeling pipeline, composed of optimized steps for segmentation, transformation, modeling, control and tracking. The method is able to detect multiple lanes and their curvature, in continuous function, with minimal processing power requirements, thus enabling its implementation into low-cost embedded platforms. 
Experimental results support our claims, and demonstrate that the proposed method outperforms other methods in the literature in computational cost, while keeping good accuracy results for a variety of road types.},\n  keywords = {cameras;computer vision;driver information systems;embedded systems;learning (artificial intelligence);object detection;road traffic;road vehicles;fast multilane detection;ADAS;ego-vehicle;precise localization;road participants;traffic signs;lane level;modeling stage;precise local geometry;computer vision;GPU technology;deep learning-based detection methods;high-end platforms;heavy-processing functions;single platform processing multiple cameras;efficient lane detection;modeling pipeline;multiple lanes;continuous function;minimal processing power requirements;low-cost embedded platforms;computational cost;road types;semantic analysis;Computational modeling;Roads;Pipelines;Computational efficiency;Analytical models;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553262},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437144.pdf},\n}\n\n
\n
\n\n\n
\n Most Advanced Driver Assistance Systems (ADAS) or Autonomous Driving (AD) functions require the ability to perceive the road and its elements around the ego-vehicle. The precise localization of other road participants (e.g. vehicles, pedestrians, traffic signs) is demanded at lane level, to enable higher semantic analysis of the scene. This requires a lane detection and modeling stage able to provide the number of existing lanes and their precise local geometry. The current trend in computer vision is to use the full power of GPU technology with deep learning-based detection methods, which requires costly high-end platforms and hinders co-existence with other processing-heavy functions (e.g. vehicle detection); this is especially critical when a single platform processes multiple cameras and laser scanners. In this paper we propose an efficient lane detection and modeling pipeline, composed of optimized steps for segmentation, transformation, modeling, control and tracking. The method is able to detect multiple lanes and their curvature, modeled as continuous functions, with minimal processing power requirements, thus enabling its implementation on low-cost embedded platforms. Experimental results support our claims, and demonstrate that the proposed method outperforms other methods in the literature in computational cost, while keeping good accuracy for a variety of road types.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Coupled Autoencoder Based Reconstruction of Images from Compressively Sampled Measurements.\n \n \n \n \n\n\n \n Gupta, K.; and Bhowmick, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1067-1071, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CoupledPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553263,\n  author = {K. Gupta and B. Bhowmick},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Coupled Autoencoder Based Reconstruction of Images from Compressively Sampled Measurements},\n  year = {2018},\n  pages = {1067-1071},\n  abstract = {This work addresses the problem of reconstructing images from their lower dimensional random projections using Coupled Autoencoder (CAE). Traditionally, Compressed Sensing (CS) based techniques have been employed for this task. CS based techniques are iterative in nature; hence inversion process is time-consuming and cannot be deployed for the real-time reconstruction process. These inversion processes are transductive in nature. With the recent development in deep learning - auto encoders, CNN based architectures have been used for learning inversion in an inductive setup. The training period for inductive learning is large but is very fast during application. But these approaches work only on the signal domain and not on the measurement domain. We show the application of CAE, which can work directly from the measurement domain. We compare CAE with a Dictionary learning based coupling setup and a recently proposed CNN based CS reconstruction algorithm. 
We show reconstruction capability of CAE in terms of PSNR and SSIM on a standard set of images with measurement rates of 0.04 and 0.25.},\n  keywords = {compressed sensing;image reconstruction;learning (artificial intelligence);Compressed Sensing based techniques;CS based techniques;inversion process;real-time reconstruction process;deep learning - auto encoders;CNN based architectures;learning inversion;inductive setup;inductive learning;signal domain;measurement domain;CAE;Dictionary learning based coupling setup;CS reconstruction algorithm;reconstruction capability;measurement rates;Autoencoder based reconstruction;compressively sampled measurements;Coupled Autoencoder;CNN;dimensional random projections;inverse problem;reconstruction;inductive learning;transfer learning;autoencoders},\n  doi = {10.23919/EUSIPCO.2018.8553263},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437294.pdf},\n}\n\n
\n
\n\n\n
\n This work addresses the problem of reconstructing images from their lower dimensional random projections using a Coupled Autoencoder (CAE). Traditionally, Compressed Sensing (CS) based techniques have been employed for this task. CS based techniques are iterative in nature; hence the inversion process is time-consuming and cannot be deployed for real-time reconstruction. These inversion processes are transductive in nature. With recent developments in deep learning, autoencoder- and CNN-based architectures have been used for learning the inversion in an inductive setup. Training for inductive learning takes a long time, but application is very fast. However, these approaches work only in the signal domain and not in the measurement domain. We show the application of CAE, which can work directly from the measurement domain. We compare CAE with a dictionary learning based coupling setup and a recently proposed CNN-based CS reconstruction algorithm. We show the reconstruction capability of CAE in terms of PSNR and SSIM on a standard set of images with measurement rates of 0.04 and 0.25.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-Frequency Phase Retrieval from Noisy Data.\n \n \n \n \n\n\n \n Katkovnik, V.; and Egiazarian, K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2200-2204, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-FrequencyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553264,\n  author = {V. Katkovnik and K. Egiazarian},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-Frequency Phase Retrieval from Noisy Data},\n  year = {2018},\n  pages = {2200-2204},\n  abstract = {The phase retrieval from multi-frequency intensity (power) observations is considered. The object to be reconstructed is complex-valued. A novel algorithm is presented that accomplishes both the object phase (absolute phase) retrieval and denoising for Poissonian and Gaussian measurements. The algorithm is derived from the maximum likelihood formulation with Block Matching 3D (BM3D) sparsity priors. These priors result in two filtering: one in the complex domain for complex-valued multi-frequency object images and another one in the real domain for the object absolute phase. The algorithm is iterative with alternating projections between the object and measurement variables. The simulation experiments are produced for Fourier transform image formation and random phase modulations of the object, then observations are random object diffraction patterns. 
The simulation results demonstrate the success of the algorithm for reconstruction of the complex phase objects with the high-accuracy performance even for a high dynamic range of the absolute phase and very noisy data.},\n  keywords = {Fourier transforms;image denoising;image matching;image reconstruction;image retrieval;iterative methods;maximum likelihood estimation;phase modulation;stereo image processing;object absolute phase;Fourier transform image formation;random phase modulations;random object diffraction patterns;complex phase objects;noisy data;multifrequency phase retrieval;multifrequency intensity observations;object phase retrieval;maximum likelihood formulation;Block Matching 3D sparsity priors;BM3D;complex domain;complex-valued multifrequency object images;Noise measurement;Signal processing algorithms;Image reconstruction;Maximum likelihood estimation;Minimization;Signal processing;Optimization},\n  doi = {10.23919/EUSIPCO.2018.8553264},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570430735.pdf},\n}\n\n
\n
\n\n\n
\n The phase retrieval from multi-frequency intensity (power) observations is considered. The object to be reconstructed is complex-valued. A novel algorithm is presented that accomplishes both the object phase (absolute phase) retrieval and denoising for Poissonian and Gaussian measurements. The algorithm is derived from the maximum likelihood formulation with Block Matching 3D (BM3D) sparsity priors. These priors result in two filtering steps: one in the complex domain for complex-valued multi-frequency object images and another in the real domain for the object absolute phase. The algorithm is iterative, with alternating projections between the object and measurement variables. The simulation experiments use Fourier transform image formation and random phase modulations of the object, so the observations are random object diffraction patterns. The simulation results demonstrate the success of the algorithm in reconstructing complex phase objects with high accuracy, even for a high dynamic range of the absolute phase and very noisy data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Enhanced Time-Frequency Masking by Using Neural Networks for Monaural Source Separation in Reverberant Room Environments.\n \n \n \n \n\n\n \n Sun, Y.; Wang, W.; Chambers, J. A.; and Naqvi, S. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1647-1651, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EnhancedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553265,\n  author = {Y. Sun and W. Wang and J. A. Chambers and S. M. Naqvi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Enhanced Time-Frequency Masking by Using Neural Networks for Monaural Source Separation in Reverberant Room Environments},\n  year = {2018},\n  pages = {1647-1651},\n  abstract = {Deep neural networks (DNNs) have been used for dereverberation and denosing in the monaural source separation problem. However, the performance of current state-of-the-art methods is limited, particularly when applied in highly reverberant room environments. In this paper, we propose an enhanced time-frequency (T-F) mask to improve the separation performance. The ideal enhanced mask (IEM) consists of the dereverberation mask (DM) and the ideal ratio mask (IRM). The DM is specifically applied to eliminate the reverberations in the speech mixture and the IRM helps in denoising. The IEEE and the TIMIT corpora with real room impulse responses (RIRs) and noise from the NOISEX dataset are used to generate speech mixtures for evaluations. 
The proposed method outperforms the state-of-the-art methods specifically in highly reverberant and noisy room environments.},\n  keywords = {acoustic signal processing;blind source separation;neural nets;reverberation;signal denoising;source separation;speech processing;time-frequency analysis;enhanced time-frequency masking;deep neural networks;monaural source separation problem;current state-of-the-art methods;highly reverberant room environments;separation performance;ideal enhanced mask;dereverberation mask;DM;ideal ratio mask;IRM;reverberations;speech mixture;noisy room environments;DNN;Signal to noise ratio;Training;Testing;Europe;Source separation;Production facilities;source separation;reverberant room environments;dereverberation;time-frequency mask},\n  doi = {10.23919/EUSIPCO.2018.8553265},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570432482.pdf},\n}\n\n
\n
\n\n\n
\n Deep neural networks (DNNs) have been used for dereverberation and denoising in the monaural source separation problem. However, the performance of current state-of-the-art methods is limited, particularly when applied in highly reverberant room environments. In this paper, we propose an enhanced time-frequency (T-F) mask to improve the separation performance. The ideal enhanced mask (IEM) consists of the dereverberation mask (DM) and the ideal ratio mask (IRM). The DM is specifically applied to eliminate the reverberations in the speech mixture and the IRM helps in denoising. The IEEE and the TIMIT corpora with real room impulse responses (RIRs) and noise from the NOISEX dataset are used to generate speech mixtures for evaluations. The proposed method outperforms the state-of-the-art methods specifically in highly reverberant and noisy room environments.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparse Multiple Kernel Learning: Support Identification via Mirror Stratifiability.\n \n \n \n \n\n\n \n Garrigos, G.; Rosasco, L.; and Villa, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1077-1081, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SparsePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553267,\n  author = {G. Garrigos and L. Rosasco and S. Villa},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse Multiple Kernel Learning: Support Identification via Mirror Stratifiability},\n  year = {2018},\n  pages = {1077-1081},\n  abstract = {In statistical machine learning, kernel methods allow one to consider infinite dimensional feature spaces with a computational cost that only depends on the number of observations. This is usually done by solving an optimization problem depending on a data fit term and a suitable regularizer. In this paper we consider feature maps which are the concatenation of a fixed, possibly large, set of simpler feature maps. The penalty is a sparsity inducing one, promoting solutions depending only on a small subset of the features. The group lasso problem is a special case of this more general setting. We show that one of the most popular optimization algorithms to solve the regularized objective function, the forward-backward splitting method, allows one to perform feature selection in a stable manner. In particular, we prove that the set of relevant features is identified by the algorithm after a finite number of iterations if a suitable qualification condition holds. Our analysis relies on the notions of stratification and mirror stratifiability.},\n  keywords = {iterative methods;learning (artificial intelligence);optimisation;regularized objective function;popular optimization algorithms;general setting;group lasso problem;sparsity inducing;simpler feature maps;suitable regularizer;optimization problem;computational cost;infinite dimensional feature spaces;kernel methods;statistical machine learning;support identification;sparse multiple kernel learning;mirror stratifiability;suitable qualification condition;feature selection;forward-backward splitting method;Kernel;Signal processing algorithms;Mercury (metals);Europe;Hilbert space;Signal processing;Mirrors;Multiple Kernel Learning;Feature Selection;Group Sparsity;Support recovery},\n  doi = {10.23919/EUSIPCO.2018.8553267},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438397.pdf},\n}\n\n
\n
\n\n\n
\n In statistical machine learning, kernel methods allow one to consider infinite dimensional feature spaces with a computational cost that only depends on the number of observations. This is usually done by solving an optimization problem depending on a data fit term and a suitable regularizer. In this paper we consider feature maps which are the concatenation of a fixed, possibly large, set of simpler feature maps. The penalty is a sparsity inducing one, promoting solutions depending only on a small subset of the features. The group lasso problem is a special case of this more general setting. We show that one of the most popular optimization algorithms to solve the regularized objective function, the forward-backward splitting method, allows one to perform feature selection in a stable manner. In particular, we prove that the set of relevant features is identified by the algorithm after a finite number of iterations if a suitable qualification condition holds. Our analysis relies on the notions of stratification and mirror stratifiability.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rethinking Compressive Sensing.\n \n \n \n \n\n\n \n Campobello, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1765-1769, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RethinkingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553268,\n  author = {G. Campobello},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Rethinking Compressive Sensing},\n  year = {2018},\n  pages = {1765-1769},\n  abstract = {In this paper we show that Compressive Sensing (CS) can be cast as an impulse response estimation problem. Using this interpretation we re-obtain some theoretical results of CS in a simple manner. Moreover, we prove that in the case of a randomly generated sensing matrix, reconstruction probability depends on the kurtosis of the distribution used for its generation.},\n  keywords = {compressed sensing;matrix algebra;probability;transient response;CS;simple manner;randomly generated sensing matrix;impulse response estimation problem;compressive sensing;Signal processing;Reconstruction algorithms;Linear systems;Estimation;Europe;Compressed sensing;Sparse matrices},\n  doi = {10.23919/EUSIPCO.2018.8553268},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438975.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we show that Compressive Sensing (CS) can be cast as an impulse response estimation problem. Using this interpretation we re-obtain some theoretical results of CS in a simple manner. Moreover, we prove that in the case of a randomly generated sensing matrix, reconstruction probability depends on the kurtosis of the distribution used for its generation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Iterative Weighted Least Squares Frequency Estimation for Harmonic Sinusoidal Signal in Power Systems.\n \n \n \n \n\n\n \n Sun, J.; Aboutanios, E.; and Smith, D. B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 176-180, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"IterativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553269,\n  author = {J. Sun and E. Aboutanios and D. B. Smith},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Iterative Weighted Least Squares Frequency Estimation for Harmonic Sinusoidal Signal in Power Systems},\n  year = {2018},\n  pages = {176-180},\n  abstract = {In this paper, a two-level iterative weighted least squares (TIWLS) method is proposed to estimate the voltage frequency in a balanced three-phase (3PH) power system with harmonic distortion. The novel TIWLS estimator exploits the weighted least squares (WLS) technique to reuse the discarded information of the previous harmonic Aboutanios and Mulgrew (HAM) algorithm. Consequently, the TIWLS method can reduce the main estimation error in the HAM estimator caused by the maximum bin search. The entire two-step estimation scheme has the same order complexity as the fast Fourier transform (FFT) algorithm, which is computationally efficient. Simulation results are presented to test the TIWLS algorithm, demonstrating that the new TIWLS estimator always outperforms the HAM method with less oscillation.},\n  keywords = {discrete Fourier transforms;fast Fourier transforms;frequency estimation;iterative methods;least squares approximations;parameter estimation;signal processing;iterative weighted least squares frequency estimation;harmonic sinusoidal signal;power systems;weighted least squares method;voltage frequency;three-phase power system;3PH;harmonic distortion;weighted least squares technique;previous harmonic Aboutanios;Mulgrew algorithm;TIWLS method;main estimation error;HAM estimator;two-step estimation scheme;TIWLS algorithm;HAM method;Harmonic analysis;Frequency estimation;Power system harmonics;Signal processing algorithms;Estimation;Signal to noise ratio;Fundamental frequency estimation;harmonic distortion;weighted least squares;Fourier interpolation;three-phase power systems},\n  doi = {10.23919/EUSIPCO.2018.8553269},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434819.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, a two-level iterative weighted least squares (TIWLS) method is proposed to estimate the voltage frequency in a balanced three-phase (3PH) power system with harmonic distortion. The novel TIWLS estimator exploits the weighted least squares (WLS) technique to reuse the discarded information of the previous harmonic Aboutanios and Mulgrew (HAM) algorithm. Consequently, the TIWLS method can reduce the main estimation error in the HAM estimator caused by the maximum bin search. The entire two-step estimation scheme has the same order complexity as the fast Fourier transform (FFT) algorithm, which is computationally efficient. Simulation results are presented to test the TIWLS algorithm, demonstrating that the new TIWLS estimator always outperforms the HAM method with less oscillation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Speaker Inconsistency Detection in Tampered Video.\n \n \n \n \n\n\n \n Korshunov, P.; and Marcel, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2375-2379, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SpeakerPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553270,\n  author = {P. Korshunov and S. Marcel},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Speaker Inconsistency Detection in Tampered Video},\n  year = {2018},\n  pages = {2375-2379},\n  abstract = {With the increasing amount of video being consumed by people daily, there is a danger of the rise in maliciously modified video content (i.e., `fake news') that could be used to damage innocent people or to impose a certain agenda, e.g., meddle in elections. In this paper, we consider audio manipulations in video of a person speaking to the camera. Such manipulation is easy to perform, for instance, one can just replace a part of audio, while it can dramatically change the message and the meaning of the video. With the goal to develop an automated system that can detect these audio-visual speaker inconsistencies, we consider several approaches proposed for lip-syncing and dubbing detection, based on convolutional and recurrent networks and compare them with systems that are based on more traditional classifiers. We evaluated these methods on publicly available databases VidTIMIT, AMI, and GRID, for which we generated sets of tampered data.},\n  keywords = {audio-visual systems;speaker recognition;video signal processing;tampered data;recurrent networks;convolutional networks;dubbing detection;audio-visual speaker inconsistencies;automated system;audio manipulations;innocent people;fake news;maliciously modified video content;tampered video;speaker inconsistency detection;Databases;Feature extraction;Mouth;Support vector machines;Face;Visualization;Principal component analysis},\n  doi = {10.23919/EUSIPCO.2018.8553270},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439304.pdf},\n}\n\n
\n
\n\n\n
\n With the increasing amount of video being consumed by people daily, there is a danger of the rise in maliciously modified video content (i.e., `fake news') that could be used to damage innocent people or to impose a certain agenda, e.g., meddle in elections. In this paper, we consider audio manipulations in video of a person speaking to the camera. Such manipulation is easy to perform, for instance, one can just replace a part of audio, while it can dramatically change the message and the meaning of the video. With the goal to develop an automated system that can detect these audio-visual speaker inconsistencies, we consider several approaches proposed for lip-syncing and dubbing detection, based on convolutional and recurrent networks and compare them with systems that are based on more traditional classifiers. We evaluated these methods on publicly available databases VidTIMIT, AMI, and GRID, for which we generated sets of tampered data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Octonion Sparse Representation for Color and Multispectral Image Processing.\n \n \n \n \n\n\n \n Lazendić, S.; De Bie, H.; and Pižurica, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 608-612, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OctonionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553272,\n  author = {S. Lazendić and H. {De Bie} and A. Pižurica},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Octonion Sparse Representation for Color and Multispectral Image Processing},\n  year = {2018},\n  pages = {608-612},\n  abstract = {A recent trend in color image processing combines the quaternion algebra with dictionary learning methods. This paper aims to present a generalization of the quaternion dictionary learning method by using the octonion algebra. The octonion algebra combined with dictionary learning methods is well suited for representation of multispectral images with up to 7 color channels. As opposed to the classical dictionary learning techniques that treat multispectral images by concatenating spectral bands into a large monochrome image, we treat all the spectral bands simultaneously. Our approach leads to better preservation of color fidelity in true and false color images of the reconstructed multispectral image. To show the potential of the octonion based model, experiments are conducted for image reconstruction and denoising of color images as well as of extensively used Landsat 7 images.},\n  keywords = {image colour analysis;image denoising;image reconstruction;image representation;learning (artificial intelligence);image denoising;multispectral image representation;quaternion dictionary learning;Landsat 7 images;color channels;image reconstruction;octonion based model;reconstructed multispectral image;false color images;color fidelity;monochrome image;octonion algebra;quaternion algebra;color image processing;multispectral image processing;octonion sparse representation;Algebra;Image color analysis;Dictionaries;Machine learning;Matching pursuit algorithms;Quaternions;Color;Dictionary learning;Sparse representations;Octonions;Multispectral imaging;Landsat 7},\n  doi = {10.23919/EUSIPCO.2018.8553272},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437862.pdf},\n}\n\n
\n
\n\n\n
\n A recent trend in color image processing combines the quaternion algebra with dictionary learning methods. This paper aims to present a generalization of the quaternion dictionary learning method by using the octonion algebra. The octonion algebra combined with dictionary learning methods is well suited for representation of multispectral images with up to 7 color channels. As opposed to the classical dictionary learning techniques that treat multispectral images by concatenating spectral bands into a large monochrome image, we treat all the spectral bands simultaneously. Our approach leads to better preservation of color fidelity in true and false color images of the reconstructed multispectral image. To show the potential of the octonion based model, experiments are conducted for image reconstruction and denoising of color images as well as of extensively used Landsat 7 images.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Preconditioned Graph Diffusion LMS for Adaptive Graph Signal Processing.\n \n \n \n \n\n\n \n Hua, F.; Nassif, R.; Richard, C.; Wang, H.; and Sayed, A. H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 111-115, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n  \n \n 1 download\n \n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553273,\n  author = {F. Hua and R. Nassif and C. Richard and H. Wang and A. H. Sayed},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Preconditioned Graph Diffusion LMS for Adaptive Graph Signal Processing},\n  year = {2018},\n  pages = {111-115},\n  abstract = {Graph filters, defined as polynomial functions of a graph-shift operator (GSO), play a key role in signal processing over graphs. In this work, we are interested in the adaptive and distributed estimation of graph filter coefficients from streaming graph signals. To this end, diffusion LMS strategies can be employed. However, most popular GSOs such as those based on the graph Laplacian matrix or the adjacency matrix are not energy preserving. This may result in a large eigenvalue spread and a slow convergence of the graph diffusion LMS. To address this issue and improve the transient performance, we introduce a graph diffusion LMS-Newton algorithm. We also propose a computationally efficient preconditioned diffusion strategy and we study its performance.},\n  keywords = {eigenvalues and eigenfunctions;filtering theory;graph theory;least mean squares methods;matrix algebra;Newton method;signal processing;computationally efficient preconditioned diffusion strategy;preconditioned graph diffusion LMS;adaptive graph signal processing;graph filters;polynomial functions;graph-shift operator;adaptive distributed estimation;graph filter coefficients;graph signals;graph Laplacian matrix;adjacency matrix;graph diffusion LMS-Newton algorithm;Signal processing;Eigenvalues and eigenfunctions;Covariance matrices;Convergence;Europe;Laplace equations;Signal processing algorithms;Graph signal processing;graph filter;diffusion LMS;LMS-Newton;preconditioned LMS},\n  doi = {10.23919/EUSIPCO.2018.8553273},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437963.pdf},\n}\n\n
\n
\n\n\n
\n Graph filters, defined as polynomial functions of a graph-shift operator (GSO), play a key role in signal processing over graphs. In this work, we are interested in the adaptive and distributed estimation of graph filter coefficients from streaming graph signals. To this end, diffusion LMS strategies can be employed. However, most popular GSOs such as those based on the graph Laplacian matrix or the adjacency matrix are not energy preserving. This may result in a large eigenvalue spread and a slow convergence of the graph diffusion LMS. To address this issue and improve the transient performance, we introduce a graph diffusion LMS-Newton algorithm. We also propose a computationally efficient preconditioned diffusion strategy and we study its performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Parallel and Hybrid Soft-Thresholding Algorithms with Line Search for Sparse Nonlinear Regression.\n \n \n \n \n\n\n \n Yang, Y.; Pesavento, M.; Chatzinotas, S.; and Ottersten, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1587-1591, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ParallelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553274,\n  author = {Y. Yang and M. Pesavento and S. Chatzinotas and B. Ottersten},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Parallel and Hybrid Soft-Thresholding Algorithms with Line Search for Sparse Nonlinear Regression},\n  year = {2018},\n  pages = {1587-1591},\n  abstract = {In this paper, we propose a convergent iterative algorithm for nondifferentiable nonconvex nonlinear regression problems. The proposed parallel algorithm consists in optimizing a sequence of successively refined approximate functions. Compared with the popular iterative soft-thresholding algorithm commonly known as ISTA, which is the benchmark algorithm for such problems, it has two attractive features which lead to a notable reduction in the algorithm's complexity: the proposed approximate function does not have to be a global upper bound of the original function, and the stepsize can be efficiently computed by the line search scheme which is carried out over a properly constructed differentiable function. Furthermore, when the parallel algorithm cannot be fully parallelized due to memory/processor constraints, we propose a hybrid updating scheme that divides the whole set of variables into blocks which are updated sequentially. Since the stepsize is obtained by performing the line search along the coordinate of each block variable, the proposed hybrid algorithm converges faster than state-of-the-art hybrid algorithms based on constant stepsizes and/or decreasing stepsizes. Finally, the proposed algorithms are numerically tested.},\n  keywords = {approximation theory;computational complexity;convergence of numerical methods;iterative methods;optimisation;parallel algorithms;regression analysis;search problems;iterative soft-thresholding algorithm;differentiable function;hybrid algorithm;hybrid updating scheme;line search scheme;approximate function;parallel algorithm;nondifferentiable nonconvex nonlinear regression problems;convergent iterative algorithm;sparse nonlinear regression;soft-thresholding algorithms;Signal processing algorithms;Convergence;Approximation algorithms;Complexity theory;Search problems;Linear programming;Upper bound;Big Data;Block Coordinate Descent;Line Search;Linear Regression;Nonlinear Regression;Successive Convex Approximation},\n  doi = {10.23919/EUSIPCO.2018.8553274},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437101.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a convergent iterative algorithm for nondifferentiable nonconvex nonlinear regression problems. The proposed parallel algorithm consists in optimizing a sequence of successively refined approximate functions. Compared with the popular iterative soft-thresholding algorithm commonly known as ISTA, which is the benchmark algorithm for such problems, it has two attractive features which lead to a notable reduction in the algorithm's complexity: the proposed approximate function does not have to be a global upper bound of the original function, and the stepsize can be efficiently computed by the line search scheme which is carried out over a properly constructed differentiable function. Furthermore, when the parallel algorithm cannot be fully parallelized due to memory/processor constraints, we propose a hybrid updating scheme that divides the whole set of variables into blocks which are updated sequentially. Since the stepsize is obtained by performing the line search along the coordinate of each block variable, the proposed hybrid algorithm converges faster than state-of-the-art hybrid algorithms based on constant stepsizes and/or decreasing stepsizes. Finally, the proposed algorithms are numerically tested.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Design of Optimal Frequency-Selective FIR Filters Using a Memetic Algorithm.\n \n \n \n \n\n\n \n San-José-Revuelta, L. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1172-1176, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DesignPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553275,\n  author = {L. M. San-José-Revuelta},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Design of Optimal Frequency-Selective FIR Filters Using a Memetic Algorithm},\n  year = {2018},\n  pages = {1172-1176},\n  abstract = {Finite Impulse Response (FIR) digital filters are widely used due to their capabilities of low computational load, stability and linear phase. Most traditional design approaches cannot control the frequency response of the designed filter with enough accuracy. For this reason, we propose the use of a memetic algorithm along with a weighted fitness function, to estimate optimized filter coefficients that best approximate ideal specifications. Results have been compared to both traditional methods (mainly windowing and the Parks-McClellan algorithm) as well as to several bio-inspired techniques. Numerical results show that the proposed method achieves a better fit to filter specifications, a larger attenuation in the stop band and a narrower transition band, at the expense of slightly increasing the passband ripple (0.5 - 0.7 dB), the latter in about 68% of the cases.},\n  keywords = {band-pass filters;FIR filters;frequency response;optimal frequency-selective FIR filters;memetic algorithm;frequency response;weighted fitness function;Parks-McClellan algorithm;finite impulse response digital filters;optimized filter coefficients;bio-inspired techniques;passband ripple;Signal processing algorithms;Finite impulse response filters;Genetic algorithms;Frequency response;Memetics;Attenuation;Design methodology;FIR design;memetic algorithm;genetic algorithm;k-opt algorithm},\n  doi = {10.23919/EUSIPCO.2018.8553275},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438008.pdf},\n}\n\n
\n
\n\n\n
\n Finite Impulse Response (FIR) digital filters are widely used due to their capabilities of low computational load, stability and linear phase. Most traditional design approaches cannot control the frequency response of the designed filter with enough accuracy. For this reason, we propose the use of a memetic algorithm along with a weighted fitness function, to estimate optimized filter coefficients that best approximate ideal specifications. Results have been compared to both traditional methods (mainly windowing and the Parks-McClellan algorithm) as well as to several bio-inspired techniques. Numerical results show that the proposed method achieves a better fit to filter specifications, a larger attenuation in the stop band and a narrower transition band, at the expense of slightly increasing the passband ripple (0.5 - 0.7 dB), the latter in about 68% of the cases.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimal Estimation with Extended Battery Life in Wireless Sensor Networks.\n \n \n \n \n\n\n \n Yang, L.; Zhu, H.; Wang, H.; Kang, K.; and Qian, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 662-666, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OptimalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553276,\n  author = {L. Yang and H. Zhu and H. Wang and K. Kang and H. Qian},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Optimal Estimation with Extended Battery Life in Wireless Sensor Networks},\n  year = {2018},\n  pages = {662-666},\n  abstract = {Energy constraint is always a bottleneck in a distributed wireless sensor network (WSN). Online censoring is an effective approach to reduce the overall power consumption by only transmitting statistical informative data. However, individual sensors may still suffer from energy shortage due to frequent transmission of informative data or geographical long distance transmission. In this paper, we consider the parameter estimation problem in WSNs, where the goal is to minimize the estimation error while keeping the network lifetime long. A distributed censoring algorithm is developed, which allows sensor nodes to make autonomous decisions on whether to transmit the sampled data. We show that with the proposed algorithm, the network lifetime extends and approaches its theoretical limit, and the performance loss in terms of the estimation error is minimal. Simulation results validate its effectiveness.},\n  keywords = {estimation theory;parameter estimation;statistical analysis;telecommunication network reliability;wireless sensor networks;distributed wireless sensor network;parameter estimation problem;statistical informative data transmission;optimal estimation error minimization;distributed censoring algorithm;network lifetime;geographical long distance transmission;energy shortage;statistical informative data;power consumption;WSN;extended battery life;Wireless sensor networks;Signal processing algorithms;Signal processing;Batteries;Estimation error;Energy consumption;Wireless sensor networks;censoring;network lifetime},\n  doi = {10.23919/EUSIPCO.2018.8553276},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434025.pdf},\n}\n\n
\n
\n\n\n
\n Energy constraint is always a bottleneck in a distributed wireless sensor network (WSN). Online censoring is an effective approach to reduce the overall power consumption by only transmitting statistical informative data. However, individual sensors may still suffer from energy shortage due to frequent transmission of informative data or geographical long distance transmission. In this paper, we consider the parameter estimation problem in WSNs, where the goal is to minimize the estimation error while keeping the network lifetime long. A distributed censoring algorithm is developed, which allows sensor nodes to make autonomous decisions on whether to transmit the sampled data. We show that with the proposed algorithm, the network lifetime extends and approaches its theoretical limit, and the performance loss in terms of the estimation error is minimal. Simulation results validate its effectiveness.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Deep Convolutional Neural Network for Semantic Pixel-Wise Segmentation of Road and Pavement Surface Cracks.\n \n \n \n \n\n\n \n David Jenkins, M.; Carr, T. A.; Iglesias, M. I.; Buggy, T.; and Morison, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2120-2124, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553280,\n  author = {M. {David Jenkins} and T. A. Carr and M. I. Iglesias and T. Buggy and G. Morison},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Deep Convolutional Neural Network for Semantic Pixel-Wise Segmentation of Road and Pavement Surface Cracks},\n  year = {2018},\n  pages = {2120-2124},\n  abstract = {Deterioration of road and pavement surface conditions is an issue which directly affects the majority of the world today. The complex structure and textural similarities of surface cracks, as well as noise and image illumination variation, make automated detection a challenging task. In this paper, we propose a deep fully convolutional neural network to perform pixel-wise classification of surface cracks on road and pavement images. The network consists of an encoder layer which reduces the input image to a bank of lower level feature maps. This is followed by a corresponding decoder layer which maps the encoded features back to the resolution of the input data using the indices of the encoder pooling layers to perform efficient up-sampling. The network is finished with a classification layer to label individual pixels. Training time is minimal due to the small amount of training/validation data (80 training images and 20 validation images). This is important due to the lack of applicable public data available. Despite this lack of data, we are able to perform image segmentation (pixel-level classification) on a number of publicly available road crack datasets. The network was tested extensively and the results obtained indicate performance in direct competition with that of the current state-of-the-art methods.},\n  keywords = {feature extraction;feedforward neural nets;image classification;image coding;image segmentation;image texture;learning (artificial intelligence);object detection;roads;structural engineering computing;surface cracks;input image;lower level feature maps;corresponding decoder layer;encoded features;input data;encoder pooling layers;classification layer;individual pixels;training time;training/validation data;applicable public data;image segmentation;publicly available road crack datasets;deep convolutional neural network;pavement surface cracks;pavement surface conditions;complex structure;textural similarities;image illumination variation;deep fully convolutional neural network;pixel-wise classification;pavement images;encoder layer;validation images;training images;Image segmentation;Task analysis;Roads;Decoding;Training;Signal processing algorithms;Surface cracks},\n  doi = {10.23919/EUSIPCO.2018.8553280},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437180.pdf},\n}\n\n
\n
\n\n\n
\n Deterioration of road and pavement surface conditions is an issue which directly affects the majority of the world today. The complex structure and textural similarities of surface cracks, as well as noise and image illumination variation makes automated detection a challenging task. In this paper, we propose a deep fully convolutional neural network to perform pixel-wise classification of surface cracks on road and pavement images. The network consists of an encoder layer which reduces the input image to a bank of lower level feature maps. This is followed by a corresponding decoder layer which maps the encoded features back to the resolution of the input data using the indices of the encoder pooling layers to perform efficient up-sampling. The network is finished with a classification layer to label individual pixels. Training time is minimal due to the small amount of training/validation data (80 training images and 20 validation images). This is important due to the lack of applicable public data available. Despite this lack of data, we are able to perform image segmentation (pixel-level classification) on a number of publicly available road crack datasets. The network was tested extensively and the results obtained indicate performance in direct competition with that of the current state-of-the-art methods.\n
\n\n\n
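The index-based up-sampling described in the abstract above (the decoder re-uses the argmax positions recorded by the encoder's pooling layers) can be illustrated with a minimal NumPy sketch. The function names and the 2x2 window below are illustrative choices of mine, not taken from the paper:

```python
import numpy as np

def max_pool_with_indices(x, k=2):
    """k-by-k max pooling that also records the flat argmax index of each
    window, as encoder-decoder segmentation networks do to drive unpooling."""
    h, w = x.shape
    pooled = np.zeros((h // k, w // k))
    indices = np.zeros((h // k, w // k), dtype=int)  # flat index into x
    for i in range(h // k):
        for j in range(w // k):
            block = x[i * k:(i + 1) * k, j * k:(j + 1) * k]
            r, c = np.unravel_index(np.argmax(block), block.shape)
            pooled[i, j] = block[r, c]
            indices[i, j] = (i * k + r) * w + (j * k + c)
    return pooled, indices

def max_unpool(pooled, indices, out_shape):
    """Sparse up-sampling: place each pooled value back at the position
    recorded during pooling; every other output entry stays zero."""
    out = np.zeros(out_shape)
    np.put(out, indices.ravel(), pooled.ravel())
    return out
```

Restoring values at their recorded positions preserves spatial detail without learned interpolation, which is what makes this form of decoder up-sampling efficient.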
\n\n\n
\n \n\n \n \n \n \n \n \n Non-intrusive fingerprints extraction from hyperspectral imagery.\n \n \n \n \n\n\n \n Yan, L.; and Chen, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1432-1436, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Non-intrusivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553281,\n  author = {L. Yan and J. Chen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Non-intrusive fingerprints extraction from hyperspectral imagery},\n  year = {2018},\n  pages = {1432-1436},\n  abstract = {Fingerprint extraction plays an important role in criminal investigation and information security. Conventionally, latent fingerprints are not readily visible and imaging often requires intrusive methods. Hyperspectral imaging techniques make it possible to extract fingerprints in a non-intrusive manner; however, this requires well-designed image analysis algorithms. In this paper, we consider the problem of fingerprint extraction from hyperspectral images and propose a processing scheme. The proposed scheme extracts image textures by local total variation (LTV) and uses Histogram of Oriented Gradient (HOG) information to fuse these channels. Experimental results with a real image show the ability of the proposed method to extract fingerprints from complex backgrounds.},\n  keywords = {feature extraction;fingerprint identification;hyperspectral imaging;image texture;nonintrusive fingerprints extraction;hyperspectral imagery;fingerprint extraction;criminal investigation;information security;hyperspectral imaging techniques;image analysis algorithms;image textures;histogram of oriented gradient information;HOG information;complex backgrounds;Hyperspectral imaging;Data mining;Feature extraction;Dimensionality reduction;Imaging;Principal component analysis;Fingerprint extraction;hyperspectral images;local total variation;texture;histogram of oriented gradient},\n  doi = {10.23919/EUSIPCO.2018.8553281},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439039.pdf},\n}\n\n
\n
\n\n\n
\n Fingerprint extraction plays an important role in criminal investigation and information security. Conventionally, latent fingerprints are not readily visible and imaging often requires intrusive methods. Hyperspectral imaging techniques make it possible to extract fingerprints in a non-intrusive manner; however, this requires well-designed image analysis algorithms. In this paper, we consider the problem of fingerprint extraction from hyperspectral images and propose a processing scheme. The proposed scheme extracts image textures by local total variation (LTV) and uses Histogram of Oriented Gradient (HOG) information to fuse these channels. Experimental results with a real image show the ability of the proposed method to extract fingerprints from complex backgrounds.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Resource Allocation for QF VMIMO Receive Cooperation in Urban Traffic Hotspots.\n \n \n \n \n\n\n \n Rüegg, T.; and Wittneben, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1502-1506, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ResourcePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553283,\n  author = {T. Rüegg and A. Wittneben},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Resource Allocation for QF VMIMO Receive Cooperation in Urban Traffic Hotspots},\n  year = {2018},\n  pages = {1502-1506},\n  abstract = {User cooperation enabled traffic offloading has been shown to be a promising concept to serve a large number of mobile stations in the uplink of an urban traffic hotspot scenario. Multiple mobile stations form a virtual antenna array and jointly access the nearby WLAN access points, achieving large gains compared to non-cooperating schemes. In this paper we consider the corresponding downlink, where the non-cooperating WLAN access points individually transmit their signals to the mobile stations in the hotspot. In order to decode these signals, each mobile station then quantizes its received signal and broadcasts it to the other cooperating mobile stations. The channel access times allocated to the cooperating mobile stations in the broadcasting phase thereby strongly impact the quantization rates and eventually the system performance. Hence, these resources have to be assigned carefully, leading to a non-convex optimization problem. In this context, we propose an efficient resource allocation scheme which yields promising results at low computational complexity. Applying this scheme, we then evaluate the downlink of user cooperation enabled traffic offloading in an urban traffic hotspot scenario.},\n  keywords = {antenna arrays;cellular radio;computational complexity;concave programming;cooperative communication;MIMO communication;mobile radio;optimisation;resource allocation;telecommunication traffic;wireless channels;wireless LAN;urban traffic hotspots;user cooperation;traffic offloading;mobile station;urban traffic hotspot scenario;multiple mobile stations;virtual antenna array;nearby WLAN access points;received signal;cooperating mobile stations;channel access times;efficient resource allocation scheme;QF VMIMO receive cooperation;Resource management;Quantization (signal);Broadcasting;Downlink;Throughput;Relays;Decoding},\n  doi = {10.23919/EUSIPCO.2018.8553283},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436031.pdf},\n}\n\n
\n
\n\n\n
\n User cooperation enabled traffic offloading has been shown to be a promising concept to serve a large number of mobile stations in the uplink of an urban traffic hotspot scenario. Multiple mobile stations form a virtual antenna array and jointly access the nearby WLAN access points, achieving large gains compared to non-cooperating schemes. In this paper we consider the corresponding downlink, where the non-cooperating WLAN access points individually transmit their signals to the mobile stations in the hotspot. In order to decode these signals, each mobile station then quantizes its received signal and broadcasts it to the other cooperating mobile stations. The channel access times allocated to the cooperating mobile stations in the broadcasting phase thereby strongly impact the quantization rates and eventually the system performance. Hence, these resources have to be assigned carefully, leading to a non-convex optimization problem. In this context, we propose an efficient resource allocation scheme which yields promising results at low computational complexity. Applying this scheme, we then evaluate the downlink of user cooperation enabled traffic offloading in an urban traffic hotspot scenario.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n User Interaction in Mobile Biometrics.\n \n \n \n \n\n\n \n Corsetti, B.; Blanco-Gonzalo, R.; and Sanchez-Reillo, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 543-547, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"UserPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553284,\n  author = {B. Corsetti and R. Blanco-Gonzalo and R. Sanchez-Reillo},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {User Interaction in Mobile Biometrics},\n  year = {2018},\n  pages = {543-547},\n  abstract = {Current trends in smartphone authentication have brought new kinds of user interactions which may affect biometric recognition performance severely. This paper brings a snapshot of the current state of the art and validates the recent ISO/IEC 21472 user-biometric system interaction evaluation methodology. Our goal in this work is to evaluate the accessibility of an entrance control system by means of biometric recognition. By studying how the users interact with a system (especially developed for people with accessibility concerns), the final purpose is to derive improvements to future mobile applications in terms of accessibility and universality.},\n  keywords = {biometrics (access control);IEC standards;ISO standards;mobile computing;smart phones;mobile biometrics;smartphone authentication;biometric recognition performance;entrance control system;ISO/IEC 21472 user-biometric system interaction evaluation methodology;Fingerprint recognition;Performance evaluation;Authentication;Usability;Mobile handsets;Europe;Biometrics;Mobile Biometrics;Face recognition;Fingerprint recognition;User interaction;Accessibility},\n  doi = {10.23919/EUSIPCO.2018.8553284},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437371.pdf},\n}\n\n
\n
\n\n\n
\n Current trends in smartphone authentication have brought new kinds of user interactions which may affect biometric recognition performance severely. This paper brings a snapshot of the current state of the art and validates the recent ISO/IEC 21472 user-biometric system interaction evaluation methodology. Our goal in this work is to evaluate the accessibility of an entrance control system by means of biometric recognition. By studying how the users interact with a system (especially developed for people with accessibility concerns), the final purpose is to derive improvements to future mobile applications in terms of accessibility and universality.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust Detection and Estimation of Change-Points in a Time Series of Multivariate Images.\n \n \n \n \n\n\n \n Mian, A.; Ovarlez, J.; Ginolhac, G.; and Atto, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1097-1101, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553285,\n  author = {A. Mian and J. Ovarlez and G. Ginolhac and A. Atto},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Robust Detection and Estimation of Change-Points in a Time Series of Multivariate Images},\n  year = {2018},\n  pages = {1097-1101},\n  abstract = {In this paper, we study the problem of detecting and estimating change-points in a time series of multivariate images. We extend existing works to take into account the heterogeneity of the dataset on a spatial neighborhood. The classic complex Gaussian assumption of the data is replaced by a complex elliptically symmetric assumption. Robust statistics are then derived using the Generalized Likelihood Ratio Test (GLRT). These statistics are coupled to an estimation strategy for one or several changes. The performance of these robust statistics has been analyzed in simulation and compared to that obtained under the standard multivariate normal assumption. When the data is heterogeneous, the detection and estimation strategy yields better results with the new statistics.},\n  keywords = {covariance matrices;Gaussian distribution;image processing;maximum likelihood estimation;random processes;time series;robust detection;change-points;time series;multivariate images;heterogeneity;spatial neighborhood;classic complex Gaussian assumption;complex elliptically symmetric assumption;robust statistics;Generalized Likelihood Ratio Test;standard multivariate normal assumption;Estimation;Time series analysis;Synthetic aperture radar;Europe;Signal processing;Covariance matrices;Signal processing algorithms;Image Time Series;Robust Change Detection;Multivariate Images;Complex Elliptically Symmetric},\n  doi = {10.23919/EUSIPCO.2018.8553285},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437328.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we study the problem of detecting and estimating change-points in a time series of multivariate images. We extend existing works to take into account the heterogeneity of the dataset on a spatial neighborhood. The classic complex Gaussian assumption of the data is replaced by a complex elliptically symmetric assumption. Robust statistics are then derived using the Generalized Likelihood Ratio Test (GLRT). These statistics are coupled to an estimation strategy for one or several changes. The performance of these robust statistics has been analyzed in simulation and compared to that obtained under the standard multivariate normal assumption. When the data is heterogeneous, the detection and estimation strategy yields better results with the new statistics.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Convex-Combined Step-Size-Based Normalized Modified Filtered-x Least Mean Square Algorithm for Impulsive Active Noise Control Systems.\n \n \n \n \n\n\n \n Akhtar, M. T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2454-2458, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553286,\n  author = {M. T. Akhtar},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Convex-Combined Step-Size-Based Normalized Modified Filtered-x Least Mean Square Algorithm for Impulsive Active Noise Control Systems},\n  year = {2018},\n  pages = {2454-2458},\n  abstract = {The celebrated filtered-x least mean square (FxLMS) algorithm does not work well for active noise control (ANC) of impulsive sources. In previous attempts, the robustness of the FxLMS algorithm has been improved by thresholding the reference and/or error signals used in the ANC system. However, estimating these thresholds is not an easy task in most of the practical scenarios. The need for appropriate thresholds is avoided in a previously proposed improved normalized-step-size FxLMS (INSS-FxLMS) algorithm; however, there is a tradeoff between the convergence speed and steady-state performance, as a fixed step-size needs to be selected properly. In this paper, we propose a novel algorithm for impulsive ANC (IANC) systems. The proposed algorithm is based on the previously proposed INSS-FxLMS. The main idea is to employ a convex-combined step-size which automatically converges to a large value to improve the convergence speed during the transient state, and to a small value as the IANC system converges at the steady-state. Extensive simulation results are presented to demonstrate the effective performance of the proposed algorithm.},\n  keywords = {active noise control;adaptive filters;least mean squares methods;impulsive ANC systems;fixed step-size;convergence speed;normalized-step-size FxLMS algorithm;impulsive source;impulsive active noise control systems;convex-combined step-size-based normalized modified filtered-x least mean square algorithm;Signal processing algorithms;Convergence;Robustness;Steady-state;Europe;Signal processing;Control systems;adaptive algorithm;active noise control;impulsive noise;convex combination;variable step-size},\n  doi = {10.23919/EUSIPCO.2018.8553286},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435626.pdf},\n}\n\n
\n
\n\n\n
\n The celebrated filtered-x least mean square (FxLMS) algorithm does not work well for active noise control (ANC) of impulsive sources. In previous attempts, the robustness of the FxLMS algorithm has been improved by thresholding the reference and/or error signals used in the ANC system. However, estimating these thresholds is not an easy task in most of the practical scenarios. The need for appropriate thresholds is avoided in a previously proposed improved normalized-step-size FxLMS (INSS-FxLMS) algorithm; however, there is a tradeoff between the convergence speed and steady-state performance, as a fixed step-size needs to be selected properly. In this paper, we propose a novel algorithm for impulsive ANC (IANC) systems. The proposed algorithm is based on the previously proposed INSS-FxLMS. The main idea is to employ a convex-combined step-size which automatically converges to a large value to improve the convergence speed during the transient state, and to a small value as the IANC system converges at the steady-state. Extensive simulation results are presented to demonstrate the effective performance of the proposed algorithm.\n
\n\n\n
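For intuition, a convex combination of a fast and a slow step size can be sketched in a plain NLMS system-identification loop. This is a toy illustration with a mixing heuristic of my own, not the paper's INSS-FxLMS algorithm (which operates on filtered-x signals inside an ANC loop):

```python
import numpy as np

def convex_step_nlms(x, d, L=8, mu_fast=0.5, mu_slow=0.05):
    """Toy NLMS loop whose step size is the convex combination
    mu = lam*mu_fast + (1-lam)*mu_slow, with lam = sigmoid(a).
    The auxiliary variable a is driven down as the error power
    shrinks, so mu drifts from mu_fast (transient) toward mu_slow
    (near steady state). Heuristic sketch, not the paper's rule."""
    w = np.zeros(L)
    a = 10.0                      # start with lam ~ 1 (fast adaptation)
    errors = []
    for n in range(L, len(x)):
        u = x[n - L:n][::-1]      # most recent L input samples
        e = d[n] - w @ u
        lam = 1.0 / (1.0 + np.exp(-a))
        mu = lam * mu_fast + (1.0 - lam) * mu_slow
        w += mu * e * u / (u @ u + 1e-8)
        # crude convergence proxy: a small |e| pushes a (and lam) down
        a = np.clip(a - (1.0 / (1.0 + e * e) - 0.5), -10.0, 10.0)
        errors.append(e)
    return w, np.array(errors)
```

Because lam stays near one while the error is large and decays once it shrinks, the loop gets fast initial convergence without paying the steady-state penalty of a permanently large step size.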
\n\n\n
\n \n\n \n \n \n \n \n \n Joint Localization and Clock Offset Estimation via Time-Of-Arrival with Ranging Offset.\n \n \n \n \n\n\n \n Nevat, I.; Septier, F.; Avnit, K.; Peters, G. W.; and Clavier, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 672-676, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553287,\n  author = {I. Nevat and F. Septier and K. Avnit and G. W. Peters and L. Clavier},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Joint Localization and Clock Offset Estimation via Time-Of-Arrival with Ranging Offset},\n  year = {2018},\n  pages = {672-676},\n  abstract = {We develop a novel algorithm for Geo-Spatial location estimation for Internet of Things (IoT) networks by utilizing a One-Way Time-of-Arrival (OW-TOA) technology. Although very popular, OW-TOA based localization techniques are negatively affected by three phenomena: i) wireless connectivity between the target and the receiving nodes is not guaranteed (audibility), resulting in a likelihood surface which may have non-unique maxima; ii) clock offset imperfection which is a result of a fixed deviation from a reference clock; and iii) a ranging offset which introduces distance dependent bias to the OW-TOA measurements. We develop a new statistical framework which incorporates these aspects and then derive the joint localization and clock offset Maximum Likelihood Estimator to jointly estimate the location of the target and the clock offset. To solve the resulting non-convex optimization problem we propose to use the Cross-Entropy method.},\n  keywords = {convex programming;entropy;Internet of Things;location based services;maximum likelihood estimation;time-of-arrival estimation;nonconvex optimization problem;Internet of Things networks;clock offset estimation;clock offset Maximum Likelihood Estimator;wireless connectivity;OW-TOA based localization techniques;One-Way Time-of-Arrival technology;cross-entropy method;Geo-Spatial location estimation;Time-Of-Arrival;joint localization;OW-TOA measurements;distance dependent bias;ranging offset;reference clock;nonunique maxima;likelihood surface;receiving nodes;Distance measurement;Clocks;Maximum likelihood estimation;Optimization;Internet of Things;Stochastic processes},\n  doi = {10.23919/EUSIPCO.2018.8553287},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436296.pdf},\n}\n\n
\n
\n\n\n
\n We develop a novel algorithm for Geo-Spatial location estimation for Internet of Things (IoT) networks by utilizing a One-Way Time-of-Arrival (OW-TOA) technology. Although very popular, OW-TOA based localization techniques are negatively affected by three phenomena: i) wireless connectivity between the target and the receiving nodes is not guaranteed (audibility), resulting in a likelihood surface which may have non-unique maxima; ii) clock offset imperfection which is a result of a fixed deviation from a reference clock; and iii) a ranging offset which introduces distance dependent bias to the OW-TOA measurements. We develop a new statistical framework which incorporates these aspects and then derive the joint localization and clock offset Maximum Likelihood Estimator to jointly estimate the location of the target and the clock offset. To solve the resulting non-convex optimization problem we propose to use the Cross-Entropy method.\n
\n\n\n
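The Cross-Entropy method named at the end of the abstract above can be sketched in its generic Gaussian form: sample candidates, keep the elite subset with the lowest cost, refit the sampling distribution to the elites, and repeat until it concentrates. The parameter values and the demo cost function below are illustrative choices of mine, not the authors' estimator:

```python
import numpy as np

def cross_entropy_min(f, mean, std, n_samples=200, n_elite=20, iters=50, seed=0):
    """Generic cross-entropy minimization of a (possibly non-convex)
    scalar cost f over R^d, using an independent Gaussian proposal."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(mean, dtype=float)
    std = np.asarray(std, dtype=float)
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(n_samples, mean.size))
        costs = np.array([f(s) for s in samples])
        elite = samples[np.argsort(costs)[:n_elite]]          # lowest-cost subset
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6  # refit proposal
    return mean
```

Because the proposal keeps whole candidate points rather than local gradients, the method can escape the spurious maxima that an audibility-limited TOA likelihood surface produces, which is presumably why the authors favour it over gradient-based solvers.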
\n\n\n
\n \n\n \n \n \n \n \n \n Joint Optimization of Caching and Transport in Proactive Edge Cloud.\n \n \n \n \n\n\n \n Sardellitti, S.; Costanzo, F.; and Merluzzi, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 797-801, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553288,\n  author = {S. Sardellitti and F. Costanzo and M. Merluzzi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Joint Optimization of Caching and Transport in Proactive Edge Cloud},\n  year = {2018},\n  pages = {797-801},\n  abstract = {Our goal in this paper is to devise a strategy for finding the optimal trade-off between the transport and caching energy costs associated with the delivery of contents in information networks. The proposed strategy is proactive with respect to the users' requests, as contents are pre-fetched depending on the distribution of their (estimated) popularity. In particular, we propose a k-center dominating set strategy to find the optimal clustering and then locate the best places to store/replicate the most popular contents. Then we develop a dynamic, energy-efficient strategy that jointly optimizes caching and delivery costs within each cluster. Although the formulated problem is a binary problem, we will show that it can be solved for moderate size networks by using efficient solvers. The performance gains achieved by the proposed proactive strategy are then assessed by numerical results.},\n  keywords = {cache storage;cloud computing;optimisation;joint optimization;proactive edge cloud;caching energy costs;information networks;k-center dominating;optimal clustering;dynamic energy-efficient;jointly optimizes caching;delivery costs;moderate size networks;proactive strategy;Optimization;Routing;Energy consumption;Europe;Signal processing;Delays;Indexes;Proactive content delivery;in-network caching;energy efficiency;information centric networking},\n  doi = {10.23919/EUSIPCO.2018.8553288},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437836.pdf},\n}\n\n
\n
\n\n\n
\n Our goal in this paper is to devise a strategy for finding the optimal trade-off between the transport and caching energy costs associated with the delivery of contents in information networks. The proposed strategy is proactive with respect to the users' requests, as contents are pre-fetched depending on the distribution of their (estimated) popularity. In particular, we propose a k-center dominating set strategy to find the optimal clustering and then locate the best places to store/replicate the most popular contents. Then we develop a dynamic, energy-efficient strategy that jointly optimizes caching and delivery costs within each cluster. Although the formulated problem is a binary problem, we will show that it can be solved for moderate size networks by using efficient solvers. The performance gains achieved by the proposed proactive strategy are then assessed by numerical results.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sequential Polynomial QR Decomposition and Decoding of Frequency Selective MIMO Channels.\n \n \n \n \n\n\n \n Hassan, D.; Redif, S.; Lambotharan, S.; and Proudler, I. K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 460-464, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SequentialPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553289,\n  author = {D. Hassan and S. Redif and S. Lambotharan and I. K. Proudler},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sequential Polynomial QR Decomposition and Decoding of Frequency Selective MIMO Channels},\n  year = {2018},\n  pages = {460-464},\n  abstract = {Recently there has been a growing interest in the application of polynomial matrix decomposition methods to the problem of efficient decoding of multiple-input multiple-output (MIMO) communication channels. Essentially, this type of approach decouples the frequency selective MIMO channel into a number of independent, frequency selective, single-input single-output (SISO) sub-channels by way of a polynomial singular-value decomposition (PSVD) or polynomial QR decomposition (PQRD) with successive interference cancellation. In this paper, we investigate a new PQRD algorithm, namely SM-PQRD, which is based on the concept of recently developed sequential matrix diagonalization (SMD). We also propose a new variant of SM-PQRD, namely MESM-PQRD to minimize computational complexity. Simulation results show that the new PQRD converges faster than state of the art algorithms. The applicability of the proposed algorithm is demonstrated for a frequency selective MIMO channel equalization problem.},\n  keywords = {channel estimation;computational complexity;equalisers;interference suppression;matrix decomposition;MIMO communication;polynomial matrices;polynomials;decoding;sequential matrix diagonalization;sequential matrix diagonalization;polynomial singular-value decomposition;frequency selective MIMO channel equalization problem;MESM-PQRD;SM-PQRD;output communication channels;polynomial matrix decomposition methods;sequential polynomial QR decomposition;MIMO communication;Signal processing algorithms;Matrix decomposition;Covariance matrices;Europe;Receivers;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553289},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436314.pdf},\n}\n\n
\n
\n\n\n
\n Recently there has been a growing interest in the application of polynomial matrix decomposition methods to the problem of efficient decoding of multiple-input multiple-output (MIMO) communication channels. Essentially, this type of approach decouples the frequency selective MIMO channel into a number of independent, frequency selective, single-input single-output (SISO) sub-channels by way of a polynomial singular-value decomposition (PSVD) or polynomial QR decomposition (PQRD) with successive interference cancellation. In this paper, we investigate a new PQRD algorithm, namely SM-PQRD, which is based on the concept of recently developed sequential matrix diagonalization (SMD). We also propose a new variant of SM-PQRD, namely MESM-PQRD to minimize computational complexity. Simulation results show that the new PQRD converges faster than state of the art algorithms. The applicability of the proposed algorithm is demonstrated for a frequency selective MIMO channel equalization problem.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Compressive Sensing of Temporally Correlated Sources Using Isotropic Multivariate Stable Laws.\n \n \n \n \n\n\n \n Tzagkarakis, G.; Nolan, J. P.; and Tsakalides, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1710-1714, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CompressivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553290,\n  author = {G. Tzagkarakis and J. P. Nolan and P. Tsakalides},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Compressive Sensing of Temporally Correlated Sources Using Isotropic Multivariate Stable Laws},\n  year = {2018},\n  pages = {1710-1714},\n  abstract = {This paper addresses the problem of compressively sensing a set of temporally correlated sources, in order to achieve faithful sparse signal reconstruction from noisy multiple measurement vectors (MMV). To this end, a simple sensing mechanism is proposed, which does not require the restricted isometry property (RIP) to hold near the sparsity level, whilst it provides additional degrees of freedom to better capture and suppress the inherent sampling noise effects. In particular, a reduced set of MMVs is generated by projecting the source signals onto random vectors drawn from isotropic multivariate stable laws. Then, the correlated sparse signals are recovered from the random MMVs by means of a recently introduced sparse Bayesian learning algorithm. Experimental evaluations on synthetic data with varying number of sources, correlation values, and noise strengths, reveal the superiority of our proposed sensing mechanism, when compared against well-established RIP-based compressive sensing schemes.},\n  keywords = {Bayes methods;compressed sensing;learning (artificial intelligence);signal reconstruction;signal sampling;vectors;isotropic multivariate stable laws;correlated sparse signals;MMV;sparse Bayesian learning algorithm;compressive sensing;sparse signal reconstruction;multiple measurement vectors;Sensors;Noise measurement;Correlation;Sparse matrices;Europe;Dispersion;Compressive sensing;correlated sources;isotropic multivariate stable laws;sparse Bayesian learning},\n  doi = {10.23919/EUSIPCO.2018.8553290},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436685.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of compressively sensing a set of temporally correlated sources, in order to achieve faithful sparse signal reconstruction from noisy multiple measurement vectors (MMV). To this end, a simple sensing mechanism is proposed, which does not require the restricted isometry property (RIP) to hold near the sparsity level, whilst it provides additional degrees of freedom to better capture and suppress the inherent sampling noise effects. In particular, a reduced set of MMVs is generated by projecting the source signals onto random vectors drawn from isotropic multivariate stable laws. Then, the correlated sparse signals are recovered from the random MMVs by means of a recently introduced sparse Bayesian learning algorithm. Experimental evaluations on synthetic data with varying numbers of sources, correlation values, and noise strengths reveal the superiority of the proposed sensing mechanism when compared against well-established RIP-based compressive sensing schemes.\n
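The measurement step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: isotropic alpha-stable vectors are drawn via the standard sub-Gaussian representation (a Gaussian vector scaled by the square root of a totally skewed positive (alpha/2)-stable variate); the scale constant and all dimensions are illustrative assumptions, and the sparse-Bayesian recovery stage is omitted.

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(0)

def isotropic_stable_vectors(alpha, dim, n, seed=0):
    """Draw n isotropic alpha-stable vectors in R^dim via the sub-Gaussian
    representation X = sqrt(A) * G, where A is a totally skewed positive
    (alpha/2)-stable variate and G is a standard Gaussian vector."""
    # Scale chosen per the usual sub-Gaussian construction (assumption).
    scale = np.cos(np.pi * alpha / 4.0) ** (2.0 / alpha)
    A = levy_stable.rvs(alpha / 2.0, 1.0, scale=scale, size=n, random_state=seed)
    G = np.random.default_rng(seed).standard_normal((n, dim))
    return np.sqrt(A)[:, None] * G

# Project N-dimensional sparse sources onto m < N random stable vectors,
# producing a reduced set of measurement vectors (the MMVs).
N, m, L = 64, 16, 5                       # ambient dim, measurements, snapshots
Phi = isotropic_stable_vectors(alpha=1.2, dim=N, n=m)
X = np.zeros((N, L))
X[rng.choice(N, 4, replace=False), :] = rng.standard_normal((4, L))  # 4-sparse sources
Y = Phi @ X                               # noisy-free measurement matrix
```

In the paper these measurements would then be passed to a sparse Bayesian learning solver; the code stops at the sensing step.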
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Speech Enhancement Using Kalman Filtering in the Logarithmic Bark Power Spectral Domain.\n \n \n \n \n\n\n \n Dionelis, N.; and Brookes, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1642-1646, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SpeechPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553291,\n  author = {N. Dionelis and M. Brookes},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Speech Enhancement Using Kalman Filtering in the Logarithmic Bark Power Spectral Domain},\n  year = {2018},\n  pages = {1642-1646},\n  abstract = {We present a phase-sensitive speech enhancement algorithm based on a Kalman filter estimator that tracks speech and noise in the logarithmic Bark power spectral domain. With modulation-domain Kalman filtering, the algorithm tracks the speech spectral log-power using perceptually-motivated Bark bands. By combining STFT bins into Bark bands, the number of frequency components is reduced. The Kalman filter prediction step separately models the inter-frame relations of the speech and noise spectral log-powers and the Kalman filter update step models the nonlinear relations between the speech and noise spectral log-powers using the phase factor in Bark bands, which follows a sub-Gaussian distribution. The posterior mean of the speech spectral log-power is used to create an enhanced speech spectrum for signal reconstruction. The algorithm is evaluated in terms of speech quality and computational complexity with different algorithm configurations compared on various noise types. The algorithm implemented in Bark bands is compared to algorithms implemented in STFT bins and experimental results show that tracking speech in the log Bark power spectral domain, taking into account the temporal dynamics of each subband envelope, is beneficial. 
Regarding the computational complexity, the percentage decrease in the real-time factor is 44% when using Bark bands compared to when using STFT bins.},\n  keywords = {Gaussian distribution;Kalman filters;signal reconstruction;speech enhancement;speech recognition;log Bark power spectral domain;STFT bins;speech enhancement;logarithmic Bark power spectral domain;phase-sensitive speech;Kalman filter estimator;modulation-domain Kalman filtering;speech spectral log-power;perceptually-motivated Bark bands;Kalman filter prediction step;noise spectral log-powers;Kalman filter update step models;enhanced speech spectrum;speech quality;algorithm configurations;Spectral analysis;Signal processing algorithms;Kalman filters;Noise measurement;Speech enhancement;Heuristic algorithms;Indexes;Speech enhancement;phase-sensitive observation model;phase factor;Bark bands;Kalman filter},\n  doi = {10.23919/EUSIPCO.2018.8553291},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570430138.pdf},\n}\n\n
\n
\n\n\n
\n We present a phase-sensitive speech enhancement algorithm based on a Kalman filter estimator that tracks speech and noise in the logarithmic Bark power spectral domain. With modulation-domain Kalman filtering, the algorithm tracks the speech spectral log-power using perceptually-motivated Bark bands. By combining STFT bins into Bark bands, the number of frequency components is reduced. The Kalman filter prediction step separately models the inter-frame relations of the speech and noise spectral log-powers and the Kalman filter update step models the nonlinear relations between the speech and noise spectral log-powers using the phase factor in Bark bands, which follows a sub-Gaussian distribution. The posterior mean of the speech spectral log-power is used to create an enhanced speech spectrum for signal reconstruction. The algorithm is evaluated in terms of speech quality and computational complexity with different algorithm configurations compared on various noise types. The algorithm implemented in Bark bands is compared to algorithms implemented in STFT bins and experimental results show that tracking speech in the log Bark power spectral domain, taking into account the temporal dynamics of each subband envelope, is beneficial. Regarding the computational complexity, the percentage decrease in the real-time factor is 44% when using Bark bands compared to when using STFT bins.\n
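As a toy illustration of the tracking idea (not the paper's phase-sensitive Bark-band model), a scalar Kalman filter with a random-walk state model can track a single subband's log-power from noisy observations; the process and observation variances `q` and `r` are illustrative assumptions.

```python
import numpy as np

def kalman_track_logpower(obs, q=0.05, r=0.5):
    """Minimal scalar Kalman filter: random-walk prediction on a subband
    log-power, with noisy observations obs. Returns the posterior means."""
    x, p = obs[0], 1.0
    out = []
    for y in obs:
        p = p + q                 # predict (random-walk state model)
        k = p / (p + r)           # Kalman gain
        x = x + k * (y - x)       # update with the new observation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

# Noisy log-power of a slowly varying envelope
noisy = (np.log(np.abs(np.sin(np.linspace(0, 3, 50))) + 0.1)
         + 0.3 * np.random.default_rng(1).standard_normal(50))
smoothed = kalman_track_logpower(noisy)
```

The full algorithm additionally models speech and noise jointly and uses a phase-factor observation model in Bark bands; this sketch only shows the scalar tracking core.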
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A New Asymmetric Link-Based Binary Regression Model to Detect Parkinson's Disease by Using Replicated Voice Recordings.\n \n \n \n \n\n\n \n Lizbeth, N.; Pérez, C. J.; Martín, J.; and Calle-Alonso, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1182-1186, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553292,\n  author = {N. Lizbeth and C. J. Pérez and J. Martín and F. Calle-Alonso},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A New Asymmetric Link-Based Binary Regression Model to Detect Parkinson's Disease by Using Replicated Voice Recordings},\n  year = {2018},\n  pages = {1182-1186},\n  abstract = {Addressing dependent data as independent has become usual for Parkinson's Disease (PD) detection by using features extracted from replicated voice recordings. A binary regression model with an Asymmetric Student t (AST) distribution as link function has been developed in a classification context by taking into account the within-subject dependence. This opens the possibility of handling situations in which the probabilities of the binary response approach 0 and 1 at different rates. The computational issue has been addressed by proposing and using a representation based on a mixture of normal distributions for the AST distribution. This allows to include latent variables to derive a Gibbs sampling algorithm that is used to generate samples from the posterior distribution. 
The applicability of the proposed approach has been tested with a simulation-based experiment and has been applied to a real dataset for PD detection.},\n  keywords = {Bayes methods;diseases;feature extraction;Markov processes;medical signal processing;normal distribution;regression analysis;sampling methods;speech processing;asymmetric link-based binary regression model;replicated voice recordings;dependent data;binary response approach;normal distributions;AST distribution;posterior distribution;simulation-based experiment;PD detection;Parkinsons disease;features extracted;asymmetric student t distribution;probabilities;Gibbs sampling algorithm;Feature extraction;Gaussian distribution;Bayes methods;Europe;Signal processing;Parkinson's disease;Data models;Asymmetric Student t;Bayesian binary regression;Gibbs sampling;Parkinson's disease;Voice features},\n  doi = {10.23919/EUSIPCO.2018.8553292},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435555.pdf},\n}\n\n
\n
\n\n\n
\n Treating dependent data as independent has become common practice in Parkinson's Disease (PD) detection using features extracted from replicated voice recordings. A binary regression model with an Asymmetric Student t (AST) distribution as link function has been developed in a classification context, taking the within-subject dependence into account. This opens the possibility of handling situations in which the probabilities of the binary response approach 0 and 1 at different rates. The computational issue has been addressed by proposing a representation of the AST distribution based on a mixture of normal distributions. This allows latent variables to be included, from which a Gibbs sampling algorithm is derived to generate samples from the posterior distribution. The proposed approach has been tested in a simulation-based experiment and applied to a real dataset for PD detection.\n
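The latent-variable Gibbs sampling strategy mentioned above is in the spirit of the classic Albert-Chib sampler. The sketch below implements the simpler symmetric probit case, not the paper's AST link (which additionally mixes over latent scale variables) nor its within-subject dependence structure; the prior variance `tau` and all data are illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

def probit_gibbs(X, y, n_iter=400, tau=10.0, seed=0):
    """Albert-Chib Gibbs sampler for Bayesian probit regression: alternate
    between latent utilities z (truncated normals) and coefficients beta."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Sigma = np.linalg.inv(X.T @ X + np.eye(p) / tau)   # posterior covariance
    chol = np.linalg.cholesky(Sigma)
    beta = np.zeros(p)
    draws = []
    for _ in range(n_iter):
        mu = X @ beta
        # Latent utilities: truncated normals whose sign matches the label.
        lo = np.where(y == 1, -mu, -np.inf)
        hi = np.where(y == 1, np.inf, -mu)
        z = mu + truncnorm.rvs(lo, hi, random_state=rng)
        beta = Sigma @ (X.T @ z) + chol @ rng.standard_normal(p)
        draws.append(beta.copy())
    return np.array(draws)

rng = np.random.default_rng(1)
Xd = rng.standard_normal((200, 2))
yd = (Xd @ np.array([2.0, -1.0]) + rng.standard_normal(200) > 0).astype(int)
draws = probit_gibbs(Xd, yd)
post_mean = draws[200:].mean(axis=0)    # discard burn-in
```

The AST-link model in the paper replaces the normal latent distribution with a normal mixture, which adds one more Gibbs step for the mixing variables.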
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Denoising Phonocardiogram signals with Non-negative Matrix Factorization informed by synchronous Electrocardiogram.\n \n \n \n \n\n\n \n Dia, N.; Fontecave-Jallon, J.; Gumery, P.; and Rivet, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 51-55, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DenoisingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553294,\n  author = {N. Dia and J. Fontecave-Jallon and P. Gumery and B. Rivet},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Denoising Phonocardiogram signals with Non-negative Matrix Factorization informed by synchronous Electrocardiogram},\n  year = {2018},\n  pages = {51-55},\n  abstract = {The phonocardiographic signals (PCG) are of interest for the analysis of the cardiac mechanical function. However, they are not always directly exploitable because of ambient interference (gastric noises, breathing noises, etc.). We aim to denoise PCG signals using another cardiac modality, the electrocardiographic (ECG) signal. In this paper, we investigate an informed non-negative matrix factorization to extract signal components out of the noisy PCG signal, considering synchronous ECG information. Our approach is applied and evaluated on a database consisting of real and artificially noisy PCG signals.},\n  keywords = {electrocardiography;matrix decomposition;medical signal processing;phonocardiography;pneumodynamics;breathing noises;phonocardiogram signal denoising;artificially noisy PCG signals;synchronous ECG information;signal components;electrocardiographic signal;cardiac modality;gastric noises;ambient interference;cardiac mechanical function;synchronous electrocardiogram;nonnegative matrix factorization informed;Phonocardiography;Electrocardiography;Noise measurement;Noise reduction;Delays;Signal processing algorithms;Spectrogram},\n  doi = {10.23919/EUSIPCO.2018.8553294},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437663.pdf},\n}\n\n
\n
\n\n\n
\n Phonocardiographic (PCG) signals are of interest for analyzing the mechanical function of the heart. However, they are not always directly exploitable because of ambient interference (gastric sounds, breathing sounds, etc.). We aim to denoise PCG signals using another cardiac modality, the electrocardiographic (ECG) signal. In this paper, we investigate an informed non-negative matrix factorization that extracts signal components from the noisy PCG signal using synchronous ECG information. Our approach is applied and evaluated on a database consisting of real and artificially noisy PCG signals.\n
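For readers unfamiliar with the building block, here is plain (uninformed) non-negative matrix factorization with Lee-Seung multiplicative updates on a toy magnitude spectrogram. The paper's ECG-informed variant additionally constrains the activations using synchronous ECG timing; that constraint, and all sizes below, are not reproduced here.

```python
import numpy as np

def nmf(V, k, iters=300, seed=0):
    """Multiplicative-update NMF (Euclidean loss): V ~= W @ H with W, H >= 0."""
    rng = np.random.default_rng(seed)
    F, T = V.shape
    W = rng.random((F, k)) + 1e-3
    H = rng.random((k, T)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update activations
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update spectral templates
    return W, H

# Toy "spectrogram": two non-negative spectral patterns with activations
rng = np.random.default_rng(3)
W_true = np.abs(rng.standard_normal((32, 2)))
H_true = np.abs(rng.standard_normal((2, 40)))
V = W_true @ H_true
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Denoising then amounts to reconstructing only the components attributed to the heart sounds, which is where the ECG information enters in the paper.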
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Consistent Spectral Methods for Dimensionality Reduction.\n \n \n \n \n\n\n \n Kharouf, M.; Rebafka, T.; and Sokolovska, N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 286-290, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ConsistentPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553295,\n  author = {M. Kharouf and T. Rebafka and N. Sokolovska},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Consistent Spectral Methods for Dimensionality Reduction},\n  year = {2018},\n  pages = {286-290},\n  abstract = {This paper addresses the problem of dimension reduction of noisy data, more precisely the challenge to determine the dimension of the subspace where the observed signal lives in. Based on results from random matrix theory, two novel estimators of the signal dimension are proposed in this paper. Consistency of the estimators is proved in the modern asymptotic regime, where the number of parameters grows proportionally with the sample size. Experimental results show that the novel estimators are robust to noise and, moreover, they give highly accurate results in settings where standard methods fail. We apply the novel dimension estimators to several life sciences benchmarks in the context of classification, and illustrate the improvements achieved by the new methods compared to the state-of-the-art approaches.},\n  keywords = {matrix algebra;random processes;signal classification;classification;signal dimension estimation;noisy data dimension reduction;subspace dimension;spectral methods;life sciences benchmarks;dimension estimators;random matrix theory;dimensionality reduction;Eigenvalues and eigenfunctions;Estimation;Covariance matrices;Limiting;Europe;Dimensionality reduction},\n  doi = {10.23919/EUSIPCO.2018.8553295},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437113.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of dimension reduction of noisy data, more precisely the challenge of determining the dimension of the subspace in which the observed signal lives. Based on results from random matrix theory, two novel estimators of the signal dimension are proposed. Consistency of the estimators is proved in the modern asymptotic regime, where the number of parameters grows proportionally with the sample size. Experimental results show that the novel estimators are robust to noise and give highly accurate results in settings where standard methods fail. We apply the novel dimension estimators to several life-sciences classification benchmarks and illustrate the improvements achieved by the new methods over state-of-the-art approaches.\n
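To make the problem concrete, here is the classical random-matrix baseline (not the paper's estimators): count sample-covariance eigenvalues above the Marchenko-Pastur bulk edge. The known noise variance, the 1.1 safety margin, and the data dimensions are illustrative assumptions.

```python
import numpy as np

def dimension_estimate(X, sigma2=1.0, margin=1.1):
    """Classical RMT baseline: count sample-covariance eigenvalues above the
    Marchenko-Pastur bulk edge sigma2*(1 + sqrt(p/n))^2, with a small safety
    margin for finite-sample fluctuations of the largest noise eigenvalue."""
    n, p = X.shape
    evals = np.linalg.eigvalsh((X.T @ X) / n)
    edge = sigma2 * (1.0 + np.sqrt(p / n)) ** 2
    return int(np.sum(evals > margin * edge))

rng = np.random.default_rng(0)
n, p, k = 400, 100, 3
U = np.linalg.qr(rng.standard_normal((p, k)))[0]       # k-dim signal subspace
X = rng.standard_normal((n, k)) * 6.0 @ U.T + rng.standard_normal((n, p))
```

With strong spikes this baseline recovers k; the paper's contribution is estimators that remain consistent without such favorable conditions.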
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Subspace-Orbit Randomized-Based Decomposition for Low-Rank Matrix Approximations.\n \n \n\n\n \n Kaloorazi, M. F.; and de Lamare, R. C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2618-2622, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553296,\n  author = {M. F. Kaloorazi and R. C. {de Lamare}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Subspace-Orbit Randomized-Based Decomposition for Low-Rank Matrix Approximations},\n  year = {2018},\n  pages = {2618-2622},\n  abstract = {In this paper we introduce a novel matrix decomposition algorithm termed Subspace-Orbit Randomized Singular Value Decomposition (SOR-SVD). It is computed by using random sampling techniques to give a low-rank approximation to an input matrix. Given a large and dense data matrix of size m x n, SOR-SVD requires a few passes through data to compute a rank-k approximation in O (mnk) floating-point operations. Furthermore, SOR-SVD can utilize advanced computer architectures and, as a result, it can be optimized for maximum efficiency. The SOR-SVD algorithm is simple, accurate, and provably correct, and outperforms previously reported techniques in terms of accuracy and efficiency.},\n  keywords = {approximation theory;computational complexity;floating point arithmetic;matrix decomposition;random processes;sampling methods;singular value decomposition;low-rank matrix approximations;random sampling techniques;low-rank approximation;input matrix;large data matrix;dense data matrix;rank-k approximation;advanced computer architectures;SOR-SVD algorithm;matrix decomposition algorithm;floating-point operations;subspace-orbit randomized singular value decomposition;Matrix decomposition;Approximation algorithms;Signal processing algorithms;Signal processing;Standards;Europe;Singular value decomposition;Matrix decomposition;low-rank approximation;randomized algorithms;numerical linear algebra},\n  doi = {10.23919/EUSIPCO.2018.8553296},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n In this paper, we introduce a novel matrix decomposition algorithm termed Subspace-Orbit Randomized Singular Value Decomposition (SOR-SVD). It is computed using random sampling techniques to give a low-rank approximation to an input matrix. Given a large and dense data matrix of size m × n, SOR-SVD requires only a few passes over the data to compute a rank-k approximation in O(mnk) floating-point operations. Furthermore, SOR-SVD can exploit advanced computer architectures and, as a result, can be optimized for maximum efficiency. The SOR-SVD algorithm is simple, accurate, and provably correct, and outperforms previously reported techniques in terms of accuracy and efficiency.\n
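For orientation, here is a minimal generic randomized rank-k approximation (randomized range finder plus a small SVD) from the same family of methods. This is a sketch of the general technique, not the authors' SOR-SVD; the oversampling amount and matrix sizes are illustrative.

```python
import numpy as np

def randomized_lowrank(A, k, oversample=10, seed=0):
    """Generic randomized rank-k approximation: sample the range of A with a
    Gaussian test matrix, orthonormalize, then take an SVD of the small
    projected matrix. Returns truncated factors U, s, Vt."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal basis for range(A @ Omega)
    B = Q.T @ A                               # small (k+oversample) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 12)) @ rng.standard_normal((12, 150))  # exact rank 12
U, s, Vt = randomized_lowrank(A, k=12)
rel_err = np.linalg.norm(A - U @ np.diag(s) @ Vt) / np.linalg.norm(A)
```

On an exactly rank-12 matrix the randomized basis captures the range, so the reconstruction error is at numerical-precision level; on general matrices the error depends on the spectrum decay.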
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Metagenomic composition analysis of sedimentary ancient DNA from the Isle of Wight.\n \n \n \n \n\n\n \n Pratas, D.; and Pinho, A. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1177-1181, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"MetagenomicPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553297,\n  author = {D. Pratas and A. J. Pinho},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Metagenomic composition analysis of sedimentary ancient DNA from the Isle of Wight},\n  year = {2018},\n  pages = {1177-1181},\n  abstract = {The DNA from several organisms is sequenced conjointly in metagenomics. This allows searching for exogenous microorganisms contained in the samples, with the goal of studying the evolution and co-evolution of host-pathogen, namely for building better diagnostics and therapeutics. However, the quantity and quality of the DNA present in the samples is very poor, pushing the responsibility of analysis improvements into the development of better computational methods. Here, we develop a new processing paradigm to infer the metagenomic composition analysis based on the relative compression of whole genome sequences. Using this method, we present the metagenomic composition analysis of a sedimentary ancient DNA sample, dated to 8,000 years before the present, from the Isle of Wight, United Kingdom. The results show several viruses and bacteria expressing high levels of similarity relative to the samples, namely a circular virus similar to the Avon-Heathcote estuary virus 14 sequenced in New Zealand.},\n  keywords = {biology computing;DNA;genomics;microorganisms;molecular biophysics;sediments;metagenomic composition analysis;sedimentary ancient DNA sample;Wight Isle;exogenous microorganisms;host-pathogen;computational methods;relative compression;genome sequences;bacteria;Avon-Heathcote estuary virus;Viruses (medical);Genomics;Bioinformatics;DNA;Databases;Microorganisms;Sequential analysis;metagenomics;ancient DNA;data compression},\n  doi = {10.23919/EUSIPCO.2018.8553297},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439333.pdf},\n}\n\n
\n
\n\n\n
\n In metagenomics, the DNA from several organisms is sequenced jointly. This allows searching for exogenous microorganisms contained in the samples, with the goal of studying the evolution and co-evolution of hosts and pathogens, notably to build better diagnostics and therapeutics. However, the quantity and quality of the DNA present in such samples are very poor, shifting the burden of analysis improvements onto the development of better computational methods. Here, we develop a new processing paradigm that infers the metagenomic composition based on the relative compression of whole genome sequences. Using this method, we present the metagenomic composition analysis of a sedimentary ancient DNA sample, dated to 8,000 years before the present, from the Isle of Wight, United Kingdom. The results show several viruses and bacteria with high levels of similarity to the sample, notably a circular virus similar to the Avon-Heathcote estuary virus 14 sequenced in New Zealand.\n
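Compression-based sequence similarity can be illustrated with the Normalized Compression Distance (NCD) using zlib. Note this is a related but distinct measure from the relative compression used in the paper, and the toy sequences below are fabricated for illustration only.

```python
import random
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance: small when x and y share structure
    that a compressor can exploit, near 1 when they look unrelated."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"ACGT" * 500                                   # highly repetitive sequence
b2 = b"ACGT" * 480 + b"TTTT" * 20                   # closely related variant
c = bytes(random.Random(0).choices(b"ACGT", k=2000))  # unrelated random sequence
```

A related sequence yields a markedly smaller distance than an unrelated one, which is the intuition behind compression-based composition analysis.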
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Time-Frequency-Bin-Wise Beamformer Selection and Masking for Speech Enhancement in Underdetermined Noisy Scenarios.\n \n \n \n \n\n\n \n Yamaoka, K.; Brendel, A.; Ono, N.; Makino, S.; Buerger, M.; Yamada, T.; and Kellermann, W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1582-1586, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Time-Frequency-Bin-WisePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553299,\n  author = {K. Yamaoka and A. Brendel and N. Ono and S. Makino and M. Buerger and T. Yamada and W. Kellermann},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Time-Frequency-Bin-Wise Beamformer Selection and Masking for Speech Enhancement in Underdetermined Noisy Scenarios},\n  year = {2018},\n  pages = {1582-1586},\n  abstract = {In this paper, we present a speech enhancement method using two microphones for underdetermined situations. A conventional speech enhancement method for underdetermined situations is time-frequency masking, where speech is enhanced by multiplying zero or one to each time-frequency component appropriately. Extending this method, we switch multiple preconstructed beamformers at each time-frequency bin, each of which suppresses a particular interferer. This method can suppress an interferer even when both the target and an interferer are simultaneously active at a given time-frequency bin. As a switching criterion, selection of minimum value of the outputs of the all beamformers at each time-frequency bin is investigated. Additionally, another method using direction of arrival estimation is also investigated. 
In experiments, we confirmed that the proposed methods were superior to conventional time-frequency masking and fixed beamforming in the performance of speech enhancement.},\n  keywords = {array signal processing;direction-of-arrival estimation;interference suppression;speech enhancement;time-frequency analysis;speech enhancement method;interference suppression;time-frequency-bin-wise beamformer masking;microphones;direction of arrival estimation;multiple preconstructed beamformers;underdetermined noisy scenarios;time-frequency-bin-wise beamformer selection;Time-frequency analysis;Direction-of-arrival estimation;Speech enhancement;Microphones;Estimation;Array signal processing;Distortion},\n  doi = {10.23919/EUSIPCO.2018.8553299},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439259.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we present a speech enhancement method using two microphones for underdetermined situations. A conventional speech enhancement method for underdetermined situations is time-frequency masking, where speech is enhanced by multiplying each time-frequency component by zero or one as appropriate. Extending this method, we switch among multiple preconstructed beamformers at each time-frequency bin, each of which suppresses a particular interferer. This method can suppress an interferer even when both the target and an interferer are simultaneously active in a given time-frequency bin. As a switching criterion, we investigate selecting the minimum-magnitude output among all beamformers at each time-frequency bin; an alternative criterion based on direction-of-arrival estimation is also investigated. In experiments, we confirmed that the proposed methods outperform conventional time-frequency masking and fixed beamforming in speech enhancement performance.\n
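The minimum-output switching criterion is easy to express: given the STFT outputs of several precomputed beamformers, pick, in every time-frequency bin, the output with the smallest magnitude. The sketch below shows only this selection step on random data; the beamformer design and DOA-based variant from the paper are not reproduced.

```python
import numpy as np

def min_power_select(beams):
    """beams: (B, F, T) complex STFT outputs of B beamformers. Per TF bin,
    keep the output with minimum magnitude, on the premise that the
    beamformer best cancelling that bin's dominant interferer wins."""
    idx = np.argmin(np.abs(beams), axis=0)                 # (F, T) selection map
    F, T = idx.shape
    f, t = np.meshgrid(np.arange(F), np.arange(T), indexing="ij")
    return beams[idx, f, t], idx

rng = np.random.default_rng(0)
beams = rng.standard_normal((3, 4, 5)) + 1j * rng.standard_normal((3, 4, 5))
out, idx = min_power_select(beams)
```

`out` can then be inverse-STFT'd to obtain the enhanced signal; `idx` records which beamformer was active in each bin.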
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Privacy-Preserving Identification via Layered Sparse Code Design: Distributed Servers and Multiple Access Authorization.\n \n \n \n \n\n\n \n Razeghi, B.; Voloshynovskiy, S.; Ferdowsi, S.; and Kostadinov, D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2578-2582, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Privacy-PreservingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553300,\n  author = {B. Razeghi and S. Voloshynovskiy and S. Ferdowsi and D. Kostadinov},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Privacy-Preserving Identification via Layered Sparse Code Design: Distributed Servers and Multiple Access Authorization},\n  year = {2018},\n  pages = {2578-2582},\n  abstract = {We propose a new computationally efficient privacy-preserving identification framework based on layered sparse coding. The key idea of the proposed framework is a sparsifying transform learning with ambiguization, which consists of a trained linear map, a component-wise nonlinearity and a privacy amplification. We introduce a practical identification framework, which consists of two phases: public and private identification. The public untrusted server provides the fast search service based on the sparse privacy protected codebook stored at its side. The private trusted server or the local client application performs the refined accurate similarity search using the results of the public search and the layered sparse codebooks stored at its side. The private search is performed in the decoded domain and also the accuracy of private search is chosen based on the authorization level of the client. 
The efficiency of the proposed method is in computational complexity of encoding, decoding, “encryption” (ambiguization) and “decryption” (purification) as well as storage complexity of the codebooks.},\n  keywords = {authorisation;client-server systems;computational complexity;cryptography;data privacy;decoding;sparse matrices;trusted computing;layered sparse code design;distributed servers;multiple access authorization;component-wise nonlinearity;privacy amplification;public identification;private identification;public untrusted server;fast search service;private trusted server;local client application;layered sparse codebooks;privacy-preserving identification framework;sparse privacy protected codebook;computational complexity;data privacy;transform learning;Servers;Transforms;Privacy;Channel coding;Europe;Signal processing;data privacy;sparse codebook;transform learning;successive refinement;ambiguization},\n  doi = {10.23919/EUSIPCO.2018.8553300},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437209.pdf},\n}\n\n
\n
\n\n\n
\n We propose a new computationally efficient privacy-preserving identification framework based on layered sparse coding. The key idea of the proposed framework is sparsifying transform learning with ambiguization, which consists of a trained linear map, a component-wise nonlinearity, and a privacy amplification step. We introduce a practical identification framework, which consists of two phases: public and private identification. The public untrusted server provides a fast search service based on the sparse privacy-protected codebook stored at its side. The private trusted server or the local client application performs a refined, accurate similarity search using the results of the public search and the layered sparse codebooks stored at its side. The private search is performed in the decoded domain, and its accuracy is chosen based on the authorization level of the client. The efficiency of the proposed method lies in the computational complexity of encoding, decoding, “encryption” (ambiguization), and “decryption” (purification), as well as in the storage complexity of the codebooks.\n
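The sparsify-then-ambiguize idea can be sketched in a few lines. This is a toy stand-in: a random matrix replaces the trained linear map, hard top-k thresholding replaces the paper's nonlinearity, and the noise strength is an illustrative assumption.

```python
import numpy as np

def sparse_code(x, W, k):
    """Keep only the k largest-magnitude transform coefficients of W @ x."""
    z = W @ x
    thr = np.sort(np.abs(z))[-k]
    return np.where(np.abs(z) >= thr, z, 0.0)

def ambiguize(z, strength=1.0, seed=0):
    """Privacy amplification: fill the zero (complement) support with random
    values so the public code hides the true sparsity pattern."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(z.shape) * strength
    return z + np.where(z == 0.0, noise, 0.0)

rng = np.random.default_rng(2)
W = rng.standard_normal((32, 32)) / np.sqrt(32)   # stand-in for the trained map
x = rng.standard_normal(32)
z = sparse_code(x, W, k=4)        # private sparse code
zp = ambiguize(z)                 # public, privacy-protected code
```

An authorized party who knows the true support can strip the ambiguization noise (purification), while the public server only ever sees the dense code `zp`.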
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fingerprint Minutiae Matching Through Sparse Cross-correlation.\n \n \n \n \n\n\n \n Hine, G. E.; Maiorana, E.; and Campisi, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2370-2374, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FingerprintPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553301,\n  author = {G. E. Hine and E. Maiorana and P. Campisi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Fingerprint Minutiae Matching Through Sparse Cross-correlation},\n  year = {2018},\n  pages = {2370-2374},\n  abstract = {In this paper, we introduce a novel minutiae-based matching algorithm for fingerprint recognition. The method is built on an elegant and straightforward mathematical formulation: the minutiae set is represented by a train of complex pulses and the matching algorithm is based on a simple crosscorrelation. We propose two different implementations. The first one exploits the intrinsic sparsity of the signal representing the minutiae set in order to construct an efficient implementation. The other relies on the Fourier transform to build a fixed-length representation, being thus suitable to be used in many biometric crypto-systems. The proposed method exhibits performance comparable with NIST's Bozorth3, that is a standard de facto for minutiae matching, but it shows to be more robust with cropped fingerprints.},\n  keywords = {biometrics (access control);fingerprint identification;Fourier transforms;image matching;matching algorithm;simple crosscorrelation;efficient implementation;fixed-length representation;method exhibits performance;cropped fingerprints;fingerprint minutiae matching;sparse cross-correlation;novel minutiae-based;fingerprint recognition;elegant formulation;straightforward mathematical formulation;complex pulses;Signal processing algorithms;Signal processing;Correlation;NIST;Frequency-domain analysis;Europe;Fingerprint recognition},\n  doi = {10.23919/EUSIPCO.2018.8553301},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439295.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we introduce a novel minutiae-based matching algorithm for fingerprint recognition. The method is built on an elegant and straightforward mathematical formulation: the minutiae set is represented by a train of complex pulses, and the matching algorithm is based on a simple cross-correlation. We propose two different implementations. The first exploits the intrinsic sparsity of the signal representing the minutiae set to obtain an efficient implementation. The second relies on the Fourier transform to build a fixed-length representation, making it suitable for many biometric crypto-systems. The proposed method exhibits performance comparable with NIST's Bozorth3, which is a de facto standard for minutiae matching, while proving more robust to cropped fingerprints.\n
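The pulse-train-plus-cross-correlation idea can be illustrated with real-valued unit pulses and a 2D FFT. This toy score ignores minutiae orientations (the paper's complex pulses) and uses circular correlation on a small grid, so all sizes and coordinates below are illustrative.

```python
import numpy as np

def minutiae_score(pts_a, pts_b, size=64):
    """Toy matching score: place unit pulses at minutiae coordinates and take
    the peak of the 2D circular cross-correlation, computed via FFT."""
    def pulse_image(pts):
        img = np.zeros((size, size))
        for x, y in pts:
            img[x % size, y % size] = 1.0
        return img
    A, B = pulse_image(pts_a), pulse_image(pts_b)
    corr = np.fft.ifft2(np.fft.fft2(A) * np.conj(np.fft.fft2(B))).real
    return corr.max()          # best alignment over all circular shifts

rng = np.random.default_rng(0)
base = rng.integers(0, 64, size=(12, 2))       # 12 minutiae of one print
shifted = base + np.array([3, 5])              # same print, translated
other = rng.integers(0, 64, size=(12, 2))      # unrelated print
```

A translated copy of the same minutiae set scores as high as the set against itself, while an unrelated set scores much lower, which is the matching principle the paper refines with sparsity and fixed-length Fourier representations.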
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Ear Presentation Attack Detection: Benchmarking Study with First Lenslet Light Field Database.\n \n \n \n \n\n\n \n Sepas-Moghaddam, A.; Pereira, F.; and Correia, P. L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2355-2359, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EarPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553302,\n  author = {A. Sepas-Moghaddam and F. Pereira and P. L. Correia},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Ear Presentation Attack Detection: Benchmarking Study with First Lenslet Light Field Database},\n  year = {2018},\n  pages = {2355-2359},\n  abstract = {Ear recognition has received broad attention from the biometric community and its emerging usage in multiple applications is raising new security concerns, with robustness against presentation attacks being a very active field of research. This paper addresses for the first time the ear presentation attack detection problem by developing an exhaustive benchmarking study on the performance of state-of-the-art light field and non-light field based ear presentation attack detection solutions. In this context, this paper also proposes an appropriate ear artefact database captured with a Lytro ILLUM lenslet light field camera, including both 2D and light field contents, using several types of presentation attack instruments, including laptop, tablet and two different mobile phones. Results show very promising performance for two recent light field based presentation attack detection solutions originally proposed for face presentation attack detection.},\n  keywords = {biometrics (access control);cameras;ear;face recognition;security of data;visual databases;lenslet light field database;ear recognition;biometric community;security concerns;ear presentation attack detection problem;exhaustive benchmarking study;nonlight field;ear presentation attack detection solutions;Lytro ILLUM lenslet light field camera;light field contents;presentation attack instruments;face presentation attack detection;ear artefact database;Ear;Databases;Two dimensional displays;Benchmark testing;Feature extraction;Cameras;Ear Presentation Attack Detection;Light Field Imaging;Artefact Database;Feature Extraction},\n  doi = {10.23919/EUSIPCO.2018.8553302},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570428081.pdf},\n}\n\n
\n
\n\n\n
\n Ear recognition has received broad attention from the biometric community and its emerging usage in multiple applications is raising new security concerns, with robustness against presentation attacks being a very active field of research. This paper addresses for the first time the ear presentation attack detection problem by developing an exhaustive benchmarking study on the performance of state-of-the-art light field and non-light field based ear presentation attack detection solutions. In this context, this paper also proposes an appropriate ear artefact database captured with a Lytro ILLUM lenslet light field camera, including both 2D and light field contents, using several types of presentation attack instruments, including laptop, tablet and two different mobile phones. Results show very promising performance for two recent light field based presentation attack detection solutions originally proposed for face presentation attack detection.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Channel Estimation in Millimeter Wave Hybrid MIMO Systems with Low Resolution ADCs.\n \n \n \n \n\n\n \n Kaushik, A.; Vlachos, E.; Thompson, J.; and Perelli, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1825-1829, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553303,\n  author = {A. Kaushik and E. Vlachos and J. Thompson and A. Perelli},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Channel Estimation in Millimeter Wave Hybrid MIMO Systems with Low Resolution ADCs},\n  year = {2018},\n  pages = {1825-1829},\n  abstract = {This paper proposes an efficient channel estimation algorithm for millimeter wave (mmWave) systems with a hybrid analog-digital multiple-input multiple-output (MIMO) architecture and few-bits quantization at the receiver. The sparsity of the mmWave MIMO channel is exploited for the problem formulation while limited resolution analog-to-digital converters (ADCs) are used in the receiver architecture. The estimation problem can be tackled using compressed sensing through the Stein's unbiased risk estimate (SURE) based parametric denoiser with the generalized approximate message passing (GAMP) framework. Expectation-maximization (EM) density estimation is used to avoid the need of specifying channel statistics resulting the EM-SURE-GAMP algorithm to estimate the channel. SURE, depending on the noisy observation, is minimized to adaptively optimize the denoiser within the parametric class at each iteration. The proposed solution is compared with the expectation-maximization generalized AMP (EM-GAMP) solution and the mean square error (MSE) performs better with respect to low and high signal-to-noise ratio (SNR) regimes, the number of ADC bits, and the training length. The use of the low resolution ADCs reduces power consumption and leads to an efficient mmWave MIMO system.},\n  keywords = {analogue-digital conversion;channel estimation;compressed sensing;expectation-maximisation algorithm;iterative methods;least mean squares methods;mean square error methods;message passing;MIMO communication;low resolution ADCs;millimeter wave hybrid MIMO systems;efficient channel estimation algorithm;few-bits quantization;mmWave MIMO channel;resolution analog-to-digital converters;receiver architecture;compressed sensing;Stein's unbiased risk estimate;parametric denoiser;generalized approximate message passing framework;expectation-maximization density estimation;channel statistics;EM-SURE-GAMP algorithm;parametric class;expectation-maximization generalized AMP solution;EM-GAMP;low signal-to-noise ratio;high signal-to-noise ratio;ADC bits;hybrid analog-digital multiple-input architecture;MIMO communication;Receivers;Channel estimation;Radio frequency;Transmitters;Sparse matrices;Signal resolution;channel estimation;low resolution analog-to-digital converter (ADC);compressed sensing;mmWave MIMO},\n  doi = {10.23919/EUSIPCO.2018.8553303},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435875.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes an efficient channel estimation algorithm for millimeter wave (mmWave) systems with a hybrid analog-digital multiple-input multiple-output (MIMO) architecture and few-bits quantization at the receiver. The sparsity of the mmWave MIMO channel is exploited for the problem formulation while limited resolution analog-to-digital converters (ADCs) are used in the receiver architecture. The estimation problem can be tackled using compressed sensing through the Stein's unbiased risk estimate (SURE) based parametric denoiser with the generalized approximate message passing (GAMP) framework. Expectation-maximization (EM) density estimation is used to avoid the need of specifying channel statistics resulting the EM-SURE-GAMP algorithm to estimate the channel. SURE, depending on the noisy observation, is minimized to adaptively optimize the denoiser within the parametric class at each iteration. The proposed solution is compared with the expectation-maximization generalized AMP (EM-GAMP) solution and the mean square error (MSE) performs better with respect to low and high signal-to-noise ratio (SNR) regimes, the number of ADC bits, and the training length. The use of the low resolution ADCs reduces power consumption and leads to an efficient mmWave MIMO system.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Two-Step Hybrid Multiuser Equalizer for Sub-Connected mmWave Massive MIMO SC-FDMA Systems.\n \n \n \n \n\n\n \n Magueta, R.; Castanheira, D.; Silva, A.; Dinis, R.; and Gameiro, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 822-826, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Two-StepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553304,\n  author = {R. Magueta and D. Castanheira and A. Silva and R. Dinis and A. Gameiro},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Two-Step Hybrid Multiuser Equalizer for Sub-Connected mmWave Massive MIMO SC-FDMA Systems},\n  year = {2018},\n  pages = {822-826},\n  abstract = {Most of the works consider hybrid fully connected architectures to overcome the constraints of millimeter wave massive MIMO systems. However, these schemes require a one-to-one connection between the RF chains and antennas. In this paper we propose a two-step broadband multiuser equalizer for hybrid sub-connected architectures, which is a more realistic approach for practical systems. The low-complexity user-terminals employ only analog precoders computed from the knowledge of the average angle of departure of each cluster, which is constant over the bandwidth. At the receiver side, we design a hybrid multi-user equalizer by minimizing the average bit-error-rate. A two-step approach is considered, where the analog part is constant over the iterations due to hardware constraints and the digital part is iterative. The analog part is also constant over all subcarriers while the digital part is computed on a per subcarrier basis. The proposed sub-connected based equalizer is compared with the fully connected counterpart. The results show that the performance of the proposed scheme is close to the fully connected one after just a few iterations performed at the digital domain.},\n  keywords = {communication complexity;equalisers;error statistics;frequency division multiple access;iterative methods;millimetre wave communication;MIMO communication;precoding;millimeter wave massive MIMO systems;antennas;low-complexity user-terminals;analog precoders;iteration method;subconnected mmwave massive MIMO SC-FDMA systems;hybrid fully connected architectures;two-step hybrid broadband multiuser equalizer;hybrid subconnected architectures;average bit-error-rate minimisation;Equalizers;Radio frequency;MIMO communication;Europe;Computer architecture;Broadband communication;hybrid multi-user equalizer;massive MIMO;millimeter-wave communications;hybrid sub-con-nected analog-digital architectures},\n  doi = {10.23919/EUSIPCO.2018.8553304},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434699.pdf},\n}\n\n
\n
\n\n\n
\n Most of the works consider hybrid fully connected architectures to overcome the constraints of millimeter wave massive MIMO systems. However, these schemes require a one-to-one connection between the RF chains and antennas. In this paper we propose a two-step broadband multiuser equalizer for hybrid sub-connected architectures, which is a more realistic approach for practical systems. The low-complexity user-terminals employ only analog precoders computed from the knowledge of the average angle of departure of each cluster, which is constant over the bandwidth. At the receiver side, we design a hybrid multi-user equalizer by minimizing the average bit-error-rate. A two-step approach is considered, where the analog part is constant over the iterations due to hardware constraints and the digital part is iterative. The analog part is also constant over all subcarriers while the digital part is computed on a per subcarrier basis. The proposed sub-connected based equalizer is compared with the fully connected counterpart. The results show that the performance of the proposed scheme is close to the fully connected one after just a few iterations performed at the digital domain.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adversarial Multimedia Forensics: Overview and Challenges Ahead.\n \n \n \n \n\n\n \n Barni, M.; Stamm, M. C.; and Tondi, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 962-966, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AdversarialPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553305,\n  author = {M. Barni and M. C. Stamm and B. Tondi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Adversarial Multimedia Forensics: Overview and Challenges Ahead},\n  year = {2018},\n  pages = {962-966},\n  abstract = {In recent decades, a significant research effort has been devoted to the development of forensic tools for retrieving information and detecting possible tampering of multimedia documents. A number of counter-forensic tools have been developed as well in order to impede a correct analysis. Such tools are often very effective due to the vulnerability of multimedia forensics tools, which are not designed to work in an adversarial environment. In this scenario, developing forensic techniques capable of granting good performance even in the presence of an adversary aiming at impeding the forensic analysis, is becoming a necessity. This turns out to be a difficult task, given the weakness of the traces the forensic analysis usually relies on. The goal of this paper is to provide an overview of the advances made over the last decade in the field of adversarial multimedia forensics. We first consider the view points of the forensic analyst and the attacker independently, then we review some of the attempts made to simultaneously take into account both perspectives by resorting to game theory. Eventually, we discuss the hottest open problems and outline possible paths for future research.},\n  keywords = {digital forensics;game theory;information retrieval;multimedia computing;adversarial multimedia forensics;multimedia documents;counter-forensic tools;multimedia forensics tools;adversarial environment;forensic techniques;forensic analysis;game theory;tampering detection;information retrieval;Forensics;Detectors;Tools;Signal processing algorithms;Transform coding;Distortion;Cameras},\n  doi = {10.23919/EUSIPCO.2018.8553305},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437839.pdf},\n}\n\n
\n
\n\n\n
\n In recent decades, a significant research effort has been devoted to the development of forensic tools for retrieving information and detecting possible tampering of multimedia documents. A number of counter-forensic tools have been developed as well in order to impede a correct analysis. Such tools are often very effective due to the vulnerability of multimedia forensics tools, which are not designed to work in an adversarial environment. In this scenario, developing forensic techniques capable of granting good performance even in the presence of an adversary aiming at impeding the forensic analysis, is becoming a necessity. This turns out to be a difficult task, given the weakness of the traces the forensic analysis usually relies on. The goal of this paper is to provide an overview of the advances made over the last decade in the field of adversarial multimedia forensics. We first consider the view points of the forensic analyst and the attacker independently, then we review some of the attempts made to simultaneously take into account both perspectives by resorting to game theory. Eventually, we discuss the hottest open problems and outline possible paths for future research.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n FastFCA: Joint Diagonalization Based Acceleration of Audio Source Separation Using a Full-Rank Spatial Covariance Model.\n \n \n \n \n\n\n \n Ito, N.; Araki, S.; and Nakatani, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1667-1671, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FastFCA:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553306,\n  author = {N. Ito and S. Araki and T. Nakatani},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {FastFCA: Joint Diagonalization Based Acceleration of Audio Source Separation Using a Full-Rank Spatial Covariance Model},\n  year = {2018},\n  pages = {1667-1671},\n  abstract = {Here we propose an accelerated version of one of the most promising methods for audio source separation proposed by Duong et al. [“Under-determined reverberant audio source separation using a full-rank spatial covariance model,” IEEE Trans. ASLP, vol. 18, no. 7, pp. 1830-1840, Sep. 2010]. We refer to this conventional method as full-rank spatial covariance analysis (FCA), and the proposed method as FastFCA. A major drawback of the conventional FCA is computational complexity: inversion and multiplication of covariance matrices are required at each time-frequency point and each EM iteration. To overcome this drawback, the proposed FastFCA diagonalizes the covariance matrices jointly based on the generalized eigenvalue problem. This leads to significantly reduced computational complexity of the FastFCA, because the complexity of matrix inversion and matrix multiplication for diagonal matrices is O(M) instead of O(M3) (M: matrix order). Furthermore, the FastFCA is rigorously equivalent to the FCA, and therefore the reduction in computational complexity is realized without degradation in source separation performance. An experiment showed that the FastFCA was over 250 times faster than the FCA with virtually no degradation in source separation performance. In this paper, we focus on the two-source case, while the case of more than two sources is treated in a separate paper.},\n  keywords = {audio signal processing;blind source separation;computational complexity;covariance analysis;covariance matrices;eigenvalues and eigenfunctions;reverberation;source separation;time-frequency analysis;full-rank spatial covariance analysis;FastFCA;conventional FCA;covariance matrices;reduced computational complexity;matrix inversion;matrix multiplication;diagonal matrices;source separation performance;two-source case;joint diagonalization based acceleration;accelerated version;Under-determined reverberant audio source separation;generalized eigenvalue problem;Covariance matrices;Signal processing algorithms;Source separation;Complexity theory;Time-frequency analysis;Eigenvalues and eigenfunctions;Microphones},\n  doi = {10.23919/EUSIPCO.2018.8553306},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437917.pdf},\n}\n\n
\n
\n\n\n
\n Here we propose an accelerated version of one of the most promising methods for audio source separation proposed by Duong et al. [“Under-determined reverberant audio source separation using a full-rank spatial covariance model,” IEEE Trans. ASLP, vol. 18, no. 7, pp. 1830-1840, Sep. 2010]. We refer to this conventional method as full-rank spatial covariance analysis (FCA), and the proposed method as FastFCA. A major drawback of the conventional FCA is computational complexity: inversion and multiplication of covariance matrices are required at each time-frequency point and each EM iteration. To overcome this drawback, the proposed FastFCA diagonalizes the covariance matrices jointly based on the generalized eigenvalue problem. This leads to significantly reduced computational complexity of the FastFCA, because the complexity of matrix inversion and matrix multiplication for diagonal matrices is O(M) instead of O(M3) (M: matrix order). Furthermore, the FastFCA is rigorously equivalent to the FCA, and therefore the reduction in computational complexity is realized without degradation in source separation performance. An experiment showed that the FastFCA was over 250 times faster than the FCA with virtually no degradation in source separation performance. In this paper, we focus on the two-source case, while the case of more than two sources is treated in a separate paper.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improved Direct-path Dominance Test for Speaker Localization in Reverberant Environments.\n \n \n \n \n\n\n \n Madmoni, L.; and Rafaely, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2424-2428, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553308,\n  author = {L. Madmoni and B. Rafaely},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Improved Direct-path Dominance Test for Speaker Localization in Reverberant Environments},\n  year = {2018},\n  pages = {2424-2428},\n  abstract = {Speaker localization in real environments is a fundamental task for many audio signal processing applications. Many localization methods fail when the environment imposes challenging conditions, such as reverberation. Recently, a method for direction of arrival (DOA) estimation of speakers in reverberant environments was developed, which utilizes spherical arrays. This method uses the direct-path dominance (DPD) test to select time-frequency bins that contain spatial information on the direct sound. In this work, it is shown that when the threshold of the DPD test is lowered to select more bins for the estimation process, it falsely identifies bins dominated by reverberant sound, reducing DOA estimation accuracy. In this paper, a new DPD test is developed, which evaluates the extent to which the measured plane-wave density can be represented by a single plane-wave. While being more computationally expensive than the original test, it is more robust to reverberation, and leads to an improved DOA estimation. The latter is demonstrated by simulations of a speaker in a reverberant room.},\n  keywords = {audio signal processing;direction-of-arrival estimation;reverberation;speaker recognition;audio signal processing applications;reverberation;reverberant environments;spherical arrays;direct-path dominance test;time-frequency bins;direct sound;DPD test;reverberant sound;DOA estimation accuracy;measured plane-wave density;improved DOA estimation;reverberant room;speaker localization;Direction-of-arrival estimation;Estimation;Reverberation;Microphones;Time-frequency analysis;Europe;Speaker localization;reverberation;spherical arrays},\n  doi = {10.23919/EUSIPCO.2018.8553308},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570431432.pdf},\n}\n\n
\n
\n\n\n
\n Speaker localization in real environments is a fundamental task for many audio signal processing applications. Many localization methods fail when the environment imposes challenging conditions, such as reverberation. Recently, a method for direction of arrival (DOA) estimation of speakers in reverberant environments was developed, which utilizes spherical arrays. This method uses the direct-path dominance (DPD) test to select time-frequency bins that contain spatial information on the direct sound. In this work, it is shown that when the threshold of the DPD test is lowered to select more bins for the estimation process, it falsely identifies bins dominated by reverberant sound, reducing DOA estimation accuracy. In this paper, a new DPD test is developed, which evaluates the extent to which the measured plane-wave density can be represented by a single plane-wave. While being more computationally expensive than the original test, it is more robust to reverberation, and leads to an improved DOA estimation. The latter is demonstrated by simulations of a speaker in a reverberant room.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Distant Noise Reduction Based on Multi-delay Noise Model Using Distributed Microphone Array.\n \n \n \n \n\n\n \n Koizumi, Y.; Saito, S.; Shimauchi, S.; Kobayashi, K.; and Harada, N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1-5, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DistantPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553309,\n  author = {Y. Koizumi and S. Saito and S. Shimauchi and K. Kobayashi and N. Harada},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Distant Noise Reduction Based on Multi-delay Noise Model Using Distributed Microphone Array},\n  year = {2018},\n  pages = {1-5},\n  abstract = {We propose a novel framework for reducing distant noise by using a distributed microphone array; reducing noise propagated from a far distance in real-time. Previous studies have revealed that a distributed microphone array with an instantaneous mixing assumption can effectively reduce noise when the target and noise sources are significantly far apart. However, in distant noise reduction, the target and noise sources are not usually instantaneously mixed because the reverberation-and propagation-time from the noise sources to a microphone is longer than the short-time Fourier transform (STFT) length. To express reverberation- and propagation-parameters, we introduce a multi-delay noise model that represents the reverberation-time as a convolution of the transfer-function-gains and the noise sources and the propagation-time as time-frame delays. These parameters are estimated on the basis of the maximum a posteriori (MAP) estimation. Experimental results show that the proposed method outperformed conventional methods in several performance measurements and could reduce distant noise propagated from more than 100 m away in a real-environment.},\n  keywords = {acoustic signal processing;Fourier transforms;microphone arrays;reverberation;multidelay noise model;distributed microphone array;noise sources;distant noise reduction;propagation-time;reverberation-time;time-frame delays;Noise reduction;Microphone arrays;Reverberation;Probabilistic logic;Time-frequency analysis;Europe;Distant noise reduction;distributed microphone array;MAP estimation;and transfer function},\n  doi = {10.23919/EUSIPCO.2018.8553309},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570429809.pdf},\n}\n\n
\n
\n\n\n
\n We propose a novel framework for reducing distant noise by using a distributed microphone array; reducing noise propagated from a far distance in real-time. Previous studies have revealed that a distributed microphone array with an instantaneous mixing assumption can effectively reduce noise when the target and noise sources are significantly far apart. However, in distant noise reduction, the target and noise sources are not usually instantaneously mixed because the reverberation-and propagation-time from the noise sources to a microphone is longer than the short-time Fourier transform (STFT) length. To express reverberation- and propagation-parameters, we introduce a multi-delay noise model that represents the reverberation-time as a convolution of the transfer-function-gains and the noise sources and the propagation-time as time-frame delays. These parameters are estimated on the basis of the maximum a posteriori (MAP) estimation. Experimental results show that the proposed method outperformed conventional methods in several performance measurements and could reduce distant noise propagated from more than 100 m away in a real-environment.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Convolving Gaussian Kernels for RNN-Based Beat Tracking.\n \n \n \n \n\n\n \n Cheng, T.; Fukayama, S.; and Goto, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1905-1909, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ConvolvingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553310,\n  author = {T. Cheng and S. Fukayama and M. Goto},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Convolving Gaussian Kernels for RNN-Based Beat Tracking},\n  year = {2018},\n  pages = {1905-1909},\n  abstract = {Because of an ability of modelling context information, Recurrent Neural Networks (RNNs) or bi-directional RNNs (BRNNs) have been used for beat tracking with good performance. However, there are two problems associated with RNN-based beat tracking. The first problem is the imbalanced data: usually only around 2% frames are labelled as `beat'. The second one is the disagreement on the precise positions of beats in human annotations or the delay of annotations caused by human tapping. In order to tackle these problems, we propose to convolve the original ground truth with a Gaussian kernel as the target output of the network for a more robust training. We conduct a comparison experiment using five different Gaussian kernels on five individual datasets. The results on the validation sets show that we can train a better or at least competitive model in a shorter time by using the convolved ground truth with a proper Gaussian kernel.},\n  keywords = {audio signal processing;Gaussian processes;recurrent neural nets;bi-directional RNN;recurrent neural networks;Gaussian kernel;convolved ground truth;human annotations;RNN-based beat tracking;Training;Kernel;Standards;Recurrent neural networks;Bidirectional control;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553310},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437485.pdf},\n}\n\n
\n
\n\n\n
\n Because of an ability of modelling context information, Recurrent Neural Networks (RNNs) or bi-directional RNNs (BRNNs) have been used for beat tracking with good performance. However, there are two problems associated with RNN-based beat tracking. The first problem is the imbalanced data: usually only around 2% frames are labelled as `beat'. The second one is the disagreement on the precise positions of beats in human annotations or the delay of annotations caused by human tapping. In order to tackle these problems, we propose to convolve the original ground truth with a Gaussian kernel as the target output of the network for a more robust training. We conduct a comparison experiment using five different Gaussian kernels on five individual datasets. The results on the validation sets show that we can train a better or at least competitive model in a shorter time by using the convolved ground truth with a proper Gaussian kernel.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On Cyclostationarity-Based Signal Detection.\n \n \n \n \n\n\n \n Napolitano, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 727-731, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553311,\n  author = {A. Napolitano},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {On Cyclostationarity-Based Signal Detection},\n  year = {2018},\n  pages = {727-731},\n  abstract = {A new cyclostationarity-based signal detector is proposed. It is based on (conjugate) cyclic autocorrelation measurements at pairs of cycle frequencies and lags for which the signal-of-interest exhibits cyclostationarity while the disturbance does not. No assumption is made on the noise distribution and/or its stationarity. A comparison is made with a previously proposed statistical test for presence of cyclostationarity. Monte Carlo simulations are carried out for performance analysis.},\n  keywords = {correlation methods;Monte Carlo methods;signal detection;statistical testing;cycle frequencies;cyclic autocorrelation measurements;cyclostationarity-based signal detector;cyclostationarity-based signal detection;Detectors;Correlation;Covariance matrices;Signal detection;Europe;Cognitive radio;Cyclostationarity;Detection},\n  doi = {10.23919/EUSIPCO.2018.8553311},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436675.pdf},\n}\n\n
\n
\n\n\n
\n A new cyclostationarity-based signal detector is proposed. It is based on (conjugate) cyclic autocorrelation measurements at pairs of cycle frequencies and lags for which the signal-of-interest exhibits cyclostationarity while the disturbance does not. No assumption is made on the noise distribution and/or its stationarity. A comparison is made with a previously proposed statistical test for presence of cyclostationarity. Monte Carlo simulations are carried out for performance analysis.\n
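The statistic this detector measures, the (conjugate) cyclic autocorrelation at a chosen cycle frequency and lag, can be estimated as below; this is a textbook-style sketch of the quantity named in the abstract, not the paper's full detector:

```python
import numpy as np

def cyclic_autocorrelation(x, alpha, lag):
    """Estimate R_x^alpha(lag) = (1/N) * sum_t x[t+lag] * conj(x[t])
    * exp(-2j*pi*alpha*t). A cyclostationary signal gives a non-zero
    value at its cycle frequencies, while stationary noise does not."""
    n = len(x) - lag
    t = np.arange(n)
    phasor = np.exp(-2j * np.pi * alpha * t)
    return np.sum(x[t + lag] * np.conj(x[t]) * phasor) / n
```

A detector along these lines compares such measurements, taken at (cycle frequency, lag) pairs where only the signal of interest is cyclostationary, against a threshold.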
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Object Detection on Compressive Measurements using Correlation Filters and Sparse Representation.\n \n \n \n \n\n\n \n Vargas, H.; Fonseca, Y.; and Arguello, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1960-1964, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ObjectPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553312,\n  author = {H. Vargas and Y. Fonseca and H. Arguello},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Object Detection on Compressive Measurements using Correlation Filters and Sparse Representation},\n  year = {2018},\n  pages = {1960-1964},\n  abstract = {Compressive cameras acquire measurements of a scene using random projections instead of sampling at the Nyquist rate. Several reconstruction algorithms have been proposed, taking advantage of prior knowledge about the scene. However, some inference tasks require determining only certain information about the scene without incurring the high computational cost of the reconstruction step. By avoiding this reconstruction step, this paper proposes a computationally efficient object detection approach, based on correlation filters and sparse representation, that operates directly on compressive measurements. We consider the problem of object detection in remote sensing scenes with multi-band images, where pixels are expensive to acquire. The correlation filters are designed using explicit knowledge of the target appearance and shape to provide a way to recognize objects from compressive measurements. Numerical experiments show the validity and efficiency of the proposed method in terms of peak-to-side lobe ratio using simulated data.},\n  keywords = {cameras;correlation methods;filtering theory;geophysical image processing;image reconstruction;image representation;image sensors;object detection;remote sensing;compressive measurements;correlation filters;sparse representation;compressive cameras acquire measurements;random projections;Nyquist rate;reconstruction algorithms;high computational reconstruction step;computation load;computationally efficient object detection approach;remote sensing scenes;peak-to-side lobe ratio;simulated data;Image coding;Correlation;Object detection;Two dimensional displays;Optimization;Image reconstruction;Mathematical model;Object detection;compressive measurements;correlation filters;sparse representation},\n  doi = {10.23919/EUSIPCO.2018.8553312},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439362.pdf},\n}\n\n
\n
\n\n\n
\n Compressive cameras acquire measurements of a scene using random projections instead of sampling at the Nyquist rate. Several reconstruction algorithms have been proposed, taking advantage of prior knowledge about the scene. However, some inference tasks require determining only certain information about the scene without incurring the high computational cost of the reconstruction step. By avoiding this reconstruction step, this paper proposes a computationally efficient object detection approach, based on correlation filters and sparse representation, that operates directly on compressive measurements. We consider the problem of object detection in remote sensing scenes with multi-band images, where pixels are expensive to acquire. The correlation filters are designed using explicit knowledge of the target appearance and shape to provide a way to recognize objects from compressive measurements. Numerical experiments show the validity and efficiency of the proposed method in terms of peak-to-side lobe ratio using simulated data.\n
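The evaluation metric named in this abstract, the peak-to-sidelobe ratio of a correlation output, can be computed as in the sketch below; the guard-band size and function name are assumptions for illustration:

```python
import numpy as np

def peak_to_sidelobe_ratio(corr, guard=2):
    """Peak-to-sidelobe ratio of a 2-D correlation surface: the peak
    value minus the sidelobe mean, divided by the sidelobe standard
    deviation, with a small guard band excluded around the peak."""
    r, c = np.unravel_index(np.argmax(corr), corr.shape)
    peak = corr[r, c]
    mask = np.ones(corr.shape, dtype=bool)
    mask[max(0, r - guard):r + guard + 1,
         max(0, c - guard):c + guard + 1] = False
    side = corr[mask]
    return (peak - side.mean()) / side.std()
```

A high ratio indicates a sharp, unambiguous correlation peak, i.e. a confident detection.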
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Randomly Sketched Sparse Subspace Clustering for Acoustic Scene Clustering.\n \n \n \n \n\n\n \n Li, S.; and Wang, W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2489-2493, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RandomlyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553314,\n  author = {S. Li and W. Wang},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Randomly Sketched Sparse Subspace Clustering for Acoustic Scene Clustering},\n  year = {2018},\n  pages = {2489-2493},\n  abstract = {Acoustic scene classification has drawn much research attention where labeled data are often used for model training. However, in practice, acoustic data are often unlabeled, weakly labeled, or incorrectly labeled. To classify unlabeled data, or detect and correct wrongly labeled data, we present an unsupervised clustering method based on sparse subspace clustering. The computational cost of the sparse subspace clustering algorithm becomes prohibitively high when dealing with high dimensional acoustic features. To address this problem, we introduce a random sketching method to reduce the feature dimensionality for the sparse subspace clustering algorithm. Experimental results reveal that this method can reduce the computational cost significantly with a limited loss in clustering accuracy.},\n  keywords = {acoustic signal processing;pattern clustering;random processes;acoustic scene classification;randomly sketched sparse subspace clustering algorithm;random sketching method;unsupervised clustering method;Clustering algorithms;Acoustics;Signal processing algorithms;Sparse matrices;Computational efficiency;Probability density function;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553314},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437405.pdf},\n}\n\n
\n
\n\n\n
\n Acoustic scene classification has drawn much research attention where labeled data are often used for model training. However, in practice, acoustic data are often unlabeled, weakly labeled, or incorrectly labeled. To classify unlabeled data, or detect and correct wrongly labeled data, we present an unsupervised clustering method based on sparse subspace clustering. The computational cost of the sparse subspace clustering algorithm becomes prohibitively high when dealing with high dimensional acoustic features. To address this problem, we introduce a random sketching method to reduce the feature dimensionality for the sparse subspace clustering algorithm. Experimental results reveal that this method can reduce the computational cost significantly with a limited loss in clustering accuracy.\n
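The dimensionality-reduction step this abstract calls random sketching can be illustrated with a Gaussian random projection in the Johnson-Lindenstrauss style; this is a stand-in sketch, not necessarily the paper's exact sketch construction:

```python
import numpy as np

def random_sketch(features, k, seed=0):
    """Project (n, d) feature vectors to k dimensions with a random
    Gaussian matrix scaled by 1/sqrt(k), approximately preserving
    pairwise geometry while shrinking the cost of downstream
    sparse subspace clustering."""
    rng = np.random.default_rng(seed)
    d = features.shape[1]
    sketch = rng.standard_normal((d, k)) / np.sqrt(k)
    return features @ sketch
```

Clustering then runs on the k-dimensional sketches instead of the original high-dimensional acoustic features.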
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Cooperative Renewable Energy Management with Distributed Generation.\n \n \n \n \n\n\n \n Leithon, J.; Werner, S.; and Koivunen, V.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 191-195, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CooperativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553316,\n  author = {J. Leithon and S. Werner and V. Koivunen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Cooperative Renewable Energy Management with Distributed Generation},\n  year = {2018},\n  pages = {191-195},\n  abstract = {We propose an energy cost minimization strategy for cooperating households equipped with renewable energy generation and storage capabilities. The participating households minimize their collective energy expenditure by sharing renewable energy through the grid. We assume location and time dependent electricity prices, as well as parametrized transfer fees. We then formulate an optimization problem to minimize the energy cost incurred by the participating households over any specified planning horizon. The proposed strategy serves as a performance benchmark for online energy management algorithms, and can be implemented in real time by incorporating adequate forecasting techniques. We solve the optimization problem through relaxation, and use simulations to illustrate the characteristics of the solution. These simulations show that energy sharing takes place when there are differences in the load/generation and price profiles across participants. We also show that no energy sharing takes place when the load is above the local generation at all times.},\n  keywords = {building management systems;distributed power generation;energy management systems;optimisation;power markets;pricing;renewable energy sources;cooperative renewable energy management;cooperating households;time dependent electricity prices;energy cost;online energy management algorithms;specified planning horizon;optimization problem;parametrized transfer fees;collective energy expenditure;participating households;storage capabilities;renewable energy generation;energy cost minimization strategy;distributed generation;renewable energy management;local generation;price profiles;load/generation;energy sharing;Electrostatic discharges;Optimization;Pricing;Renewable energy sources;Energy storage;Planning;Renewable energy optimization;storage management;non-convex optimization},\n  doi = {10.23919/EUSIPCO.2018.8553316},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437092.pdf},\n}\n\n
\n
\n\n\n
\n We propose an energy cost minimization strategy for cooperating households equipped with renewable energy generation and storage capabilities. The participating households minimize their collective energy expenditure by sharing renewable energy through the grid. We assume location and time dependent electricity prices, as well as parametrized transfer fees. We then formulate an optimization problem to minimize the energy cost incurred by the participating households over any specified planning horizon. The proposed strategy serves as a performance benchmark for online energy management algorithms, and can be implemented in real time by incorporating adequate forecasting techniques. We solve the optimization problem through relaxation, and use simulations to illustrate the characteristics of the solution. These simulations show that energy sharing takes place when there are differences in the load/generation and price profiles across participants. We also show that no energy sharing takes place when the load is above the local generation at all times.\n
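A drastically simplified, single-household version of this kind of cost minimization can be posed as a linear program; all numbers, variable names, and the battery model below are made up for illustration, and the paper's formulation additionally covers cooperation and transfer fees between households:

```python
import numpy as np
from scipy.optimize import linprog

# Buy grid energy g_t at time-varying prices, store surplus in a
# battery b_t (capacity 2, initially empty), and meet a fixed load.
price = [1.0, 5.0, 1.0]   # price per period
load = [1.0, 1.0, 1.0]    # demand per period
renew = [2.0, 0.0, 0.0]   # renewable generation per period

# Variables x = [g0, g1, g2, b0, b1, b2] with battery dynamics
# b_t = b_{t-1} + g_t + renew_t - load_t  (b_{-1} = 0).
c = np.array(price + [0.0, 0.0, 0.0])     # cost only on grid imports
A_eq = np.array([
    [-1, 0, 0, 1, 0, 0],
    [0, -1, 0, -1, 1, 0],
    [0, 0, -1, 0, -1, 1],
])
b_eq = np.array([renew[t] - load[t] for t in range(3)])
bounds = [(0, None)] * 3 + [(0, 2.0)] * 3  # g >= 0, 0 <= b <= 2
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
```

With these numbers the optimizer buys the single unit of missing energy at the cheap price and shifts it via the battery, which mirrors the benchmark role of the genie-aided strategy in the abstract.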
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic Detection of Image Morphing by Topology-based Analysis.\n \n \n \n \n\n\n \n Jassim, S.; and Asaad, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1007-1011, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553317,\n  author = {S. Jassim and A. Asaad},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic Detection of Image Morphing by Topology-based Analysis},\n  year = {2018},\n  pages = {1007-1011},\n  abstract = {Topological Data Analysis (TDA) is an emerging framework for the understanding of Bigdata. This paper investigates and develops a TDA approach to image forensics that exploits the sensitivity to image tampering of a variety of persistent homological invariants of simplicial complexes constructed for certain automatically computed image texture landmarks. For each image, we construct sequences of simplicial complexes, whose vertices are the selected set of landmarks, for a sequence of distance thresholds and use a variety of homological invariants (e.g. number of connected components) to distinguish natural face images from morphed ones. We shall demonstrate the richness of TDA in dealing with image tampering by testing the performance of this approach on a large benchmark image dataset of passport photos in detecting various known morphing attacks.},\n  keywords = {Big Data;data analysis;face recognition;image forensics;image morphing;image sequences;image texture;topology;Big Data;TDA;image texture;image forensics;Topological Data Analysis;image morphing;automatic detection;image tampering;Face;Image analysis;Detectors;Feature extraction;Shape;Tools;Europe;Image Morphing attacks;TDA;Simplicial Complexes;Local Binary Pattern;Persistent Homology},\n  doi = {10.23919/EUSIPCO.2018.8553317},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436097.pdf},\n}\n\n
\n
\n\n\n
\n Topological Data Analysis (TDA) is an emerging framework for understanding Big Data. This paper investigates and develops a TDA approach to image forensics that exploits the sensitivity to image tampering of a variety of persistent homological invariants of simplicial complexes constructed from automatically computed image texture landmarks. For each image, we construct sequences of simplicial complexes, whose vertices are the selected set of landmarks, for a sequence of distance thresholds, and use a variety of homological invariants (e.g., the number of connected components) to distinguish natural face images from morphed ones. We demonstrate the richness of TDA in dealing with image tampering by testing the performance of this approach in detecting various known morphing attacks on a large benchmark image dataset of passport photos.\n
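The simplest invariant the abstract mentions, the number of connected components at each distance threshold (0-dimensional persistent homology), can be computed from landmark points with single-linkage clustering; this sketch is an illustrative shortcut, not the authors' pipeline:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def components_per_threshold(points, thresholds):
    """Number of connected components of the Vietoris-Rips graph built
    on the landmark points at each distance threshold. Single-linkage
    merge heights coincide with the thresholds at which components
    join, so 0-dim homology follows directly from the dendrogram."""
    Z = linkage(points, method="single")
    return [int(fcluster(Z, t, criterion="distance").max())
            for t in thresholds]
```

The resulting component counts, tracked across thresholds, form the kind of signature used to separate natural from morphed face images.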
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Playlist-Based Tag Propagation for Improving Music Auto-Tagging.\n \n \n \n \n\n\n \n Lin, Y.; Chung, C.; and Chen, H. H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2270-2274, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Playlist-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553318,\n  author = {Y. Lin and C. Chung and H. H. Chen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Playlist-Based Tag Propagation for Improving Music Auto-Tagging},\n  year = {2018},\n  pages = {2270-2274},\n  abstract = {The performance of a music auto-tagging system highly relies on the quality of the training dataset. In particular, each training song should have sufficient relevant tags. Tag propagation is a technique that creates additional tags for a song by passing the tags from other similar songs. In this paper, we present a novel tag propagation approach that exploits the song coherence of a playlist to improve the training of an auto-tagging model. The main idea is to share the tags between neighboring songs in a playlist and to optimize the auto-tagging model through a multi-task objective function. We test the proposed playlist-based approach on a convolutional neural network for music auto-tagging and show that it can indeed provide a significant performance improvement.},\n  keywords = {audio signal processing;convolution;feedforward neural nets;learning (artificial intelligence);music;optimisation;auto-tagging model;playlist-based approach;playlist-based tag propagation;music auto-tagging system;novel tag propagation approach;song coherence;convolutional neural network;optimization;audio signals;Task analysis;Training;Linear programming;Convolutional neural networks;Signal processing algorithms;Europe;Signal processing;Music auto-tagging;tag propagation;multi-task learning;music playlist;convolutional neural network},\n  doi = {10.23919/EUSIPCO.2018.8553318},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437289.pdf},\n}\n\n
\n
\n\n\n
\n The performance of a music auto-tagging system highly relies on the quality of the training dataset. In particular, each training song should have sufficient relevant tags. Tag propagation is a technique that creates additional tags for a song by passing the tags from other similar songs. In this paper, we present a novel tag propagation approach that exploits the song coherence of a playlist to improve the training of an auto-tagging model. The main idea is to share the tags between neighboring songs in a playlist and to optimize the auto-tagging model through a multi-task objective function. We test the proposed playlist-based approach on a convolutional neural network for music auto-tagging and show that it can indeed provide a significant performance improvement.\n
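The neighbor-sharing idea can be sketched with plain dictionaries; the weighting scheme and function name are invented for illustration, and the paper itself realizes this through a CNN trained with a multi-task objective rather than explicit score propagation:

```python
def propagate_tags(playlist, song_tags, weight=0.5):
    """Share tags between neighboring songs in a playlist: each song
    keeps its own tags at full score and inherits its immediate
    neighbors' tags at a reduced score."""
    scores = {s: {t: 1.0 for t in song_tags.get(s, [])} for s in playlist}
    out = {}
    for i, s in enumerate(playlist):
        agg = dict(scores[s])
        for j in (i - 1, i + 1):          # left and right neighbors
            if 0 <= j < len(playlist):
                for t, v in scores[playlist[j]].items():
                    agg[t] = max(agg.get(t, 0.0), weight * v)
        out[s] = agg
    return out
```

The propagated soft tags densify the sparse training labels before the auto-tagger is trained.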
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Scale-Free Functional Connectivity Analysis from Source Reconstructed MEG Data.\n \n \n \n \n\n\n \n Rocca, D. L.; Ciuciu, P.; van Wassenhove, V.; Wendt, H.; Abry, P.; and Leonarduzzi, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1397-1401, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Scale-FreePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553319,\n  author = {D. L. Rocca and P. Ciuciu and V. {van Wassenhove} and H. Wendt and P. Abry and R. Leonarduzzi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Scale-Free Functional Connectivity Analysis from Source Reconstructed MEG Data},\n  year = {2018},\n  pages = {1397-1401},\n  abstract = {Scale-free dynamics, quantified as power law spectra from magnetoencephalographic (MEG) recordings of human brain activity, may play an important role in cognition and behavior. To date, their characterization remains limited to univariate analysis. Independently, functional connectivity analysis usually entails uncovering interactions between remote brain regions. In MEG, specific indices (e.g., imaginary coherence, ICOH, and weighted Phase Lag Index, wPLI) were developed to quantify phase synchronization between time series reflecting the activities of distant brain regions, and applied to oscillatory regimes (e.g., the α-band in (8, 12) Hz). No such indices have yet been developed for scale-free brain dynamics. Here, we propose new indices (w-ICOH and w-wPLI) based on complex wavelet analysis, dedicated to assessing functional connectivity in the scale-free regime. Using synthetic multivariate scale-free data, we illustrate the potential and efficiency of these new indices for assessing phase coupling in the scale-free dynamics range. From MEG data (36 individuals), we demonstrate that w-wPLI constitutes a highly sensitive index capturing significant and meaningful group-level changes of phase couplings in the scale-free (0.1, 1.5) Hz regime between rest and task conditions.},\n  keywords = {brain;cognition;electroencephalography;magnetoencephalography;medical signal processing;neurophysiology;time series;scale-free functional connectivity analysis;MEG data;power law spectra;magnetoencephalographic recordings;uncovering interactions;remote brain regions;Imaginary coherence ICOH;weighted Phase Lag Index wPLI;phase synchronization;time series reflecting activities;oscillatory regimes;scale-free brain dynamics;complex wavelet analysis;scale-free regime;synthetic multivariate scale-free data;phase coupling;scale-free dynamics range;human brain activity;Synchronization;Wavelet transforms;Coherence;Couplings;Brain;Task analysis},\n  doi = {10.23919/EUSIPCO.2018.8553319},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435481.pdf},\n}\n\n
\n
\n\n\n
\n Scale-free dynamics, quantified as power law spectra from magnetoencephalographic (MEG) recordings of human brain activity, may play an important role in cognition and behavior. To date, their characterization remains limited to univariate analysis. Independently, functional connectivity analysis usually entails uncovering interactions between remote brain regions. In MEG, specific indices (e.g., imaginary coherence, ICOH, and weighted Phase Lag Index, wPLI) were developed to quantify phase synchronization between time series reflecting the activities of distant brain regions, and applied to oscillatory regimes (e.g., the α-band in (8, 12) Hz). No such indices have yet been developed for scale-free brain dynamics. Here, we propose new indices (w-ICOH and w-wPLI) based on complex wavelet analysis, dedicated to assessing functional connectivity in the scale-free regime. Using synthetic multivariate scale-free data, we illustrate the potential and efficiency of these new indices for assessing phase coupling in the scale-free dynamics range. From MEG data (36 individuals), we demonstrate that w-wPLI constitutes a highly sensitive index capturing significant and meaningful group-level changes of phase couplings in the scale-free (0.1, 1.5) Hz regime between rest and task conditions.\n
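The classical wPLI that this abstract builds on can be estimated at one frequency bin from epoch-wise cross-spectra as below; this is the standard oscillatory-band definition (|E[Im Sxy]| / E[|Im Sxy|]), shown for context — the paper's w-wPLI replaces the Fourier cross-spectra with complex wavelet coefficients for the scale-free regime:

```python
import numpy as np

def wpli(epochs_x, epochs_y, bin_idx):
    """Weighted Phase Lag Index at a single frequency bin, estimated
    over epochs as |mean(Im Sxy)| / mean(|Im Sxy|), where Sxy is the
    per-epoch cross-spectrum. Values near 1 indicate a consistent,
    non-zero phase lag; tapered spectral estimates are omitted here."""
    im = []
    for x, y in zip(epochs_x, epochs_y):
        sxy = np.fft.rfft(x)[bin_idx] * np.conj(np.fft.rfft(y)[bin_idx])
        im.append(sxy.imag)
    im = np.asarray(im)
    return abs(im.mean()) / np.abs(im).mean()
```

Because only the imaginary part of the cross-spectrum enters, zero-lag (volume-conduction) coupling does not inflate the index.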
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Localization in Elevation with Non-Individual Head-Related Transfer Functions: Comparing Predictions of Two Auditory Models.\n \n \n \n \n\n\n \n Barumerli, R.; Geronazzo, M.; and Avanzini, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2539-2543, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"LocalizationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553320,\n  author = {R. Barumerli and M. Geronazzo and F. Avanzini},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Localization in Elevation with Non-Individual Head-Related Transfer Functions: Comparing Predictions of Two Auditory Models},\n  year = {2018},\n  pages = {2539-2543},\n  abstract = {This paper explores the limits of human localization of sound sources when listening with non-individual Head-Related Transfer Functions (HRTFs), by simulating performances of a localization task in the mid-sagittal plane. Computational simulations are performed with the CIPIC HRTF database using two different auditory models which mimic human hearing processing from a functional point of view. Our methodology investigates the opportunity of using virtual experiments instead of time- and resource- demanding psychoacoustic tests, which could also lead to potentially unreliable results. Four different perceptual metrics were implemented in order to identify relevant differences between auditory models in a selection problem of best-available non-individual HRTFs. Results report a high correlation between the two models denoting an overall similar trend, however, we discuss discrepancies in the predictions which should be carefully considered for the applicability of our methodology to the HRTF selection problem.},\n  keywords = {acoustic signal processing;hearing;medical signal processing;nonindividual head-related transfer functions;localization task;human hearing processing;nonindividual HRTFs;human localization;auditory models;sound sources;mid-sagittal plane;computational simulations;virtual experiments;perceptual metrics;Computational modeling;Measurement;Predictive models;Databases;Acoustics;Torso;Ear},\n  doi = {10.23919/EUSIPCO.2018.8553320},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439407.pdf},\n}\n\n
\n
\n\n\n
\n This paper explores the limits of human localization of sound sources when listening with non-individual Head-Related Transfer Functions (HRTFs), by simulating performance of a localization task in the mid-sagittal plane. Computational simulations are performed with the CIPIC HRTF database using two different auditory models that mimic human hearing processing from a functional point of view. Our methodology investigates the opportunity of using virtual experiments instead of time- and resource-demanding psychoacoustic tests, which could also lead to potentially unreliable results. Four different perceptual metrics were implemented in order to identify relevant differences between the auditory models in the problem of selecting the best available non-individual HRTFs. Results report a high correlation between the two models, denoting an overall similar trend; however, we discuss discrepancies in the predictions which should be carefully considered for the applicability of our methodology to the HRTF selection problem.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast and Accurate Gaussian Pyramid Construction by Extended Box Filtering.\n \n \n \n \n\n\n \n Konlambigue, S.; Pothin, J.; Honeine, P.; and Bensrhair, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 400-404, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FastPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553321,\n  author = {S. Konlambigue and J. Pothin and P. Honeine and A. Bensrhair},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Fast and Accurate Gaussian Pyramid Construction by Extended Box Filtering},\n  year = {2018},\n  pages = {400-404},\n  abstract = {Gaussian Pyramid (GP) is one of the most important representations in computer vision. However, the computation of G P is still challenging for real-time applications. In this paper, we propose a novel approach by investigating the extended box filters for an efficient Gaussian approximation. Taking advantages of the cascade configuration, tiny kernels and memory cache, we develop a fast and suitable algorithm for embedded systems, typically smartphones. Experiments with Android NDK show a 5× speed up compared to an optimized CPU-version of the Gaussian smoothing.},\n  keywords = {computer vision;embedded systems;Gaussian processes;image processing;operating system kernels;optimisation;smart phones;Gaussian Pyramid;GP;important representations;computer vision;real-time applications;extended box filters;efficient Gaussian approximation;cascade configuration;memory cache;suitable algorithm;Gaussian smoothing;Android NDK;Convolution;Kernel;Two dimensional displays;Signal processing algorithms;Europe;Computer vision;Gaussian pyramid;extended box filters;computer vision;SIFT},\n  doi = {10.23919/EUSIPCO.2018.8553321},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437148.pdf},\n}\n\n
\n
\n\n\n
\n The Gaussian Pyramid (GP) is one of the most important representations in computer vision. However, the computation of the GP is still challenging for real-time applications. In this paper, we propose a novel approach by investigating extended box filters for an efficient Gaussian approximation. Taking advantage of the cascade configuration, tiny kernels and the memory cache, we develop a fast algorithm suitable for embedded systems, typically smartphones. Experiments with the Android NDK show a 5× speed-up compared to an optimized CPU version of Gaussian smoothing.\n
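The classical idea behind box-filter Gaussian approximation, which the paper's *extended* box filters refine, is that repeated box filtering converges to a Gaussian by the central limit theorem. A minimal 1-D sketch (radius and pass count are illustrative, and this is only the classical starting point, not the paper's extended scheme):

```python
import numpy as np

def box_blur(signal, radius):
    """One pass of a normalized box filter (moving average) of
    width 2*radius + 1."""
    width = 2 * radius + 1
    kernel = np.ones(width) / width
    return np.convolve(signal, kernel, mode="same")

def approx_gaussian(signal, radius=2, passes=3):
    """Approximate Gaussian smoothing by cascading several box-filter
    passes; each pass is O(n) with a running sum, which is what makes
    the cascade attractive on embedded CPUs."""
    for _ in range(passes):
        signal = box_blur(signal, radius)
    return signal
```

Each extra pass makes the effective kernel rounder and closer to a true Gaussian, at the cost of one more linear-time sweep.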
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Municipal Infrastructure Anomaly and Defect Detection.\n \n \n \n \n\n\n \n Chacra, D. A.; and Zelek, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2125-2129, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"MunicipalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553322,\n  author = {D. A. Chacra and J. Zelek},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Municipal Infrastructure Anomaly and Defect Detection},\n  year = {2018},\n  pages = {2125-2129},\n  abstract = {Road quality assessment is a key task in a city's duties, as it allows a city to operate more efficiently. It means a city's usually limited budget can be allocated appropriately. However, this assessment still relies largely on manual annotation to generate the Overall Condition Index (OCI) of a pavement stretch. Manual surveying can be inaccurate, while, at the other end of the spectrum, a large portion of automatic surveying techniques rely on expensive equipment (such as laser line scanners). To solve this problem, we propose an automated infrastructure assessment method that takes street view images as input and uses a spectrum of computer vision and pattern recognition methods to generate its assessments. We first segment the pavement surface in the natural image. After this, we operate under the assumption that only the road pavement remains, and use a sliding window approach with Fisher Vector encoding to detect defects in that pavement; with labelled data, we would also be able to classify the defect type (longitudinal crack, transverse crack, alligator crack, pothole, etc.) at this stage. A weighted contour map within these distressed regions can be used to identify exact crack and defect locations. Combining this information allows us to determine the severities and locations of individual defects in the image. We use a manually annotated dataset of Google Street View images in Hamilton, Ontario, Canada. We show promising results, achieving a 93% F1-measure on crack region detection from perspective images.},\n  keywords = {computer vision;condition monitoring;cracks;feature extraction;image classification;image recognition;image segmentation;object detection;pattern recognition;road building;roads;structural engineering;city;budget;manual annotation;Condition Index;OCI;pavement stretch;manual surveying;automatic surveying techniques;expensive equipment;laser line scanners;key task;road quality assessment;defect detection;municipal infrastructure anomaly;perspective images;crack region detection;Google Street View images;manually annotated dataset;individual defects;defect locations;exact crack;alligator crack;transverse crack;longitudinal crack;defect type;Fisher Vector encoding;sliding window approach;road pavement;natural image;pavement surface;assessments;pattern recognition methods;computer vision;automated infrastructure assessment method;Roads;Urban areas;Image segmentation;Signal processing algorithms;Quality assessment;Google;Road Quality;Computer Vision;Machine Learning;Pattern Recognition},\n  doi = {10.23919/EUSIPCO.2018.8553322},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437793.pdf},\n}\n\n
\n
\n\n\n
\n Road quality assessment is a key task in a city's duties, as it allows a city to operate more efficiently. It means a city's usually limited budget can be allocated appropriately. However, this assessment still relies largely on manual annotation to generate the Overall Condition Index (OCI) of a pavement stretch. Manual surveying can be inaccurate, while, at the other end of the spectrum, a large portion of automatic surveying techniques rely on expensive equipment (such as laser line scanners). To solve this problem, we propose an automated infrastructure assessment method that takes street view images as input and uses a spectrum of computer vision and pattern recognition methods to generate its assessments. We first segment the pavement surface in the natural image. After this, we operate under the assumption that only the road pavement remains, and use a sliding window approach with Fisher Vector encoding to detect defects in that pavement; with labelled data, we would also be able to classify the defect type (longitudinal crack, transverse crack, alligator crack, pothole, etc.) at this stage. A weighted contour map within these distressed regions can be used to identify exact crack and defect locations. Combining this information allows us to determine the severities and locations of individual defects in the image. We use a manually annotated dataset of Google Street View images in Hamilton, Ontario, Canada. We show promising results, achieving a 93% F1-measure on crack region detection from perspective images.\n
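The sliding-window stage of such a pipeline can be sketched generically as below; the paper scores each patch with a Fisher Vector encoding, for which `score_fn` here is an arbitrary user-supplied stand-in:

```python
import numpy as np

def sliding_window_scores(image, window, step, score_fn):
    """Slide a (h, w) window over a 2-D image with the given step and
    return [((row, col), score), ...] for every patch, where score_fn
    maps a patch to a defect score (e.g. a classifier on a Fisher
    Vector encoding of the patch)."""
    H, W = image.shape
    h, w = window
    results = []
    for r in range(0, H - h + 1, step):
        for c in range(0, W - w + 1, step):
            patch = image[r:r + h, c:c + w]
            results.append(((r, c), score_fn(patch)))
    return results
```

High-scoring windows mark distressed regions, within which a contour map can then localize the individual cracks.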
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Renewable Energy Optimization with Centralized and Distributed Generation.\n \n \n \n \n\n\n \n Leithon, J.; Werner, S.; and Koivunen, V.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 181-185, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RenewablePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553323,\n  author = {J. Leithon and S. Werner and V. Koivunen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Renewable Energy Optimization with Centralized and Distributed Generation},\n  year = {2018},\n  pages = {181-185},\n  abstract = {We propose optimization strategies for cooperating households with renewable energy generation and storage facilities. We consider two configurations: 1) households with shared access to an energy farm, and 2) households with their own renewable energy generator and storage device. The participants in the second configuration are allowed to exchange energy through the grid. Assuming location and time dependent electricity prices, and parametrized transfer fees, we formulate two optimization problems to minimize the energy cost incurred by the participating households in each configuration. We determine the optimal energy management strategies by solving the corresponding mathematical problems through relaxation and discretization. The proposed energy management strategies are genie-aided, and hence, they can be used to benchmark and devise online algorithms based on forecasting techniques. 
Finally, numerical results are provided to compare the two configurations.},\n  keywords = {distributed power generation;energy management systems;optimisation;pricing;renewable energy sources;corresponding mathematical problems;renewable energy optimization;distributed generation;optimization strategies;renewable energy generation;storage facilities;energy farm;renewable energy generator;storage device;time dependent electricity prices;parametrized transfer fees;optimization problems;energy cost;participating households;optimal energy management strategies;forecasting techniques;online algorithms;discretization;centralized generation;Electrostatic discharges;Optimization;Renewable energy sources;Energy storage;Signal processing;Production;Renewable energy;optimization;cooperation},\n  doi = {10.23919/EUSIPCO.2018.8553323},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437075.pdf},\n}\n\n
\n
\n\n\n
\n We propose optimization strategies for cooperating households with renewable energy generation and storage facilities. We consider two configurations: 1) households with shared access to an energy farm, and 2) households with their own renewable energy generator and storage device. The participants in the second configuration are allowed to exchange energy through the grid. Assuming location and time dependent electricity prices, and parametrized transfer fees, we formulate two optimization problems to minimize the energy cost incurred by the participating households in each configuration. We determine the optimal energy management strategies by solving the corresponding mathematical problems through relaxation and discretization. The proposed energy management strategies are genie-aided, and hence, they can be used to benchmark and devise online algorithms based on forecasting techniques. Finally, numerical results are provided to compare the two configurations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust Subspace Clustering for Radar Detection.\n \n \n \n \n\n\n \n Breloy, A.; El Korso, M. N.; Panahi, A.; and Krim, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1602-1606, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553325,\n  author = {A. Breloy and M. N. {El Korso} and A. Panahi and H. Krim},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Robust Subspace Clustering for Radar Detection},\n  year = {2018},\n  pages = {1602-1606},\n  abstract = {Target detection embedded in a complex interference background such as jamming or strong clutter is an important problem in signal processing. Traditionally, statistical adaptive detection processes are built from a binary hypothesis test performed on a grid of steering vectors. This usually involves the estimation of the noise-plus-interference covariance matrix using i.i.d. samples assumed to be target-free. Moving away from this paradigm, we exploit the fact that the interference (clutter and/or jammers) lies in a union of low-dimensional subspaces. Hence, the matrix of concatenated samples can be modeled as a sum of low-rank matrices (union of subspaces containing interferences) plus a sparse matrix times a dictionary of steering-vectors (representing the targets' contribution). Recovering this factorization from the observation matrix allows us to build detection maps for each sample. To perform such recovery, we propose a generalized version of the robust subspace recovery via bi-sparsity pursuit algorithm [1]. 
Experimental results on a real data set highlight the interest of the approach.},\n  keywords = {clutter;covariance matrices;image representation;object detection;pattern clustering;radar detection;sparse matrices;low-dimensional subspaces;concatenated samples;low-rank matrices;interferences;sparse matrix times;steering-vectors;targets contribution;observation matrix;detection maps;robust subspace recovery;radar detection;target detection;complex interference background;jamming;signal processing;statistical adaptive detection processes;binary hypothesis test;steering vectors;noise-plus-interference covariance matrix;Sparse matrices;Covariance matrices;Clutter;Jamming;Dictionaries;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553325},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437690.pdf},\n}\n\n
\n
\n\n\n
\n Target detection embedded in a complex interference background such as jamming or strong clutter is an important problem in signal processing. Traditionally, statistical adaptive detection processes are built from a binary hypothesis test performed on a grid of steering vectors. This usually involves the estimation of the noise-plus-interference covariance matrix using i.i.d. samples assumed to be target-free. Moving away from this paradigm, we exploit the fact that the interference (clutter and/or jammers) lies in a union of low-dimensional subspaces. Hence, the matrix of concatenated samples can be modeled as a sum of low-rank matrices (union of subspaces containing interferences) plus a sparse matrix times a dictionary of steering-vectors (representing the targets' contribution). Recovering this factorization from the observation matrix allows us to build detection maps for each sample. To perform such recovery, we propose a generalized version of the robust subspace recovery via bi-sparsity pursuit algorithm [1]. Experimental results on a real data set highlight the interest of the approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Direction-of-Arrival Estimation for Uniform Rectangular Array: A Multilinear Projection Approach.\n \n \n \n \n\n\n \n Cao, M.; Mao, X.; Long, X.; and Huang, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1237-1241, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Direction-of-ArrivalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553326,\n  author = {M. Cao and X. Mao and X. Long and L. Huang},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Direction-of-Arrival Estimation for Uniform Rectangular Array: A Multilinear Projection Approach},\n  year = {2018},\n  pages = {1237-1241},\n  abstract = {In this paper, elevation and azimuth estimation with uniform rectangular array (URA) is addressed. Since the temporal samples received by the URA could be written into a tensorial form, we introduce the multilinear projection for developing a direction-of-arrival (DOA) estimator. In the noiseless condition, the multilinear projector is orthogonal to the steering matrix of the URA. Thus the proposed DOA estimator is designed to find minimal points of the inner product of the steering vector and the multilinear projector. Based on the multilinear algebraic framework, the proposed approach provides a better subspace estimate than that of the matrix-based subspace. Simulation results are provided to demonstrate the effectiveness of the proposed method.},\n  keywords = {array signal processing;direction-of-arrival estimation;matrix algebra;vectors;direction-of-arrival estimator;noiseless condition;multilinear projector;steering matrix;URA;DOA estimator;steering vector;multilinear algebraic framework;subspace estimate;matrix-based subspace;direction-of-arrival estimation;uniform rectangular array;multilinear projection approach;azimuth estimation;temporal samples;tensorial form;Direction-of-arrival estimation;Tensile stress;Estimation;Azimuth;Multiple signal classification;Array signal processing;Array signal processing;direction-of-arrival estimation;multilinear algebra;tensor decomposition;uniform rectangular array},\n  doi = {10.23919/EUSIPCO.2018.8553326},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570431114.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, elevation and azimuth estimation with a uniform rectangular array (URA) is addressed. Since the temporal samples received by the URA can be written in tensorial form, we introduce the multilinear projection for developing a direction-of-arrival (DOA) estimator. In the noiseless condition, the multilinear projector is orthogonal to the steering matrix of the URA. Thus the proposed DOA estimator is designed to find the minimal points of the inner product of the steering vector and the multilinear projector. Based on the multilinear algebraic framework, the proposed approach provides a better subspace estimate than that of the matrix-based subspace. Simulation results are provided to demonstrate the effectiveness of the proposed method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Novel Kinect V2 Registration Method Using Color and Deep Geometry Descriptors.\n \n \n \n \n\n\n \n Gao, Y.; Michels, T.; and Koch, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 201-205, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553327,\n  author = {Y. Gao and T. Michels and R. Koch},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Novel Kinect V2 Registration Method Using Color and Deep Geometry Descriptors},\n  year = {2018},\n  pages = {201-205},\n  abstract = {The novel view synthesis for traditional sparse light field camera arrays generally relies on an accurate depth approximation for a scene. To this end, it is preferable for such camera-array systems to integrate multiple depth cameras (e.g. Kinect V2), thereby requiring a precise registration for the integrated depth sensors. Methods based on special calibration objects have been proposed to solve the multi-Kinect V2 registration problem by using the prebuilt geometric relationships of several easily-detectable common point pairs. However, for registration tasks incapable of knowing these precise geometric relationships, this kind of method is prone to fail. To overcome this limitation, a novel Kinect V2 registration approach in a coarse-to-fine framework is proposed in this paper. Specifically, both local color and geometry information is extracted directly from a static scene to recover a rigid transformation from one Kinect V2 to the other. Besides, a 3D convolutional neural network (ConvNet), i.e. 3DMatch, is utilized to describe local geometries. 
Experimental results show that the proposed Kinect V2 registration method using both color and deep geometry descriptors outperforms the other coarse-to-fine baseline approaches.},\n  keywords = {calibration;image colour analysis;image motion analysis;image registration;Kinect V2 registration approach;sparse light field camera arrays;static scene;geometry information;local color;registration tasks;easily-detectable common point pairs;prebuilt geometric relationships;multiKinect V2 registration problem;integrated depth sensors;multiple depth cameras;camera-array systems;accurate depth approximation;deep geometry descriptors;Cameras;Three-dimensional displays;Calibration;Estimation;Color;Geometry;Sensors},\n  doi = {10.23919/EUSIPCO.2018.8553327},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437421.pdf},\n}\n\n
\n
\n\n\n
\n The novel view synthesis for traditional sparse light field camera arrays generally relies on an accurate depth approximation for a scene. To this end, it is preferable for such camera-array systems to integrate multiple depth cameras (e.g. Kinect V2), thereby requiring a precise registration for the integrated depth sensors. Methods based on special calibration objects have been proposed to solve the multi-Kinect V2 registration problem by using the prebuilt geometric relationships of several easily-detectable common point pairs. However, for registration tasks incapable of knowing these precise geometric relationships, this kind of method is prone to fail. To overcome this limitation, a novel Kinect V2 registration approach in a coarse-to-fine framework is proposed in this paper. Specifically, both local color and geometry information is extracted directly from a static scene to recover a rigid transformation from one Kinect V2 to the other. Besides, a 3D convolutional neural network (ConvNet), i.e. 3DMatch, is utilized to describe local geometries. Experimental results show that the proposed Kinect V2 registration method using both color and deep geometry descriptors outperforms the other coarse-to-fine baseline approaches.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimum Detection for a Class of Stationary Meteorological Radars.\n \n \n \n \n\n\n \n Darío Almeida García, F.; Miguel Miranda, M. A.; and Silveira Santos Filho, J. C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2258-2262, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OptimumPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553329,\n  author = {F. {Darío Almeida García} and M. A. {Miguel Miranda} and J. C. {silveira Santos Filho}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Optimum Detection for a Class of Stationary Meteorological Radars},\n  year = {2018},\n  pages = {2258-2262},\n  abstract = {Recently, an innovative low-cost approach for the construction of meteorological radars has been introduced by exploiting the correlation between the received signals from two fixed wide-beam antennas. Yet, it was then found that a very large amount of signal samples would be required to ensure a satisfactory performance of the proposed radar. On the other hand, it was also envisaged that such a problem could be circumvented by the use of more than two antennas. This work is a first step in this direction, extending the original radar proposal from two to an arbitrary number of antennas. In addition to designing an optimum detector for the new radar, we assess its performance by deriving asymptotic, closed-form expressions for the resulting detection and false-alarm probabilities. As a term of comparison, we also design and analyze a suboptimal detection scheme based on the traditional phased-array approach. 
Numerical examples are given to validate the provided analysis and to illustrate the performance gain achieved from the use of additional antennas.},\n  keywords = {meteorological radar;phased array radar;probability;radar antennas;radar detection;radar signal processing;signal detection;signal sampling;stationary meteorological radars;phased-array approach;suboptimal detection scheme;false-alarm probabilities;closed-form expressions;optimum detector;original radar proposal;wide-beam antennas;low-cost approach;optimum detection;Radar antennas;Meteorological radar;Receiving antennas;Correlation;Antenna arrays;Correlation;meteorological radars;optimum detection;stationary antennas},\n  doi = {10.23919/EUSIPCO.2018.8553329},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436801.pdf},\n}\n\n
\n
\n\n\n
\n Recently, an innovative low-cost approach for the construction of meteorological radars has been introduced by exploiting the correlation between the received signals from two fixed wide-beam antennas. Yet, it was then found that a very large amount of signal samples would be required to ensure a satisfactory performance of the proposed radar. On the other hand, it was also envisaged that such a problem could be circumvented by the use of more than two antennas. This work is a first step in this direction, extending the original radar proposal from two to an arbitrary number of antennas. In addition to designing an optimum detector for the new radar, we assess its performance by deriving asymptotic, closed-form expressions for the resulting detection and false-alarm probabilities. As a term of comparison, we also design and analyze a suboptimal detection scheme based on the traditional phased-array approach. Numerical examples are given to validate the provided analysis and to illustrate the performance gain achieved from the use of additional antennas.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Online Expectation-Maximization Algorithm for Tracking Acoustic Sources in Multi-Microphone Devices During Music Playback.\n \n \n \n \n\n\n \n Giacobello, D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1547-1551, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553331,\n  author = {D. Giacobello},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An Online Expectation-Maximization Algorithm for Tracking Acoustic Sources in Multi-Microphone Devices During Music Playback},\n  year = {2018},\n  pages = {1547-1551},\n  abstract = {In this paper, we propose an expectation-maximization algorithm to perform online tracking of moving sources around multi-microphone devices. We are particularly targeting the application scenario of distant-talking control of a music playback device. The goal is to perform spatial tracking of the moving sources and to estimate the probability that each of these sources is active. In particular, we use the expectation-maximization algorithm to capture the statistical behavior of the feature space representing the ensemble of sources as a Gaussian mixture model, assigning each Gaussian component to an individual acoustic source. The features used exploit a wide range of information on the sources behavior making the system robust to noise, reverberation, and music playback. We then differentiate between desired and interfering sources. The spatial information and activity level is then determined for each desired source. 
Experimental evaluation of a real acoustic source tracking problem with and without music playback shows promising results for the proposed approach.},\n  keywords = {acoustic signal processing;expectation-maximisation algorithm;Gaussian processes;microphone arrays;mixture models;probability;reverberation;signal representation;statistical behavior;Gaussian component;feature space;probability estimation;spatial moving source tracking;distant-talking control;online moving source tracking;acoustic source tracking problem;activity level;spatial information;interfering sources;sources behavior;individual acoustic source;Gaussian mixture model;music playback device;application scenario;multimicrophone devices;online expectation-maximization algorithm;Acoustics;Microphones;Adaptation models;Feature extraction;Acoustic measurements;Signal processing algorithms;Probability},\n  doi = {10.23919/EUSIPCO.2018.8553331},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437313.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose an expectation-maximization algorithm to perform online tracking of moving sources around multi-microphone devices. We are particularly targeting the application scenario of distant-talking control of a music playback device. The goal is to perform spatial tracking of the moving sources and to estimate the probability that each of these sources is active. In particular, we use the expectation-maximization algorithm to capture the statistical behavior of the feature space representing the ensemble of sources as a Gaussian mixture model, assigning each Gaussian component to an individual acoustic source. The features used exploit a wide range of information on the sources' behavior, making the system robust to noise, reverberation, and music playback. We then differentiate between desired and interfering sources. The spatial information and activity level is then determined for each desired source. Experimental evaluation of a real acoustic source tracking problem with and without music playback shows promising results for the proposed approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A Deep Learning MI - EEG Classification Model for BCIs.\n \n \n \n\n\n \n Dose, H.; Møller, J. S.; Puthusserypady, S.; and Iversen, H. K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1676-1679, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553332,\n  author = {H. Dose and J. S. Møller and S. Puthusserypady and H. K. Iversen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Deep Learning MI - EEG Classification Model for BCIs},\n  year = {2018},\n  pages = {1676-1679},\n  abstract = {The following topics are dealt with: learning (artificial intelligence); feature extraction; optimisation; acoustic signal processing; neural nets; iterative methods; image reconstruction; medical signal processing; matrix algebra; image classification.},\n  keywords = {acoustic signal processing;feature extraction;learning (artificial intelligence);neural nets;optimisation;learning (artificial intelligence);feature extraction;optimisation;acoustic signal processing;neural nets;iterative methods;image reconstruction;medical signal processing;matrix algebra;image classification;Brain modeling;Electroencephalography;Training;Convolution;Data models;Task analysis;Adaptation models},\n  doi = {10.23919/EUSIPCO.2018.8553332},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The following topics are dealt with: learning (artificial intelligence); feature extraction; optimisation; acoustic signal processing; neural nets; iterative methods; image reconstruction; medical signal processing; matrix algebra; image classification.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic Speech Pronunciation Correction with Dynamic Frequency Warping-Based Spectral Conversion.\n \n \n \n \n\n\n \n Hojo, N.; Kameoka, H.; Tanaka, K.; and Kaneko, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2310-2314, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553333,\n  author = {N. Hojo and H. Kameoka and K. Tanaka and T. Kaneko},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic Speech Pronunciation Correction with Dynamic Frequency Warping-Based Spectral Conversion},\n  year = {2018},\n  pages = {2310-2314},\n  abstract = {This paper deals with the pronunciation conversion (PC) task, the problem of reducing non-native accents in speech while preserving the original speaker identity. Although PC can be regarded as a special class of voice conversion (VC), a straightforward application of conventional VC methods to a PC task would not be successful since with VC the original speaker identity of input speech may also change. This problem is due to the fact that two functions, namely an accent conversion function and a speaker similarity conversion function, are entangled in an acoustic feature mapping function. This paper proposes dynamic frequency warping (DFW)-based spectral conversion to solve this problem. The proposed DFW-based PC converts the pronunciation of input speech by relocating the formants to the corresponding positions in which native speakers tend to locate their formants. We expect the speaker identity is preserved because other factors such as formant powers are kept unchanged 
in a low frequency domain. Evaluation results confirmed that DFW-based PC with spectral residual modeling showed higher speaker similarity to the original speaker while showing a comparable effect of reducing foreign accents to a conventional GMM-based VC method.},\n  keywords = {Gaussian processes;speaker recognition;speech synthesis;automatic speech pronunciation;dynamic frequency warping-based spectral conversion;pronunciation conversion task;nonnative accents;original speaker identity;voice conversion;straightforward application;conventional VC methods;PC task;input speech;accent conversion function;speaker similarity conversion function;acoustic feature mapping function;DFW-based PC;formants;native speakers;low frequency domain evaluation results;spectral residual modeling;foreign accents;conventional GMM-based VC method;speaker similarity;Frequency conversion;Feature extraction;Discrete cosine transforms;Training;Europe;Signal processing;Task analysis;Accent conversion;dynamic frequency warping;voice conversion},\n  doi = {10.23919/EUSIPCO.2018.8553333},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437015.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the pronunciation conversion (PC) task, the problem of reducing non-native accents in speech while preserving the original speaker identity. Although PC can be regarded as a special class of voice conversion (VC), a straightforward application of conventional VC methods to a PC task would not be successful since with VC the original speaker identity of input speech may also change. This problem is due to the fact that two functions, namely an accent conversion function and a speaker similarity conversion function, are entangled in an acoustic feature mapping function. This paper proposes dynamic frequency warping (DFW)-based spectral conversion to solve this problem. The proposed DFW-based PC converts the pronunciation of input speech by relocating the formants to the corresponding positions in which native speakers tend to locate their formants. We expect the speaker identity is preserved because other factors such as formant powers are kept unchanged in a low frequency domain. Evaluation results confirmed that DFW-based PC with spectral residual modeling showed higher speaker similarity to the original speaker while showing a comparable effect of reducing foreign accents to a conventional GMM-based VC method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-Satellite Cycle-Slip Detection and Exclusion Using the Noise Subspace of Residual Dynamics.\n \n \n \n \n\n\n \n Riba, J.; De Cabrera, F.; and Juan, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2280-2284, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-SatellitePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553334,\n  author = {J. Riba and F. {De Cabrera} and J. Juan},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-Satellite Cycle-Slip Detection and Exclusion Using the Noise Subspace of Residual Dynamics},\n  year = {2018},\n  pages = {2280-2284},\n  abstract = {Real-time detection of cycle-slips on undifferenced carrier-phase measurements is an important task to properly exclude wrong phase trackers from precise positioning algorithms. The detection is especially challenging in high-dynamic mobile scenarios, where traditional approaches (as those based on single-channel polynomial fitting) may easily lead to false positives. Using a multi-channel formulation of the problem, the proposed technique takes benefit of the available data redundancy (high number of tracked satellites) in order to ameliorate the false positives. This robustness is accomplished by adaptively estimating the orthogonal subspace spanned by the polynomial time-varying residuals obtained from all available channels (treated as a vector process), and using that subspace to form efficient channel combinations with cancelled satellite-receiver dynamics. The main advantage of the multi-channel approach is that wrong measurements can be discarded without needing any positioning estimate nor phase-ambiguity solver, thus improving the accuracy, reliability and integrity of positioning. 
The performance improvement is shown by means of theoretical analysis and computer simulations.},\n  keywords = {adaptive estimation;phase measurement;polynomials;radio receivers;satellite communication;wireless channels;multisatellite cycle-slip detection;noise subspace;residual dynamics;real-time detection;undifferenced carrier-phase measurements;wrong phase trackers;precise positioning algorithms;high-dynamic mobile scenarios;false positives;multichannel formulation;tracked satellites;orthogonal subspace;polynomial time-varying residuals;efficient channel combinations;multichannel approach;positioning estimate;phase-ambiguity solver;reliability;data redundancy;satellite-receiver dynamics;Receivers;Satellites;Noise measurement;Covariance matrices;Position measurement;Phase measurement;Clocks;Cycle-Slips;SVD;MIMO;GLRT;GNSS},\n  doi = {10.23919/EUSIPCO.2018.8553334},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435891.pdf},\n}\n\n
\n
\n\n\n
\n Real-time detection of cycle-slips on undifferenced carrier-phase measurements is an important task to properly exclude wrong phase trackers from precise positioning algorithms. The detection is especially challenging in high-dynamic mobile scenarios, where traditional approaches (such as those based on single-channel polynomial fitting) may easily lead to false positives. Using a multi-channel formulation of the problem, the proposed technique takes advantage of the available data redundancy (high number of tracked satellites) in order to reduce the false positives. This robustness is accomplished by adaptively estimating the orthogonal subspace spanned by the polynomial time-varying residuals obtained from all available channels (treated as a vector process), and using that subspace to form efficient channel combinations with cancelled satellite-receiver dynamics. The main advantage of the multi-channel approach is that wrong measurements can be discarded without needing any positioning estimate or phase-ambiguity solver, thus improving the accuracy, reliability and integrity of positioning. The performance improvement is shown by means of theoretical analysis and computer simulations.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian Inference with Error Variable Splitting and Sparsity Enforcing Priors for Linear Inverse Problems.\n \n \n \n \n\n\n \n Mohammad-Djafari, A.; Dumitru, M.; Chapdelaine, C.; and Gac, N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 440-444, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553335,\n  author = {A. Mohammad-Djafari and M. Dumitru and C. Chapdelaine and N. Gac},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian Inference with Error Variable Splitting and Sparsity Enforcing Priors for Linear Inverse Problems},\n  year = {2018},\n  pages = {440-444},\n  abstract = {Regularization and Bayesian inference based methods have been successfully applied for linear inverse problems. In these methods, often simple Gaussian or Poisson models for the forward model errors have been considered. In this work, we use variable splitting for the errors to model different sources of errors and their possible non-stationarity or impulsive nature using Student-t or other heavy tailed distributions. Also, as a prior model, a sparsity enforcing hierarchical model of Infinite Gaussian Mixture model is introduced. With these prior models, we obtain a complete Bayesian inference framework which can efficiently be implemented for any linear inverse problem. Interestingly, many recent regularization-based algorithms such as Alternating Direction Method of Multipliers (ADMM) as well as more classical Bayesian based methods such as Sparse Bayesian Learning (SBL) are obtained as particular cases. One advantage of the Bayesian approach is the possibility to estimate, jointly with the reconstruction, the hyper-parameters such as the regularization parameter, thus the capability of proposing unsupervised methods. 
Examples of implementation of the proposed method in different signal and image processing such as deconvolution in mass spectrometry, estimation of periodic components estimation in biological signals and computed tomography are mentioned and referenced.},\n  keywords = {Bayes methods;computerised tomography;deconvolution;Gaussian processes;inverse problems;learning (artificial intelligence);mass spectroscopy;mixture models;error variable splitting;sparsity enforcing priors;linear inverse problem;forward model errors;hierarchical model;unsupervised methods;infinite Gaussian mixture model;Bayesian inference framework;sparse Bayesian learning;regularization-based algorithms;signal processing;image processing;biological signals;computed tomography;mass spectrometry;alternating direction method of multipliers;ADMM;SBL;Bayes methods;Inverse problems;Signal processing algorithms;Biological system modeling;Hafnium;Optimization;Signal processing;Variable splitting;Bayesian inference;Sparsity enforcing;Inverse problems;Approximate Bayesian computation (ABC)},\n  doi = {10.23919/EUSIPCO.2018.8553335},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570432012.pdf},\n}\n\n
\n
\n\n\n
\n Regularization and Bayesian inference based methods have been successfully applied to linear inverse problems. In these methods, simple Gaussian or Poisson models for the forward model errors have often been considered. In this work, we use variable splitting for the errors to model different sources of errors and their possible non-stationarity or impulsive nature using Student-t or other heavy-tailed distributions. Also, as a prior model, a sparsity-enforcing hierarchical Infinite Gaussian Mixture model is introduced. With these prior models, we obtain a complete Bayesian inference framework which can efficiently be implemented for any linear inverse problem. Interestingly, many recent regularization-based algorithms such as Alternating Direction Method of Multipliers (ADMM) as well as more classical Bayesian based methods such as Sparse Bayesian Learning (SBL) are obtained as particular cases. One advantage of the Bayesian approach is the possibility to estimate, jointly with the reconstruction, the hyper-parameters such as the regularization parameter, thus the capability of proposing unsupervised methods. Examples of implementation of the proposed method in different signal and image processing applications, such as deconvolution in mass spectrometry, estimation of periodic components in biological signals, and computed tomography, are mentioned and referenced.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Area-Based Depth Estimation for Monochromatic Feature-Sparse Orthographic Capture.\n \n \n \n \n\n\n \n Li, Y.; Scrofani, G.; Sjöström, M.; and Martinez-Corral, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 206-210, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Area-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553336,\n  author = {Y. Li and G. Scrofani and M. Sjöström and M. Martinez-Corral},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Area-Based Depth Estimation for Monochromatic Feature-Sparse Orthographic Capture},\n  year = {2018},\n  pages = {206-210},\n  abstract = {With the rapid development of light field technology, depth estimation has been highlighted as one of the critical problems in the field, and a number of approaches have been proposed to extract the depth of the scene. However, depth estimation by stereo matching becomes difficult and unreliable when the captured images lack both color and feature information. In this paper, we propose a scheme that extracts robust depth from monochromatic, feature-sparse scenes recorded in orthographic sub-aperture images. Unlike approaches which rely on the rich color and texture information across the sub-aperture views, our approach is based on depth from focus techniques. First, we superimpose shifted sub-aperture images on top of an arbitrarily chosen central image. To focus on different depths, the shift amount is varied based on the micro-lens array properties. Next, an area-based depth estimation approach is applied to find the best match among the focal stack and generate the dense depth map. This process is repeated for each sub-aperture image. Finally, occlusions are handled by merging depth maps generated from different central images followed by a voting process. 
Results show that the proposed scheme is more suitable than conventional depth estimation approaches in the context of orthographic captures that have insufficient color and feature information, such as microscopic fluorescence imaging.},\n  keywords = {image matching;image reconstruction;image texture;stereo image processing;captured images;stereo matching;light field technology;monochromatic feature-sparse orthographic capture;microscopic fluorescence imaging;feature information;depth maps;dense depth map;area-based depth estimation approach;arbitrarily chosen central image;sub-aperture image;texture information;orthographic sub-aperture images;Estimation;Three-dimensional displays;Microscopy;Image color analysis;Feature extraction;Europe;Signal processing;Depth estimation;integral imaging;orthographic views;depth from focus},\n  doi = {10.23919/EUSIPCO.2018.8553336},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437816.pdf},\n}\n\n
\n
\n\n\n
\n With the rapid development of light field technology, depth estimation has been highlighted as one of the critical problems in the field, and a number of approaches have been proposed to extract the depth of the scene. However, depth estimation by stereo matching becomes difficult and unreliable when the captured images lack both color and feature information. In this paper, we propose a scheme that extracts robust depth from monochromatic, feature-sparse scenes recorded in orthographic sub-aperture images. Unlike approaches which rely on the rich color and texture information across the sub-aperture views, our approach is based on depth from focus techniques. First, we superimpose shifted sub-aperture images on top of an arbitrarily chosen central image. To focus on different depths, the shift amount is varied based on the micro-lens array properties. Next, an area-based depth estimation approach is applied to find the best match among the focal stack and generate the dense depth map. This process is repeated for each sub-aperture image. Finally, occlusions are handled by merging depth maps generated from different central images followed by a voting process. Results show that the proposed scheme is more suitable than conventional depth estimation approaches in the context of orthographic captures that have insufficient color and feature information, such as microscopic fluorescence imaging.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive Video Encoding for Time-Constrained Compression and Delivery.\n \n \n \n \n\n\n \n Dias, A. S.; Blasi, S.; Mrak, M.; Huang, S.; and Izquierdo, E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 146-150, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AdaptivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553337,\n  author = {A. S. Dias and S. Blasi and M. Mrak and S. Huang and E. Izquierdo},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive Video Encoding for Time-Constrained Compression and Delivery},\n  year = {2018},\n  pages = {146-150},\n  abstract = {Some applications require video content to be encoded and uploaded to a remote destination in a fixed time. This paper proposes a novel approach to address this challenge, based on online adaptation of compression parameters to control both encoding and uploading time. In particular, an algorithm to accurately predict the encoding time and bit-rate resulting from using a given quantisation parameter to encode the next frames in the sequence is proposed. This is used to drive decisions during the encoding process, to maintain the cumulative time needed to encode and transmit the video sequence within the constraints. Experimental evaluation shows that when using the proposed method, time constraints can be met with high accuracy under a variety of different target times and bandwidth conditions.},\n  keywords = {data compression;image sequences;video coding;adaptive video encoding;time-constrained compression;video content;remote destination;fixed time;online adaptation;compression parameters;uploading time;encoding time;bit-rate resulting;encoding process;cumulative time;video sequence;time constraints;bandwidth conditions;quantisation parameter;Encoding;Quantization (signal);Complexity theory;Signal processing algorithms;Time factors;Video coding;Europe;HEVC;rate-control;encoding time estimation},\n  doi = {10.23919/EUSIPCO.2018.8553337},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438256.pdf},\n}\n\n
\n
\n\n\n
\n Some applications require video content to be encoded and uploaded to a remote destination in a fixed time. This paper proposes a novel approach to address this challenge, based on online adaptation of compression parameters to control both encoding and uploading time. In particular, an algorithm to accurately predict the encoding time and bit-rate resulting from using a given quantisation parameter to encode the next frames in the sequence is proposed. This is used to drive decisions during the encoding process, to maintain the cumulative time needed to encode and transmit the video sequence within the constraints. Experimental evaluation shows that when using the proposed method, time constraints can be met with high accuracy under a variety of different target times and bandwidth conditions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Integrating Denoising Autoencoder and Vector Taylor Series with Auditory Masking for Speech Recognition in Noisy Conditions.\n \n \n \n \n\n\n \n Biswajit Das, A.; and Panda, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2305-2309, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"IntegratingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553339,\n  author = {A. {Biswajit Das} and A. Panda},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Integrating Denoising Autoencoder and Vector Taylor Series with Auditory Masking for Speech Recognition in Noisy Conditions},\n  year = {2018},\n  pages = {2305-2309},\n  abstract = {We propose a new front-end feature compensation technique to improve the performance of Automatic Speech Recognition (ASR) systems in noisy environments. First, a Time Delay Neural Network (TDNN) based Denoising Autoencoder (DAE) is considered to compensate the noisy features. The DAE provides good gain in performance when it has been trained using the noise present in the test utterances (“seen” conditions). However, if the noise present in the test utterance is different to what was used in the training of the DAE (“un-seen” conditions), then the performance degrades to a great extent. To improve the ASR performance in such unseen conditions, a model compensation technique, namely the Vector Taylor Series with Auditory Masking (VTS-AM) is used. We propose a new Signal-to-Noise Ratio (SNR) based measure, which can reliably choose the type of compensation to be used for best performance gain. 
We show that the proposed technique improves the ASR performance significantly on noise corrupted TIMIT and Librispeech databases.},\n  keywords = {neural nets;signal denoising;speech recognition;automatic speech recognition systems;time delay neural network;denoising autoencoder;vector Taylor series;auditory masking;signal-to-noise ratio-based measurement;front-end feature compensation technique;noisy conditions;performance gain;model compensation technique;unseen conditions;ASR performance;test utterance;noisy features;DAE;noisy environments;Signal to noise ratio;Speech recognition;Noise measurement;Training;Databases;Psychoacoustic models;Taylor series;Noise robust speech recognition;Auditory masking;Vector Taylor series;Time delay neural network;Denoising autoencoder},\n  doi = {10.23919/EUSIPCO.2018.8553339},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434585.pdf},\n}\n\n
\n
\n\n\n
\n We propose a new front-end feature compensation technique to improve the performance of Automatic Speech Recognition (ASR) systems in noisy environments. First, a Time Delay Neural Network (TDNN) based Denoising Autoencoder (DAE) is considered to compensate the noisy features. The DAE provides a good gain in performance when it has been trained using the noise present in the test utterances (“seen” conditions). However, if the noise present in the test utterance is different from what was used in the training of the DAE (“unseen” conditions), then the performance degrades considerably. To improve the ASR performance in such unseen conditions, a model compensation technique, namely the Vector Taylor Series with Auditory Masking (VTS-AM), is used. We propose a new Signal-to-Noise Ratio (SNR) based measure, which can reliably choose the type of compensation to be used for best performance gain. We show that the proposed technique improves the ASR performance significantly on noise-corrupted TIMIT and Librispeech databases.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Efficient Machine Learning-Based Fall Detection Algorithm using Local Binary Features.\n \n \n \n \n\n\n \n Saleh, M.; and Le Bouquin Jeannès, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 667-671, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553340,\n  author = {M. Saleh and R. {Le Bouquin Jeannès}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An Efficient Machine Learning-Based Fall Detection Algorithm using Local Binary Features},\n  year = {2018},\n  pages = {667-671},\n  abstract = {According to the world health organization, millions of elderly suffer from falls every year. These falls are one of the major causes of death worldwide. As a rapid medical intervention would considerably decrease the serious consequences of such falls, automatic fall detection systems for elderly has become a necessity. In this paper, an efficient machine learning-based fall detection algorithm is proposed. Thanks to the proposed local binary features, this algorithm shows a high accuracy exceeding 99% when tested on a large dataset. In addition, it enjoys an attractive property that the computational cost of decision-making is independent from the complexity of the trained machine. Thus, the proposed algorithm overcomes a critical challenge of designing accurate yet low-cost solutions for wearable fall detectors. The aforementioned property enables implementing autonomous, low-power consumption wearable fall detectors.},\n  keywords = {decision making;geriatrics;learning (artificial intelligence);medical computing;local binary features;world health organization;rapid medical intervention;automatic fall detection systems;efficient machine learning-based fall detection algorithm;trained machine;low-power consumption wearable fall detectors;Acceleration;Feature extraction;Machine learning algorithms;Detection algorithms;Signal processing algorithms;Detectors;Senior citizens;fall detection;binary features;local features;machine learning;elderly},\n  doi = {10.23919/EUSIPCO.2018.8553340},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437716.pdf},\n}\n\n
\n
\n\n\n
\n According to the World Health Organization, millions of elderly people suffer from falls every year. Such falls are among the major causes of death worldwide. As rapid medical intervention would considerably decrease the serious consequences of such falls, automatic fall detection systems for the elderly have become a necessity. In this paper, an efficient machine learning-based fall detection algorithm is proposed. Thanks to the proposed local binary features, this algorithm shows a high accuracy exceeding 99% when tested on a large dataset. In addition, it enjoys the attractive property that the computational cost of decision-making is independent of the complexity of the trained machine. Thus, the proposed algorithm overcomes a critical challenge of designing accurate yet low-cost solutions for wearable fall detectors. The aforementioned property enables implementing autonomous, low-power-consumption wearable fall detectors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Piano Legato-Pedal Onset Detection Based on a Sympathetic Resonance Measure.\n \n \n \n \n\n\n \n Liang, B.; Fazekas, G.; and Sandler, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2484-2488, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PianoPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553341,\n  author = {B. Liang and G. Fazekas and M. Sandler},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Piano Legato-Pedal Onset Detection Based on a Sympathetic Resonance Measure},\n  year = {2018},\n  pages = {2484-2488},\n  abstract = {In this paper, the problem of legato pedalling technique detection in polyphonic piano music is addressed. We propose a novel detection method exploiting the effect of sympathetic resonance which can be enhanced by a legato-pedal onset. To measure the effect, a specific piano transcription was performed using the templates of pre-recorded isolated notes, from which partial frequencies were estimated. This promotes the acquisition of residual components associated to the weak co-excitation of damped notes due to the legato pedalling technique. Features that represent the sympathetic resonance measure were extracted from residuals. We finally used a logistic regression classifier to determine the existence of legato-pedal onsets.},\n  keywords = {acoustic signal processing;audio signal processing;music;musical instruments;regression analysis;piano legato-pedal onset detection;sympathetic resonance measure;legato pedalling technique detection;polyphonic piano music;specific piano transcription;pre-recorded isolated notes;residual components;logistic regression classifier;Feature extraction;Resonant frequency;Music;Frequency estimation;Spectrogram;Europe;onset detection;sympathetic resonance;piano acoustics;piano pedalling techniques},\n  doi = {10.23919/EUSIPCO.2018.8553341},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437306.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, the problem of legato pedalling technique detection in polyphonic piano music is addressed. We propose a novel detection method exploiting the effect of sympathetic resonance, which can be enhanced by a legato-pedal onset. To measure the effect, a specific piano transcription was performed using the templates of pre-recorded isolated notes, from which partial frequencies were estimated. This promotes the acquisition of residual components associated with the weak co-excitation of damped notes due to the legato pedalling technique. Features that represent the sympathetic resonance measure were extracted from the residuals. Finally, we used a logistic regression classifier to determine the existence of legato-pedal onsets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adversarial Machine Learning Against Digital Watermarking.\n \n \n \n \n\n\n \n Quiring, E.; and Rieck, K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 519-523, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AdversarialPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553343,\n  author = {E. Quiring and K. Rieck},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Adversarial Machine Learning Against Digital Watermarking},\n  year = {2018},\n  pages = {519-523},\n  abstract = {Machine learning and digital watermarking are independent research areas. Their methods, however, are vulnerable to similar attacks if operated in an adversarial environment. Recent research has thus started to bring both fields together by introducing a unified view for black-box attacks and defenses between learning and watermarking methods. In this paper, we extend this work and examine a novel black-box attack against digital watermarking based on concepts from adversarial learning. With a set of marked images, we let a neural network approximate the watermark detection and use this network to remove the watermark. The attack does not require knowledge of the watermarking scheme.},\n  keywords = {cryptography;image watermarking;learning (artificial intelligence);neural nets;black-box attacks;watermark detection;watermarking scheme;adversarial environment;digital watermarking methods;adversarial machine learning;neural network;Watermarking;Machine learning;Detectors;Neural networks;Signal processing;Computational modeling;Media;Digital Watermarking;Adversarial Examples},\n  doi = {10.23919/EUSIPCO.2018.8553343},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438020.pdf},\n}\n\n
\n
\n\n\n
\n Machine learning and digital watermarking are independent research areas. Their methods, however, are vulnerable to similar attacks if operated in an adversarial environment. Recent research has thus started to bring both fields together by introducing a unified view for black-box attacks and defenses between learning and watermarking methods. In this paper, we extend this work and examine a novel black-box attack against digital watermarking based on concepts from adversarial learning. With a set of marked images, we let a neural network approximate the watermark detection and use this network to remove the watermark. The attack does not require knowledge of the watermarking scheme.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Characterizing 3D Shapes: A Complex Network-Based Approach.\n \n \n \n \n\n\n \n Eduardo Da Silva, G.; and Backes, A. R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1307-1311, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CharacterizingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553344,\n  author = {G. {Eduardo Da Silva} and A. R. Backes},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Characterizing 3D Shapes: A Complex Network-Based Approach},\n  year = {2018},\n  pages = {1307-1311},\n  abstract = {In the past few years, 3D models have emerged as the focus of many new applications. This recent popularity of 3D models has stimulated researchers to investigate the problems of 3D shape retrieval and to develop more efficient search and retrieval methods. Aiming to contribute to the recent literature, this work proposes a novel 3D shape characterization method using the complex network theory. By modeling a 3D shape object as a complex network we are able to effectively represent, characterize and analyze the object is terms of the topological properties of the complex network. Comparison with two other known methods for 3D model description, shape histograms and shape distributions, on a 3D models data set shows that the proposed technique is a feasible approach to efficiently perform 3D shape characterization and discrimination.},\n  keywords = {complex networks;image retrieval;network theory (graphs);shape recognition;solid modelling;3D shape retrieval;novel 3D shape characterization method;complex network theory;3D shape object;3D model description;shape histograms;shape distributions;complex network-based approach;3D models data set;3D shape discrimination;Three-dimensional displays;Solid modeling;Computational modeling;Shape;Complex networks;Histograms},\n  doi = {10.23919/EUSIPCO.2018.8553344},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570430371.pdf},\n}\n\n
\n
\n\n\n
\n In the past few years, 3D models have emerged as the focus of many new applications. This recent popularity of 3D models has stimulated researchers to investigate the problems of 3D shape retrieval and to develop more efficient search and retrieval methods. Aiming to contribute to the recent literature, this work proposes a novel 3D shape characterization method using complex network theory. By modeling a 3D shape object as a complex network, we are able to effectively represent, characterize and analyze the object in terms of the topological properties of the complex network. Comparison with two other known methods for 3D model description, shape histograms and shape distributions, on a 3D models data set shows that the proposed technique is a feasible approach to efficiently perform 3D shape characterization and discrimination.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Comparative Study on Spoken Language Identification Based on Deep Learning.\n \n \n \n \n\n\n \n Heracleous, P.; Takai, K.; Yasuda, K.; Mohammad, Y.; and Yoneyama, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2265-2269, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ComparativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553347,\n  author = {P. Heracleous and K. Takai and K. Yasuda and Y. Mohammad and A. Yoneyama},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Comparative Study on Spoken Language Identification Based on Deep Learning},\n  year = {2018},\n  pages = {2265-2269},\n  abstract = {Spoken language identification is the process by which the language in a spoken utterance is recognized automatically. Spoken language identification is commonly used in speech translation systems, in multi-lingual speech recognition, and in speaker diarization. In the current paper, spoken language identification based on deep learning (DL) and the i-vector paradigm is presented. Specifically, a comparative study is reported, consisting of experiments on language identification using deep neural networks (DNN) and convolutional neural networks (CNN). Also, the integration of the two methods into a complete system is investigated. Previous studies demonstrated the effectiveness of using DNN in spoken language identification. However, to date, the integration of CNN and i-vectors in language identification has not been investigated. The main advantage of using CNN is that fewer parameters are required compared to DNN. As a result, CNN is cheaper in terms of memory and the computational power needed. The proposed methods are evaluated on the NIST 2015 i-vector Machine Learning Challenge task for the recognition of 50 in-set languages. Using DNN, a 3.55% equal error rate (EER) was achieved. The EER when using CNN was 3.48%. When DNN and CNN systems were fused, an EER of 3.3% was obtained. The results are very promising, and they also show the effectiveness of using CNN and i-vectors in spoken language identification. 
The proposed methods are compared to a baseline method based on support vector machines (SVM) and they demonstrated significantly superior performance.},\n  keywords = {convolution;feedforward neural nets;learning (artificial intelligence);natural language processing;speaker recognition;speech processing;support vector machines;SVM;spoken utterance;support vector machines;equal error rate;machine learning;computational power;DNN;convolutional neural networks;deep neural networks;multilingual speech recognition;speaker diarization;speech translation systems;deep learning;CNN;spoken language identification;Support vector machines;NIST;Training data;Training;Task analysis;Speech recognition;Machine learning},\n  doi = {10.23919/EUSIPCO.2018.8553347},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435966.pdf},\n}\n\n
\n
\n\n\n
\n Spoken language identification is the process by which the language in a spoken utterance is recognized automatically. Spoken language identification is commonly used in speech translation systems, in multi-lingual speech recognition, and in speaker diarization. In the current paper, spoken language identification based on deep learning (DL) and the i-vector paradigm is presented. Specifically, a comparative study is reported, consisting of experiments on language identification using deep neural networks (DNN) and convolutional neural networks (CNN). Also, the integration of the two methods into a complete system is investigated. Previous studies demonstrated the effectiveness of using DNN in spoken language identification. However, to date, the integration of CNN and i-vectors in language identification has not been investigated. The main advantage of using CNN is that fewer parameters are required compared to DNN. As a result, CNN is cheaper in terms of memory and the computational power needed. The proposed methods are evaluated on the NIST 2015 i-vector Machine Learning Challenge task for the recognition of 50 in-set languages. Using DNN, a 3.55% equal error rate (EER) was achieved. The EER when using CNN was 3.48%. When DNN and CNN systems were fused, an EER of 3.3% was obtained. The results are very promising, and they also show the effectiveness of using CNN and i-vectors in spoken language identification. The proposed methods are compared to a baseline method based on support vector machines (SVM) and they demonstrated significantly superior performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reduced-Complexity Semi-Distributed Multi-Channel Multi-Frame MVDR Filter.\n \n \n \n \n\n\n \n Ranjbaryan, R.; Abutalebi, H. R.; and Doclo, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2095-2099, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Reduced-ComplexityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553349,\n  author = {R. Ranjbaryan and H. R. Abutalebi and S. Doclo},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Reduced-Complexity Semi-Distributed Multi-Channel Multi-Frame MVDR Filter},\n  year = {2018},\n  pages = {2095-2099},\n  abstract = {In this paper, we propose a semi-distributed multichannel noise reduction method which considers the interframe correlation in the short-time Fourier transform (STFT) domain. Although exploiting the correlation of speech STFT coefficients makes it possible to achieve impressive results, it also increases the computational complexity, especially in the case of a large number of frames and/or microphones. To address this issue, we propose to utilize, in each time-frequency unit, the information of the current frame and a compressed signal from the previous frames in a distributed way. Simulation results show that the computational complexity can be substantially reduced by the proposed method without impairing speech quality.},\n  keywords = {filtering theory;Fourier transforms;speech enhancement;time-frequency analysis;reduced-complexity semidistributed multichannel multiframe MVDR filter;multichannel noise reduction method;interframe correlation;short-time Fourier;speech STFT coefficients;computational complexity;time-frequency unit;current frame;Correlation;Noise measurement;Microphones;Computational complexity;Noise reduction;Signal processing algorithms;Distortion},\n  doi = {10.23919/EUSIPCO.2018.8553349},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437695.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a semi-distributed multichannel noise reduction method which considers the interframe correlation in the short-time Fourier transform (STFT) domain. Although exploiting the correlation of speech STFT coefficients makes it possible to achieve impressive results, it also increases the computational complexity, especially in the case of a large number of frames and/or microphones. To address this issue, we propose to utilize, in each time-frequency unit, the information of the current frame and a compressed signal from the previous frames in a distributed way. Simulation results show that the computational complexity can be substantially reduced by the proposed method without impairing speech quality.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sparse Time-Frequency-Frequency-Rate Representation for Multicomponent Nonstationary Signal Analysis.\n \n \n \n \n\n\n \n Zhang, W.; Fu, Y.; and Li, Y.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 717-721, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SparsePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553350,\n  author = {W. Zhang and Y. Fu and Y. Li},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sparse Time-Frequency-Frequency-Rate Representation for Multicomponent Nonstationary Signal Analysis},\n  year = {2018},\n  pages = {717-721},\n  abstract = {Though high-resolution time-frequency representations (TFRs) have been developed and provide satisfactory results for multicomponent nonstationary signals, extracting multiple ridges from the time-frequency (TF) plot to approximate the instantaneous frequencies (IFs) for intersected components is quite difficult. In this work, the sparse time-frequency-frequency-rate representation (STFFRR) is proposed by using the short-time sparse representation (STSR) with the chirp dictionary. The instantaneous frequency rates (IFRs) and IFs of signal components can be jointly estimated via the STFFRR. As there are permutations between the IF and IFR estimates of signal components at different instants, the local k-means clustering algorithm is applied for component linking. By employing the STFFRR, the intersected components in the TF plot can be well separated and robust IF estimation can be obtained. Numerical results validate the effectiveness of the proposed method.},\n  keywords = {signal representation;signal resolution;source separation;time-frequency analysis;sparse time-frequency-frequency-rate representation;multicomponent nonstationary signal analysis;high resolution time-frequency representations;multicomponent nonstationary signals;time-frequency plot;intersected components;short-time sparse representation;instantaneous frequency rate;signal components;Chirp;Time-frequency analysis;Estimation;Signal resolution;Frequency estimation;Fourier transforms;Europe;multicomponent nonstationary signal;time-frequency-frequency-rate representation;short-time sparse representation;instantaneous frequency estimation;local k-means clustering algorithm},\n  doi = {10.23919/EUSIPCO.2018.8553350},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436326.pdf},\n}\n\n
\n
\n\n\n
\n Though high-resolution time-frequency representations (TFRs) have been developed and provide satisfactory results for multicomponent nonstationary signals, extracting multiple ridges from the time-frequency (TF) plot to approximate the instantaneous frequencies (IFs) for intersected components is quite difficult. In this work, the sparse time-frequency-frequency-rate representation (STFFRR) is proposed by using the short-time sparse representation (STSR) with the chirp dictionary. The instantaneous frequency rates (IFRs) and IFs of signal components can be jointly estimated via the STFFRR. As there are permutations between the IF and IFR estimates of signal components at different instants, the local k-means clustering algorithm is applied for component linking. By employing the STFFRR, the intersected components in the TF plot can be well separated and robust IF estimation can be obtained. Numerical results validate the effectiveness of the proposed method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Joint Beamforming and Echo Cancellation Combining QRD Based Multichannel AEC and MVDR for Reducing Noise and Non-Linear Echo.\n \n \n \n \n\n\n \n Cohen, A.; Barnov, A.; Markovich-Golan, S.; and Kroon, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 6-10, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553351,\n  author = {A. Cohen and A. Barnov and S. Markovich-Golan and P. Kroon},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Joint Beamforming and Echo Cancellation Combining QRD Based Multichannel AEC and MVDR for Reducing Noise and Non-Linear Echo},\n  year = {2018},\n  pages = {6-10},\n  abstract = {The problems of echo and noise contaminating a desired talker signal in a communication or an entertainment device are considered. In the following, we propose a combined method comprising a linear echo-canceller followed by a weighted minimum variance distortionless response (MVDR) beamformer designed to reduce noise and echo residues. For the echo-canceller stage we use a fast-converging multichannel QR decomposition (QRD)-recursive least squares (RLS) method. For the beamformer stage, we adopt and modify our recently proposed method of a fast-tracking QRD based MVDR beamformer [1]. We assume that the residual echo is dominated by non-linearly distorted components which undergo the same echo paths as the non-distorted component. Accordingly, the MVDR beamformer is designed to minimize a weighted sum of the powers of the noise and of the non-linear echo while maintaining the desired talker undistorted. The computational and memory complexities of the proposed algorithm are sufficiently low, making it appropriate for implementation in mobile devices. The performance of the proposed method is tested using real recordings from two commercial devices, a mobile phone and a smart speaker.},\n  keywords = {acoustic signal processing;array signal processing;echo suppression;least squares approximations;nonlinear echo;linear echo-canceller;weighted minimum variance distortionless response beamformer;echo residues;echo-canceller stage;beamformer stage;fast-tracking QRD;MVDR beamformer;residual echo;nonlinearly distorted components;echo paths;nondistorted component;joint beamforming;fast-converging multichannel QR decomposition-recursive least squares method;Echo cancellers;Microphones;Covariance matrices;Nonlinear distortion;Loudspeakers;Array signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553351},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435799.pdf},\n}\n\n
\n
\n\n\n
\n The problems of echo and noise contaminating a desired talker signal in a communication or an entertainment device are considered. In the following, we propose a combined method comprising a linear echo-canceller followed by a weighted minimum variance distortionless response (MVDR) beamformer designed to reduce noise and echo residues. For the echo-canceller stage we use a fast-converging multichannel QR decomposition (QRD)-recursive least squares (RLS) method. For the beamformer stage, we adopt and modify our recently proposed method of a fast-tracking QRD based MVDR beamformer [1]. We assume that the residual echo is dominated by non-linearly distorted components which undergo the same echo paths as the non-distorted component. Accordingly, the MVDR beamformer is designed to minimize a weighted sum of the powers of the noise and of the non-linear echo while maintaining the desired talker undistorted. The computational and memory complexities of the proposed algorithm are sufficiently low, making it appropriate for implementation in mobile devices. The performance of the proposed method is tested using real recordings from two commercial devices, a mobile phone and a smart speaker.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Colorblind-friendly Halftoning.\n \n \n \n \n\n\n \n Felix Yu, S. K.; Chan, Y.; Daniel Lun, P. K.; Jeffrey Chan, C. W.; and Kenneth Li, K. W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1457-1461, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Colorblind-friendlyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553352,\n  author = {S. K. {Felix Yu} and Y. Chan and P. K. {Daniel Lun} and C. W. {Jeffrey Chan} and K. W. {Kenneth Li}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Colorblind-friendly Halftoning},\n  year = {2018},\n  pages = {1457-1461},\n  abstract = {Most images are natural images and most of them are still delivered in printed form nowadays. Halftoning is a critical process for printing natural color images. However, conventional colorblind aids do not make use of the halftoning process directly to produce colorblind-friendly image prints. In this paper, a halftoning algorithm is proposed to reduce the color distortion of an image print in the view of a colorblind person and embed hints in the image print for a colorblind person to distinguish the confusing colors. To people with normal vision, the color halftone looks the same as the original image when it is viewed at a reasonable distance, which is not achievable when conventional techniques such as recoloring and pattern overlaying are used to produce a color print for the colorblind. Besides, no dedicated hardware is required to view the printed image.},\n  keywords = {handicapped aids;image colour analysis;image print;colorblind person;confusing colors;color halftone;color print;printed image;colorblind-friendly halftoning;natural images;printed form;critical process;natural color images;halftoning process;colorblind-friendly image prints;halftoning algorithm;color distortion;Image color analysis;Color;Distortion;Colored noise;Visualization;Printers;Europe;colorblind;color vision deficiency;halftoning;colorblind-friendly hardcopy;color separation},\n  doi = {10.23919/EUSIPCO.2018.8553352},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437091.pdf},\n}\n\n
\n
\n\n\n
\n Most images are natural images and most of them are still delivered in printed form nowadays. Halftoning is a critical process for printing natural color images. However, conventional colorblind aids do not make use of the halftoning process directly to produce colorblind-friendly image prints. In this paper, a halftoning algorithm is proposed to reduce the color distortion of an image print in the view of a colorblind person and embed hints in the image print for a colorblind person to distinguish the confusing colors. To people with normal vision, the color halftone looks the same as the original image when it is viewed at a reasonable distance, which is not achievable when conventional techniques such as recoloring and pattern overlaying are used to produce a color print for the colorblind. Besides, no dedicated hardware is required to view the printed image.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Obstructive Sleep Apnea (OSA) Classification Using Analysis of Breathing Sounds During Speech.\n \n \n \n \n\n\n \n Simply, R. M.; Dafna, E.; and Zigel, Y.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1132-1136, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ObstructivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553353,\n  author = {R. M. Simply and E. Dafna and Y. Zigel},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Obstructive Sleep Apnea (OSA) Classification Using Analysis of Breathing Sounds During Speech},\n  year = {2018},\n  pages = {1132-1136},\n  abstract = {Obstructive sleep apnea (OSA) is a sleep disorder in which pharyngeal collapse during sleep causes a complete or partial airway obstruction. OSA is common and can have severe impacts, but often remains unrecognized. In this study, we propose a novel method which is able to detect OSA subjects while they are awake, by analyzing breathing sounds during speech. The hypothesis is that OSA is associated with anatomical and functional abnormalities of the upper airway, which, in turn, affect the acoustic parameters of a natural breathing sound during speech. The proposed OSA detector is a fully automated system, which consists of three consecutive steps: 1) locating breathing sounds during continuous speech, 2) extracting acoustic features that quantify the breathing properties, and 3) OSA/non-OSA classification based on the detected breathing sounds. Based on breathing sound analysis alone (90 male subjects; 72 for training, 18 for validation), our system yields encouraging results (accuracy of 76.5%), showing the potential of speech analysis to detect OSA. Such a system can be integrated with other non-contact OSA detectors to provide a reliable OSA syndrome-screening tool.},\n  keywords = {feature extraction;medical disorders;medical signal processing;pneumodynamics;sleep;speech processing;sleep disorder;partial airway obstruction;OSA subjects;natural breathing sound;OSA detector;continuous speech;breathing properties;speech analysis;noncontact OSA detectors;reliable OSA;obstructive sleep apnea classification;OSA-nonOSA classification;breathing sound analysis;pharyngeal collapse;upper airway;acoustic feature extraction;OSA syndrome-screening tool;Feature extraction;Noise measurement;Training;Mel frequency cepstral coefficient;Signal processing;Sleep apnea;Obstructive sleep apnea (OSA);speech signals;breath signals;signal processing;machine learning},\n  doi = {10.23919/EUSIPCO.2018.8553353},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437038.pdf},\n}\n\n
\n
\n\n\n
\n Obstructive sleep apnea (OSA) is a sleep disorder in which pharyngeal collapse during sleep causes a complete or partial airway obstruction. OSA is common and can have severe impacts, but often remains unrecognized. In this study, we propose a novel method which is able to detect OSA subjects while they are awake, by analyzing breathing sounds during speech. The hypothesis is that OSA is associated with anatomical and functional abnormalities of the upper airway, which, in turn, affect the acoustic parameters of a natural breathing sound during speech. The proposed OSA detector is a fully automated system, which consists of three consecutive steps: 1) locating breathing sounds during continuous speech, 2) extracting acoustic features that quantify the breathing properties, and 3) OSA/non-OSA classification based on the detected breathing sounds. Based on breathing sound analysis alone (90 male subjects; 72 for training, 18 for validation), our system yields encouraging results (accuracy of 76.5%), showing the potential of speech analysis to detect OSA. Such a system can be integrated with other non-contact OSA detectors to provide a reliable OSA syndrome-screening tool.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multiscale DCNN Ensemble Applied to Human Activity Recognition Based on Wearable Sensors.\n \n \n \n \n\n\n \n Sena, J.; Santos, J. B.; and Schwartz, W. R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1202-1206, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"MultiscalePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553354,\n  author = {J. Sena and J. B. Santos and W. R. Schwartz},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Multiscale DCNN Ensemble Applied to Human Activity Recognition Based on Wearable Sensors},\n  year = {2018},\n  pages = {1202-1206},\n  abstract = {Sensor-based Human Activity Recognition (HAR) provides valuable knowledge to many areas. Recently, wearable devices have gained space as a relevant source of data. However, there are two issues: the large number of heterogeneous sensors available and the temporal nature of the sensor data. To handle those issues, we propose a multimodal approach that processes each sensor separately and, through an ensemble of Deep Convolution Neural Networks (DCNN), extracts information from multiple temporal scales of the sensor data. In this ensemble, we use a convolutional kernel with a different height for each DCNN. Considering that the number of rows in the sensor data reflects the data captured over time, each kernel height reflects a temporal scale from which we can extract patterns. Consequently, our approach is able to extract patterns ranging from simple movements, such as a wrist twist when picking up a spoon, to complex movements such as the human gait. This multimodal and multitemporal approach outperforms previous state-of-the-art works in seven important datasets using two different protocols. In addition, we demonstrate that the use of our proposed set of kernels improves sensor-based HAR in another multi-kernel approach, the widely employed inception network.},\n  keywords = {learning (artificial intelligence);neural nets;sensors;sensor-based HAR;multiscale DCNN ensemble;multimodal approach;multiple temporal scales;Deep Convolution Neural Networks;sensor data;temporal nature;heterogeneous sensors;wearable devices;Sensor-based Human Activity Recognition;wearable sensors;Kernel;Convolution;Sensor phenomena and characterization;Feature extraction;Data mining;Sensor fusion;Human Activity Recognition;Wearable sensors;Multimodal data;CNN Ensemble;Multiscale Temporal Data},\n  doi = {10.23919/EUSIPCO.2018.8553354},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437521.pdf},\n}\n\n
\n
\n\n\n
\n Sensor-based Human Activity Recognition (HAR) provides valuable knowledge to many areas. Recently, wearable devices have gained space as a relevant source of data. However, there are two issues: the large number of heterogeneous sensors available and the temporal nature of the sensor data. To handle those issues, we propose a multimodal approach that processes each sensor separately and, through an ensemble of Deep Convolution Neural Networks (DCNN), extracts information from multiple temporal scales of the sensor data. In this ensemble, we use a convolutional kernel with a different height for each DCNN. Considering that the number of rows in the sensor data reflects the data captured over time, each kernel height reflects a temporal scale from which we can extract patterns. Consequently, our approach is able to extract patterns ranging from simple movements, such as a wrist twist when picking up a spoon, to complex movements such as the human gait. This multimodal and multitemporal approach outperforms previous state-of-the-art works in seven important datasets using two different protocols. In addition, we demonstrate that the use of our proposed set of kernels improves sensor-based HAR in another multi-kernel approach, the widely employed inception network.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Connectivity Modulations induced by Reaching Grasping Movements.\n \n \n \n\n\n \n Storti, S. F.; Galazzo, I. B.; Iacovelli, C.; Caliandro, P.; and Menegaz, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1392-1396, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553355,\n  author = {S. F. Storti and I. B. Galazzo and C. Iacovelli and P. Caliandro and G. Menegaz},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Connectivity Modulations induced by Reaching Grasping Movements},\n  year = {2018},\n  pages = {1392-1396},\n  abstract = {Functional neuroimaging enables the assessment of the brain function in both rest and active conditions. While traditional functional connectivity studies focus on determining distributed patterns of brain activity, the analysis of pair-wise correlations in the time series associated with brain regions allows a paradigm shift to graph theory, making available a whole set of parameters for the analysis of the functional network. Then, the study of the properties of the networks as well as of their modulations can be performed in the space of the so-identified features potentially leading to the detection of condition-specific (static or dynamic) fingerprints. Following this guideline, this study is a first attempt at using graph-based measures for capturing task-specific signatures of a reach&grasp movement. The weighted clustering coefficient (CW), characteristic path length (SW) and small-worldness (SW) were considered and performance was assessed against classical measures (event-related (de)synchronization). Neurophysiological data were collected through high-density EEG and a stereophotogrammetric system was used for capturing the onset and end of the movement. Though not reaching statistical significance, these preliminary results witness the modulation of the functional network due to reach&grasp and provide evidence in favour of the possibility of capturing such a modulation through graph-based properties. This would make it possible to shed light on the movement-induced reorganization of the network, which has a clear translational impact for the assessment of the recovery of patients after acute stroke.},\n  keywords = {biomechanics;electroencephalography;graph theory;medical signal processing;neurophysiology;time series;brain activity;pair-wise correlations;time series;brain regions;paradigm shift;functional network;graph-based measures;task-specific signatures;characteristic path length;function network;graph-based properties;stereophotogrammetric system;acute stroke;distributed patterns;traditional functional connectivity studies;active conditions;brain function;functional neuroimaging;reaching&grasping movements;connectivity modulations;movement-induced reorganization;high-density EEG;neurophysiological data;event-related desynchronization;event-related synchronization;small-worldness;weighted clustering coefficient;condition-specific fingerprints;graph theory;classical measures;Electroencephalography;Task analysis;Europe;Grasping;Modulation;Brain;High-density EEG;Brain connectivity;Motor function;Graph theory},\n  doi = {10.23919/EUSIPCO.2018.8553355},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Functional neuroimaging enables the assessment of the brain function in both rest and active conditions. While traditional functional connectivity studies focus on determining distributed patterns of brain activity, the analysis of pair-wise correlations in the time series associated with brain regions allows a paradigm shift to graph theory, making available a whole set of parameters for the analysis of the functional network. Then, the study of the properties of the networks as well as of their modulations can be performed in the space of the so-identified features potentially leading to the detection of condition-specific (static or dynamic) fingerprints. Following this guideline, this study is a first attempt at using graph-based measures for capturing task-specific signatures of a reach&grasp movement. The weighted clustering coefficient (CW), characteristic path length (SW) and small-worldness (SW) were considered and performance was assessed against classical measures (event-related (de)synchronization). Neurophysiological data were collected through high-density EEG and a stereophotogrammetric system was used for capturing the onset and end of the movement. Though not reaching statistical significance, these preliminary results witness the modulation of the functional network due to reach&grasp and provide evidence in favour of the possibility of capturing such a modulation through graph-based properties. This would make it possible to shed light on the movement-induced reorganization of the network, which has a clear translational impact for the assessment of the recovery of patients after acute stroke.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Network Utility Maximization for Adaptive Resource Allocation in DSL Systems.\n \n \n \n \n\n\n \n Verdyck, J.; Blondia, C.; and Moonen, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 787-791, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"NetworkPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553357,\n  author = {J. Verdyck and C. Blondia and M. Moonen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Network Utility Maximization for Adaptive Resource Allocation in DSL Systems},\n  year = {2018},\n  pages = {787-791},\n  abstract = {When signal coordination techniques cannot eliminate all crosstalk in a digital subscriber line (DSL) system, competition for data rate among different users is strong. In such scenarios, employing a static resource allocation fails to capitalize on the time-dependent nature of the traffic carried by the DSL network. An alternative approach is adaptive resource allocation, consisting of dividing time into slots of short duration and using a different resource allocation in each slot. A cross-layer scheduler then decides on the resource allocation for each time slot by solving a network utility maximization (NUM) problem. For many DSL systems, however, this NUM problem is non-convex and solving it is NP-hard. This paper presents a fast algorithm for finding a local solution to the NUM problem, which is referred to as NUM-DSB. The algorithm is able to handle many DSL deployment scenarios, and is applicable regardless of the utility function's properties.},\n  keywords = {communication complexity;concave programming;digital subscriber lines;resource allocation;telecommunication scheduling;telecommunication traffic;NUM problem;adaptive resource allocation;signal coordination techniques;digital subscriber line system;static resource allocation;time dependent nature;network utility maximization problem;DSL network systems;crosstalk elimination;cross-layer scheduler;nonconvex problem;NP-hard problem;DSB;Signal processing algorithms;Resource management;DSL;Approximation algorithms;Crosstalk;Nickel;Signal processing;DSL;Crosstalk;Cross layer design;Adaptive resource allocation;Minorize-maximization},\n  doi = {10.23919/EUSIPCO.2018.8553357},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437902.pdf},\n}\n\n
\n
\n\n\n
\n When signal coordination techniques cannot eliminate all crosstalk in a digital subscriber line (DSL) system, competition for data rate among different users is strong. In such scenarios, employing a static resource allocation fails to capitalize on the time-dependent nature of the traffic carried by the DSL network. An alternative approach is adaptive resource allocation, consisting of dividing time into slots of short duration and using a different resource allocation in each slot. A cross-layer scheduler then decides on the resource allocation for each time slot by solving a network utility maximization (NUM) problem. For many DSL systems, however, this NUM problem is non-convex and solving it is NP-hard. This paper presents a fast algorithm for finding a local solution to the NUM problem, which is referred to as NUM-DSB. The algorithm is able to handle many DSL deployment scenarios, and is applicable regardless of the utility function's properties.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Classification of Volcano-Seismic Signals with Bayesian Neural Networks.\n \n \n \n \n\n\n \n Bueno, A.; Titos, M.; García, L.; Álvarez, I.; Ibañez, J.; and Benítez, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2295-2299, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ClassificationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553358,\n  author = {A. Bueno and M. Titos and L. García and I. Álvarez and J. Ibañez and C. Benítez},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Classification of Volcano-Seismic Signals with Bayesian Neural Networks},\n  year = {2018},\n  pages = {2295-2299},\n  abstract = {Whilst recent advances in the field of artificial neural networks could be applied to monitor volcanoes, their direct application remains a challenge given the complex geodynamics involved and the size of available datasets. However, Bayesian Neural Networks (BNNs) are probabilistic models that could classify and provide uncertainty estimation for transient seismic sources, even under data scarcity conditions. This research focuses on practical applications of BNNs to classify volcano-seismic signals using two variational learning approaches: Bayes by backprop and Monte-Carlo dropout. We evaluate classification performance on seven classes of isolated events registered at “Volcán de Fuego”, Colima. Experimental results show an overall improvement for Monte-Carlo dropout approximation when compared to Bayes by backprop. Being at the intersection of Bayesian learning and geophysics, we demonstrate that BNNs provide uncertainty estimations when internal volcano-seismic sources change, which undoubtedly helps to enhance current early warning systems at volcanic observatories.},\n  keywords = {Bayes methods;belief networks;geophysical signal processing;geophysical techniques;learning (artificial intelligence);Monte Carlo methods;neural nets;seismology;signal classification;volcanology;uncertainty estimation;internal volcano-seismic sources change;volcano-seismic signals;Bayesian Neural Networks;artificial neural networks;complex geodynamics;BNNs;transient seismic sources;data scarcity conditions;practical applications;classification performance;Monte-Carlo dropout approximation;geophysics;Uncertainty;Volcanoes;Neural networks;Bayes methods;Mathematical model;Signal processing;Monitoring},\n  doi = {10.23919/EUSIPCO.2018.8553358},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437941.pdf},\n}\n\n
\n
\n\n\n
\n Whilst recent advances in the field of artificial neural networks could be applied to monitor volcanoes, their direct application remains a challenge given the complex geodynamics involved and the size of available datasets. However, Bayesian Neural Networks (BNNs) are probabilistic models that can classify and provide uncertainty estimation for transient seismic sources, even under data scarcity conditions. This research focuses on practical applications of BNNs to classify volcano-seismic signals using two variational learning approaches: Bayes by Backprop and Monte-Carlo dropout. We evaluate classification performance on seven classes of isolated events registered at “Volcán de Fuego”, Colima. Experimental results show an overall improvement for the Monte-Carlo dropout approximation when compared to Bayes by Backprop. Being at the intersection of Bayesian learning and geophysics, we demonstrate that BNNs provide uncertainty estimations when internal volcano-seismic sources change, which undoubtedly helps to enhance current early warning systems at volcanic observatories.\n
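Monte-Carlo dropout, one of the two variational approaches compared in this abstract, can be illustrated with a minimal sketch: dropout is kept active at test time and many stochastic forward passes are averaged, giving a predictive mean and a per-class uncertainty estimate. The toy two-layer network, its random (untrained) weights, and all parameter values below are illustrative assumptions, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy fixed two-layer network (weights are illustrative, not trained).
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(8, 3))   # 3 output classes

def forward(x, keep_prob=0.8):
    """One stochastic forward pass with dropout kept ON at test time."""
    h = np.maximum(x @ W1, 0.0)                      # ReLU hidden layer
    mask = rng.random(h.shape) < keep_prob           # Bernoulli dropout mask
    h = h * mask / keep_prob                         # inverted dropout scaling
    logits = h @ W2
    e = np.exp(logits - logits.max())
    return e / e.sum()                               # softmax class probabilities

x = rng.normal(size=16)                              # one input feature vector
samples = np.stack([forward(x) for _ in range(200)]) # T Monte-Carlo passes

mean_probs = samples.mean(axis=0)   # predictive class probabilities
std_probs = samples.std(axis=0)     # per-class uncertainty estimate
```

A wide `std_probs` flags inputs the network is unsure about, which is the behaviour the paper exploits when seismic sources change.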
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bitrate and Tandem Detection for the AMR-WB Codec with Application to Network Testing.\n \n \n \n \n\n\n \n Hübschen, T.; and Schmidt, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2519-2523, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553360,\n  author = {T. Hübschen and G. Schmidt},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Bitrate and Tandem Detection for the AMR-WB Codec with Application to Network Testing},\n  year = {2018},\n  pages = {2519-2523},\n  abstract = {In network testing, identifying the cause for an observed speech quality degradation is of special interest. Common speech codec related causes to be identified are the application of a low bitrate or the occurrence of transcoding or self-tandem. This paper presents two comprehensible types of signal features which enable a speech-quality-motivated bitrate detection for the AMR-WB codec. The first type of feature is based on codec linearity, while the second type exploits the different structure of the fixed codebook at each bitrate. With these underlying features, the bitrate detection is performed with high accuracy. Since the one feature gathers information on the last applied bitrate and the other on coding effects accumulated during the entire transmission, this paper, additionally, provides a method to extract information on the occurrence of self-tandem in the network-under-test.},\n  keywords = {speech codecs;speech coding;transcoding;network-under-test;tandem detection;AMR-WB codec;common speech codec;speech-quality-motivated bitrate detection;codec linearity;speech quality degradation;bitrate detection;Bit rate;Correlation;Speech coding;Feature extraction;Testing;Speech codecs;network testing;AMR-WB;bitrate;self-tandem;listening quality},\n  doi = {10.23919/EUSIPCO.2018.8553360},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438636.pdf},\n}\n\n
\n
\n\n\n
\n In network testing, identifying the cause of an observed speech quality degradation is of special interest. Common speech-codec-related causes to be identified are the application of a low bitrate or the occurrence of transcoding or self-tandem. This paper presents two comprehensible types of signal features which enable a speech-quality-motivated bitrate detection for the AMR-WB codec. The first type of feature is based on codec linearity, while the second type exploits the different structure of the fixed codebook at each bitrate. With these underlying features, the bitrate detection is performed with high accuracy. Since one feature gathers information on the most recently applied bitrate and the other on coding effects accumulated over the entire transmission, this paper additionally provides a method to extract information on the occurrence of self-tandem in the network-under-test.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n In Search for Improved Auxiliary Particle Filters.\n \n \n \n \n\n\n \n Elvira, V.; Martino, L.; Bugallo, M. F.; and Djurić, P. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1637-1641, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553361,\n  author = {V. Elvira and L. Martino and M. F. Bugallo and P. M. Djurić},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {In Search for Improved Auxiliary Particle Filters},\n  year = {2018},\n  pages = {1637-1641},\n  abstract = {In designing a particle filter, the most important task is choosing the importance function that can generate good particles. If the importance function, also called proposal, does a satisfactory job, the particles of the filter are placed in parts where the explored state space has high probability mass. Further, the weights of these particles are not too disparate in values. An important class of particle filtering that uses a clever approach to create good importance functions is known as auxiliary particle filtering. In this paper, we first analyze the approximations used for computing the particle weights of the standard auxiliary particle filter. We show that these approximations can be detrimental to the performance of the auxiliary particle filter. Further, we propose a more comprehensive evaluation of the weights, which leads to a much enhanced performance of the auxiliary particle filter. We also demonstrate the improvements with computer simulations.},\n  keywords = {approximation theory;particle filtering (numerical methods);probability;improved standard auxiliary particle filtering;probability;approximation theory;Proposals;Kernel;Probability density function;Monte Carlo methods;Standards;Approximation algorithms;Signal processing algorithms},\n  doi = {10.23919/EUSIPCO.2018.8553361},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437569.pdf},\n}\n\n
\n
\n\n\n
\n In designing a particle filter, the most important task is choosing an importance function that can generate good particles. If the importance function, also called the proposal, does a satisfactory job, the particles of the filter are placed in parts of the explored state space that have high probability mass. Further, the weights of these particles are not too disparate in value. An important class of particle filtering that uses a clever approach to create good importance functions is known as auxiliary particle filtering. In this paper, we first analyze the approximations used for computing the particle weights of the standard auxiliary particle filter. We show that these approximations can be detrimental to the performance of the auxiliary particle filter. Further, we propose a more comprehensive evaluation of the weights, which leads to a much enhanced performance of the auxiliary particle filter. We also demonstrate the improvements with computer simulations.\n
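The standard auxiliary particle filter that this paper analyzes can be sketched for a scalar linear-Gaussian model; the model, its parameter values, and the predicted-mean lookahead are illustrative assumptions. First-stage weights score each particle by the likelihood of its predicted mean, promising ancestors are resampled, and second-stage weights correct for that approximation.

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.9, 0.5, 0.5          # state coeff, process/obs noise std (assumed)
N, T = 500, 30

def lik(y, m, s):
    """Gaussian likelihood N(y; m, s^2)."""
    return np.exp(-0.5 * ((y - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Simulate a trajectory and noisy observations.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + q * rng.normal()
y = x_true + r * rng.normal(size=T)

x = rng.normal(size=N)            # particles
w = np.full(N, 1.0 / N)
est = []
for t in range(1, T):
    mu = a * x                                   # predicted particle means (lookahead)
    lam = w * lik(y[t], mu, r)                   # first-stage (auxiliary) weights
    lam /= lam.sum()
    idx = rng.choice(N, size=N, p=lam)           # resample promising ancestors
    x = a * x[idx] + q * rng.normal(size=N)      # propagate
    w = lik(y[t], x, r) / lik(y[t], mu[idx], r)  # second-stage correction
    w /= w.sum()
    est.append(np.sum(w * x))                    # posterior-mean estimate
```

The approximation the paper criticizes is precisely the use of `lik(y[t], mu, r)` — the likelihood at the predicted mean — as a stand-in for the predictive likelihood.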
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient and Stable Joint Eigenvalue Decomposition Based on Generalized Givens Rotations.\n \n \n \n \n\n\n \n Mesloub, A.; Belouchrani, A.; and Abed-Meraim, K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1247-1251, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553362,\n  author = {A. Mesloub and A. Belouchrani and K. Abed-Meraim},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient and Stable Joint Eigenvalue Decomposition Based on Generalized Givens Rotations},\n  year = {2018},\n  pages = {1247-1251},\n  abstract = {In the present paper, a new joint eigenvalue decomposition (JEVD) method is developed by considering generalized Givens rotations. This method deals with a set of square matrices sharing a same eigenstructure. Several Jacobi-like methods exist already for solving the aforementioned problem. The differences reside in the way of estimating the Shear rotation. Herein, we clarify these differences, highlight the weaknesses of the existing solutions and develop a new robust method named Efficient and Stable Joint eigenvalue Decomposition (ESJD). Simulation results are provided to highlight the effectiveness of the proposed technique especially in difficult scenario.},\n  keywords = {eigenvalues and eigenfunctions;iterative methods;matrix algebra;Stable Joint eigenvalue Decomposition;generalized Givens rotations;joint eigenvalue decomposition method;Jacobi-like methods;Shear rotation;robust method;ESJD;square matrices;Signal processing algorithms;Signal processing;Eigenvalues and eigenfunctions;Europe;Matrix decomposition;Symmetric matrices;Jacobian matrices;Joint EigenValue Decomposition (JEVD);Efficient and Stable Joint eigenvalue Decomposition algorithm (ESJD);generalized Givens rotations;exact JEVD;approximative JEVD},\n  doi = {10.23919/EUSIPCO.2018.8553362},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436837.pdf},\n}\n\n
\n
\n\n\n
\n In the present paper, a new joint eigenvalue decomposition (JEVD) method is developed by considering generalized Givens rotations. This method deals with a set of square matrices sharing the same eigenstructure. Several Jacobi-like methods already exist for solving the aforementioned problem. The differences reside in the way of estimating the shear rotation. Herein, we clarify these differences, highlight the weaknesses of the existing solutions and develop a new robust method named Efficient and Stable Joint eigenvalue Decomposition (ESJD). Simulation results are provided to highlight the effectiveness of the proposed technique, especially in difficult scenarios.\n
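The elementary building block of all such Jacobi-like methods is a Givens rotation chosen to annihilate one off-diagonal pair. A sketch for a single symmetric matrix follows — this is the classical symmetric Jacobi step, not the paper's generalized shear update, and the 4×4 test matrix is arbitrary.

```python
import numpy as np

def jacobi_rotation(A, p, q):
    """Return B = G.T @ A @ G, where the Givens rotation G is chosen so that
    B[p, q] = 0 (classical symmetric Jacobi step, cf. Golub & Van Loan)."""
    A = A.copy()
    if A[p, q] == 0.0:
        return A
    tau = (A[q, q] - A[p, p]) / (2.0 * A[p, q])          # cot(2*theta)
    t = np.sign(tau) / (abs(tau) + np.sqrt(1.0 + tau * tau)) if tau != 0 else 1.0
    c = 1.0 / np.sqrt(1.0 + t * t)                       # cos(theta)
    s = t * c                                            # sin(theta)
    G = np.eye(A.shape[0])
    G[p, p] = G[q, q] = c
    G[p, q] = s
    G[q, p] = -s
    return G.T @ A @ G

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
A = M + M.T                      # arbitrary symmetric test matrix
B = jacobi_rotation(A, 0, 2)     # B[0, 2] is driven to zero
```

Because G is orthogonal, the similarity transform preserves the eigenvalues; sweeping over all (p, q) pairs drives the matrix toward diagonal form, which is the iteration that JEVD methods jointly apply to the whole matrix set.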
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Energy Balancing for Robotic Aided Clustered Wireless Sensor Networks Using Mobility Diversity Algorithms.\n \n \n \n \n\n\n \n Licea, D. B.; Nurellari, E.; and Ghogho, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1815-1819, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553363,\n  author = {D. B. Licea and E. Nurellari and M. Ghogho},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Energy Balancing for Robotic Aided Clustered Wireless Sensor Networks Using Mobility Diversity Algorithms},\n  year = {2018},\n  pages = {1815-1819},\n  abstract = {We consider the problem of energy balancing in a clustered wireless sensor network (WSN) deployed randomly in a large field and aided by a mobile robot (MR). The sensor nodes (SNs) are tasked with monitoring a region of interest (ROI) and reporting their test statistics to the cluster heads (CHs), which they subsequently report to the fusion center (FC) over a wireless fading channel. To maximize the lifetime of the WSN, the MR is deployed to act as an adaptive relay between a subset of the CHs and the FC. To achieve this we develop a multiple-link mobility diversity algorithm (MDA) executed by the MR that will allow to compensate simultaneously for the small-scale fading at the established wireless links (i.e., the MR-to-FC as well as various CH-to-MR communication links). Simulation results show that the proposed MR aided technique is able to significantly reduce the transmission power required and thus extend the operational lifetime of the WSN. 
We also show how the effect of small-scale fading at various wireless links is mitigated by using the proposed multiple -link MDA executed by a MR equipped with a single antenna.},\n  keywords = {fading channels;mobile robots;wireless sensor networks;wireless links;mobility diversity algorithms;energy balancing;multiple -link MDA;MR aided technique;CH-to-MR communication links;small-scale fading;multiple-link mobility diversity algorithm;adaptive relay;wireless fading channel;cluster heads;sensor nodes;mobile robot;WSN;clustered wireless sensor network;Wireless sensor networks;Fading channels;Robot sensing systems;Wireless communication;Signal processing algorithms;Relays;Shadow mapping;Wireless sensor network;cluster;mobile robot;fading;mobility diversity},\n  doi = {10.23919/EUSIPCO.2018.8553363},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436726.pdf},\n}\n\n
\n
\n\n\n
\n We consider the problem of energy balancing in a clustered wireless sensor network (WSN) deployed randomly in a large field and aided by a mobile robot (MR). The sensor nodes (SNs) are tasked with monitoring a region of interest (ROI) and reporting their test statistics to the cluster heads (CHs), which subsequently report to the fusion center (FC) over a wireless fading channel. To maximize the lifetime of the WSN, the MR is deployed to act as an adaptive relay between a subset of the CHs and the FC. To achieve this, we develop a multiple-link mobility diversity algorithm (MDA) executed by the MR that simultaneously compensates for the small-scale fading at the established wireless links (i.e., the MR-to-FC as well as the various CH-to-MR communication links). Simulation results show that the proposed MR-aided technique is able to significantly reduce the required transmission power and thus extend the operational lifetime of the WSN. We also show how the effect of small-scale fading at the various wireless links is mitigated by using the proposed multiple-link MDA executed by an MR equipped with a single antenna.\n
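The core idea behind a mobility diversity algorithm can be sketched very simply: the robot probes the channel at a few candidate positions (spaced beyond the fading coherence distance) and settles at the one with the largest gain. The i.i.d. Rayleigh fading draws per position, the number of candidate stops, and the power model are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

n_positions = 8
# Assumed i.i.d. Rayleigh small-scale fading: one channel-gain sample per
# candidate stopping point of the robot.
gains = rng.rayleigh(scale=1.0, size=n_positions)

best = int(np.argmax(gains))     # the robot stops where the channel is best

# For a fixed receive-SNR target the required transmit power scales as
# 1/gain^2, so picking the best of n positions cuts the expected power draw.
power_saving_factor = (gains[best] / gains.mean()) ** 2
```

The multiple-link version in the paper must trade off several such gains at once (MR-to-FC and each CH-to-MR link) rather than a single one.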
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Classification Between Abnormal and Normal Respiration Through Observation Rate of Heart Sounds Within Lung Sounds.\n \n \n \n \n\n\n \n Ohkawa, K.; Yamashita, M.; and Matsunaga, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1142-1146, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553364,\n  author = {K. Ohkawa and M. Yamashita and S. Matsunaga},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Classification Between Abnormal and Normal Respiration Through Observation Rate of Heart Sounds Within Lung Sounds},\n  year = {2018},\n  pages = {1142-1146},\n  abstract = {This paper proposes an effective classification method to differentiate between normal and abnormal lung sounds, which takes into account the detection level of heart sounds. Abnormal lung sounds frequently contain adventitious sounds; however, misclassification between heart sounds and adventitious sounds makes it difficult to achieve a high level of accuracy. Furthermore, the classification performance of conventional methods, which use the detection function of heart sounds, becomes worse for those lung sounds which contain a low level of heart sounds. To address this problem, our proposed method changes the classification method according to the detection rate of heart sounds, whereby if the rate was high, the heart-sound models in the HMM -based classification method were used. In addition to spectral information, temporal information of heart sounds and adventitious sounds were also used to obtain the rate more precisely. When using lung sounds from three auscultation points, the proposed method achieved a higher classification performance of 89.90% (between normal and abnormal respiration) compared to 88.7% for the conventional method, which used the detection function of heart sounds. 
Our approach to the classification of healthy and unhealthy subjects also achieved a higher classification rate of 86.6%, compared to 83.1 % when using the conventional method having the detection function of heart sounds.},\n  keywords = {cardiology;hidden Markov models;lung;medical signal processing;pneumodynamics;signal classification;adventitious sounds;classification method;heart-sound models;auscultation points;temporal information;spectral information;HMM-based classification method;detection rate;detection level;effective classification method;abnormal respiration;normal respiration;abnormal lung sounds;normal lung sounds;Heart;Hidden Markov models;Acoustics;Lung;Europe;Signal processing;Probability density function;lung sound;HMM;classification;heart sound;adventitious sound},\n  doi = {10.23919/EUSIPCO.2018.8553364},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437892.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes an effective classification method to differentiate between normal and abnormal lung sounds, which takes into account the detection level of heart sounds. Abnormal lung sounds frequently contain adventitious sounds; however, misclassification between heart sounds and adventitious sounds makes it difficult to achieve a high level of accuracy. Furthermore, the classification performance of conventional methods, which use the detection function of heart sounds, becomes worse for those lung sounds which contain a low level of heart sounds. To address this problem, our proposed method changes the classification method according to the detection rate of heart sounds, whereby if the rate is high, the heart-sound models in the HMM-based classification method are used. In addition to spectral information, temporal information of heart sounds and adventitious sounds is also used to obtain the rate more precisely. When using lung sounds from three auscultation points, the proposed method achieved a higher classification performance of 89.90% (between normal and abnormal respiration) compared to 88.7% for the conventional method, which used the detection function of heart sounds. Our approach to the classification of healthy and unhealthy subjects also achieved a higher classification rate of 86.6%, compared to 83.1% when using the conventional method with the detection function of heart sounds.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n New Results on LMVDR Estimators for LDSS Models.\n \n \n \n \n\n\n \n Chaumette, E.; Vincent, F.; Priot, B.; Pages, G.; and Dion, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1332-1336, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553365,\n  author = {E. Chaumette and F. Vincent and B. Priot and G. Pages and A. Dion},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {New Results on LMVDR Estimators for LDSS Models},\n  year = {2018},\n  pages = {1332-1336},\n  abstract = {In the context of linear discrete state-space (LDSS) models, we generalize a result lately introduced in the restricted case of invertible state matrices, namely that the linear minimum variance distortionless response (LMVDR) filter shares exactly the same recursion as the linear least mean squares (LLMS) filter, aka the Kalman filter (KF), except for the initialization. An immediate benefit is the introduction of LMVDR fixed-point and fixed-lag smoothers (and possibly other smoothers or predictors), which has not been possible so far. This result is particularly noteworthy given the fact that, although LMVDR estimators are sub-optimal in mean-squared error sense, they are infinite impulse response distortionless estimators which do not depend on the prior knowledge on the mean and covariance matrix of the initial state. Thus the LMVDR estimators may outperform the usual LLMS estimators in case of misspecification of the prior knowledge on the initial state. Seen from this perspective, we also show that the LMVDR filter can be regarded as a generalization of the information filter form of the KF. 
On another note, LMVDR estimators may also allow to derive unexpected results, as highlighted with the LMVDR fixed-point smoother.},\n  keywords = {covariance matrices;estimation theory;filtering theory;Kalman filters;least mean squares methods;mean square error methods;smoothing methods;state-space methods;linear minimum variance distortionless response filter;unexpected results;information filter form;LMVDR filter;usual LLMS estimators;infinite impulse response distortionless estimators;mean-squared error sense;fixed-lag smoothers;LMVDR fixed-point;Kalman filter;linear least mean squares filter;invertible state matrices;linear discrete state-space models;LDSS models;LMVDR estimators;Covariance matrices;Mathematical model;Time measurement;Lead;Minimization;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553365},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570432811.pdf},\n}\n\n
\n
\n\n\n
\n In the context of linear discrete state-space (LDSS) models, we generalize a result recently introduced in the restricted case of invertible state matrices, namely that the linear minimum variance distortionless response (LMVDR) filter shares exactly the same recursion as the linear least mean squares (LLMS) filter, aka the Kalman filter (KF), except for the initialization. An immediate benefit is the introduction of LMVDR fixed-point and fixed-lag smoothers (and possibly other smoothers or predictors), which has not been possible so far. This result is particularly noteworthy given the fact that, although LMVDR estimators are sub-optimal in the mean-squared-error sense, they are infinite impulse response distortionless estimators which do not depend on prior knowledge of the mean and covariance matrix of the initial state. Thus the LMVDR estimators may outperform the usual LLMS estimators in case of misspecification of the prior knowledge on the initial state. Seen from this perspective, we also show that the LMVDR filter can be regarded as a generalization of the information filter form of the KF. On another note, LMVDR estimators may also allow one to derive unexpected results, as highlighted with the LMVDR fixed-point smoother.\n
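The claim that the LMVDR filter shares the KF recursion except for the initialization can be made concrete with a scalar sketch; the model and all numbers below are illustrative assumptions. Only the starting pair (x0, P0) differs: the LLMS/KF start uses the prior mean and covariance of the initial state, while a prior-free (here, diffuse) start does not rely on that prior, and both runs use the identical predict/update recursion.

```python
import numpy as np

def kf_recursion(x0, P0, a, q, r, ys):
    """Standard scalar recursion for x_t = a x_{t-1} + w_t, y_t = x_t + v_t,
    with process variance q and observation variance r. Only (x0, P0)
    distinguishes the two estimators compared below."""
    x, P, out = x0, P0, []
    for y in ys:
        x_pred = a * x
        P_pred = a * P * a + q         # predict
        K = P_pred / (P_pred + r)      # gain
        x = x_pred + K * (y - x_pred)  # update
        P = (1 - K) * P_pred
        out.append(x)
    return np.array(out), P

rng = np.random.default_rng(4)
a, q, r = 0.95, 0.1, 0.2
xs = [0.0]
for _ in range(50):
    xs.append(a * xs[-1] + np.sqrt(q) * rng.normal())
ys = np.array(xs[1:]) + np.sqrt(r) * rng.normal(size=50)

est_prior, _ = kf_recursion(0.0, 1.0, a, q, r, ys)     # KF: trusts the prior
est_diffuse, _ = kf_recursion(ys[0], 1e6, a, q, r, ys) # diffuse start: ignores it
```

After a short transient the two tracks coincide, which illustrates why only the initialization, not the recursion, separates the two estimator families.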
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Iterative Reconstruction of Spectrally Sparse Signals from Level Crossings.\n \n \n \n \n\n\n \n Mashhadi, M. B.; Zayyani, H.; Gazor, S.; and Marvasti, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 435-439, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553367,\n  author = {M. B. Mashhadi and H. Zayyani and S. Gazor and F. Marvasti},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Iterative Reconstruction of Spectrally Sparse Signals from Level Crossings},\n  year = {2018},\n  pages = {435-439},\n  abstract = {This paper considers the problem of sparse signal reconstruction from the timing of its Level Crossings (LC)s. We formulate the sparse Zero Crossing (ZC) reconstruction problem in terms of a single L-blt Compressive Sensing (CS) model. We also extend the Smoothed LO (SLO) sparse reconstruction algorithm to the I-bit CS framework and propose the Binary SLO (BSLO) algorithm for iterative reconstruction of the sparse signal from ZCs in cases where the number of sparse coefficients is not known to the reconstruction algorithm a priori. Similar to the ZC case, we propose a system of simultaneously constrained signed-CS problems to reconstruct a sparse signal from its Level Crossings (LC)s and modify both the Binary Iterative Hard Thresholding (BIHT) and BSLO algorithms to solve this problem. Simulation results demonstrate superior performance of the proposed LC reconstruction techniques in comparison with the literature.},\n  keywords = {compressed sensing;iterative methods;signal reconstruction;Smoothed LO sparse reconstruction algorithm;I-bit CS framework;Binary SLO algorithm;iterative reconstruction;sparse coefficients;ZC case;signed-CS problems;Level Crossings;LC reconstruction techniques;spectrally sparse signals;sparse signal reconstruction;sparse Zero Crossing reconstruction problem;single L-blt Compressive Sensing model;Signal processing algorithms;Reconstruction algorithms;Simulation;Approximation algorithms;Europe;Signal processing;Compressed sensing},\n  doi = {10.23919/EUSIPCO.2018.8553367},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570429397.pdf},\n}\n\n
\n
\n\n\n
\n This paper considers the problem of sparse signal reconstruction from the timing of its Level Crossings (LCs). We formulate the sparse Zero Crossing (ZC) reconstruction problem in terms of a single 1-bit Compressive Sensing (CS) model. We also extend the Smoothed L0 (SL0) sparse reconstruction algorithm to the 1-bit CS framework and propose the Binary SL0 (BSL0) algorithm for iterative reconstruction of the sparse signal from ZCs in cases where the number of sparse coefficients is not known to the reconstruction algorithm a priori. Similar to the ZC case, we propose a system of simultaneously constrained signed-CS problems to reconstruct a sparse signal from its Level Crossings and modify both the Binary Iterative Hard Thresholding (BIHT) and BSL0 algorithms to solve this problem. Simulation results demonstrate superior performance of the proposed LC reconstruction techniques in comparison with the literature.\n
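The BIHT baseline that the paper modifies can be sketched in a few lines; the problem sizes, the step size, and the Gaussian sensing matrix are illustrative assumptions. Each iteration takes a gradient-like step driven by the sign mismatches between the 1-bit measurements and the current estimate, then keeps only the K largest coefficients.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m, K = 50, 300, 3              # illustrative problem sizes

x = np.zeros(n)                   # K-sparse ground truth
x[rng.choice(n, K, replace=False)] = rng.normal(size=K)
x /= np.linalg.norm(x)            # amplitude is lost in 1-bit measurements

Phi = rng.normal(size=(m, n))
y = np.sign(Phi @ x)              # 1-bit (sign-only) measurements

def hard_threshold(v, K):
    """Keep the K largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-K:]
    out[idx] = v[idx]
    return out

a = np.zeros(n); a[0] = 1.0       # arbitrary K-sparse initialization
tau = 1.0 / m                     # assumed step size
for _ in range(100):
    a = hard_threshold(a + tau * Phi.T @ (y - np.sign(Phi @ a)), K)
a /= np.linalg.norm(a)            # reconstruction, recoverable only up to scale
```

In this heavily over-measured regime (m ≫ n) the estimate aligns closely with the true direction, which is the baseline behaviour the paper's LC-constrained variants improve upon.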
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Road Surface Crack Detection using a Light Field Camera.\n \n \n \n \n\n\n \n Fernandes, D.; Correia, P. L.; and Oliveira, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2135-2139, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553368,\n  author = {D. Fernandes and P. L. Correia and H. Oliveira},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Road Surface Crack Detection using a Light Field Camera},\n  year = {2018},\n  pages = {2135-2139},\n  abstract = {During traditional road surveys, inspectors capture images of pavement surface using cameras that produce 2D images, which can then be automatically processed to get a road surface condition assessment. This paper proposes a novel crack detection system that uses a light field imaging sensor, notably the Lytro Illum camera, instead of a conventional 2D camera, to capture road surface light field images. Light field images capture the light rays originating from different directions, thus providing a richer representation of the observed scene. The proposed system explores the disparity information, which can be computed from the light field, to obtain information about cracks observable in the pavement images. A simple processing system is considered, to show the potential use of this type of sensors for crack detection. Encouraging experimental crack detection results are presented based on a set of road pavement light field images captured over different pavement surface textures. 
A performance comparison with a state-of-the-art 2D image crack detection system is included, confirming the potential of using this type of sensors.},\n  keywords = {cameras;crack detection;image representation;image sensors;optical sensors;roads;surface topography measurement;2D camera;road surface light field imaging sensor;road pavement light field imaging;2D image crack detection system;pavement surface textures;image representation;Lytro Illum camera;road surface condition assessment;light field camera;road surface crack detection;Two dimensional displays;Roads;Cameras;Surface cracks;Surface treatment;Light field imaging;road crack detection;image processing},\n  doi = {10.23919/EUSIPCO.2018.8553368},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439750.pdf},\n}\n\n
\n
\n\n\n
\n During traditional road surveys, inspectors capture images of the pavement surface using cameras that produce 2D images, which can then be automatically processed to obtain a road surface condition assessment. This paper proposes a novel crack detection system that uses a light field imaging sensor, notably the Lytro Illum camera, instead of a conventional 2D camera, to capture road surface light field images. Light field images capture the light rays originating from different directions, thus providing a richer representation of the observed scene. The proposed system explores the disparity information, which can be computed from the light field, to obtain information about cracks observable in the pavement images. A simple processing system is considered, to show the potential use of this type of sensor for crack detection. Encouraging experimental crack detection results are presented based on a set of road pavement light field images captured over different pavement surface textures. A performance comparison with a state-of-the-art 2D image crack detection system is included, confirming the potential of this type of sensor.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast Convergence for Stochastic and Distributed Gradient Descent in the Interpolation Limit.\n \n \n \n \n\n\n \n Mitra, P. P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1890-1894, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553369,\n  author = {P. P. Mitra},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Fast Convergence for Stochastic and Distributed Gradient Descent in the Interpolation Limit},\n  year = {2018},\n  pages = {1890-1894},\n  abstract = {Modern supervised learning techniques, particularly those using deep nets, involve fitting high dimensional labelled data sets with functions containing very large numbers of parameters. Much of this work is empirical. Interesting phenomena have been observed that require theoretical explanations; however the non-convexity of the loss functions complicates the analysis. Recently it has been proposed that the success of these techniques rests partly in the effectiveness of the simple stochastic gradient descent algorithm in the so called interpolation limit in which all labels are fit perfectly. This analysis is made possible since the SGD algorithm reduces to a stochastic linear system near the interpolating minimum of the loss function. Here we exploit this insight by presenting and analyzing a new distributed algorithm for gradient descent, also in the interpolating limit. The distributed SGD algorithm presented in the paper corresponds to gradient descent applied to a simple penalized distributed loss function, L(w1, ..., wn) = Σili(wi) + μ Σ<;i,j><;/i,j> |wi - wj|2. Here each node holds only one sample, and its own parameter vector. The notation <; i, j > denotes edges of a connected graph defining the communication links between nodes. It is shown that this distributed algorithm converges linearly (ie the error reduces exponentially with iteration number), with a rate 1-η/nλmin(H) <; R <; 1 where λmin(H) is the smallest nonzero eigenvalue of the sample covariance or the Hessian H. 
In contrast with previous usage of similar penalty functions to enforce consensus between nodes, in the interpolating limit it is not required to take the penalty parameter to infinity for consensus to occur. The analysis further reinforces the utility of the interpolation limit in the theoretical treatment of modern machine learning algorithms.},\n  keywords = {distributed algorithms;eigenvalues and eigenfunctions;gradient methods;graph theory;Hessian matrices;interpolation;learning (artificial intelligence);stochastic processes;fast convergence;deep nets;high dimensional labelled data sets;stochastic linear system;distributed SGD algorithm;distributed gradient descent algorithm;stochastic gradient descent algorithm;interpolation limit;penalty functions;machine learning algorithms;penalized distributed loss function;graph;parameter vector;eigenvalue;communication links;supervised learning techniques;Interpolation;Signal processing algorithms;Convergence;Parallel processing;Distributed algorithms;Null space;Distributed databases;Interpolating limit;Overfitting;Stochastic Gradient Descent;Distributed Gradient Descent},\n  doi = {10.23919/EUSIPCO.2018.8553369},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437817.pdf},\n}\n\n
\n
\n\n\n
\n Modern supervised learning techniques, particularly those using deep nets, involve fitting high-dimensional labelled data sets with functions containing very large numbers of parameters. Much of this work is empirical. Interesting phenomena have been observed that require theoretical explanations; however, the non-convexity of the loss functions complicates the analysis. Recently it has been proposed that the success of these techniques rests partly on the effectiveness of the simple stochastic gradient descent (SGD) algorithm in the so-called interpolation limit, in which all labels are fit perfectly. This analysis is made possible since the SGD algorithm reduces to a stochastic linear system near the interpolating minimum of the loss function. Here we exploit this insight by presenting and analyzing a new distributed algorithm for gradient descent, also in the interpolating limit. The distributed SGD algorithm presented in the paper corresponds to gradient descent applied to a simple penalized distributed loss function, L(w1, ..., wn) = Σi li(wi) + μ Σ<i,j> |wi - wj|². Here each node holds only one sample and its own parameter vector. The notation <i, j> denotes edges of a connected graph defining the communication links between nodes. It is shown that this distributed algorithm converges linearly (i.e., the error decreases exponentially with iteration number), with a rate 1 - (η/n)λmin(H) < R < 1, where λmin(H) is the smallest nonzero eigenvalue of the sample covariance or the Hessian H. In contrast with previous uses of similar penalty functions to enforce consensus between nodes, in the interpolating limit it is not required to take the penalty parameter to infinity for consensus to occur. The analysis further reinforces the utility of the interpolation limit in the theoretical treatment of modern machine learning algorithms.\n
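The penalized distributed loss above can be sketched numerically. The following is a minimal illustration, not the paper's implementation: the quadratic per-node losses, ring communication graph, penalty μ, and step size η are all assumptions. Each node holds one sample of a linear model in the interpolation limit, and plain gradient descent on L drives both the fit and the consensus terms to zero at a finite penalty.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 3                       # n nodes, one d-dimensional sample each
X = rng.normal(size=(n, d))       # node i holds sample X[i]
y = X @ rng.normal(size=d)        # interpolation limit: a perfect fit exists

edges = [(i, (i + 1) % n) for i in range(n)]   # assumed ring communication graph
mu, eta = 1.0, 0.02               # finite penalty and step size (assumptions)
W = np.zeros((n, d))              # one parameter vector w_i per node

def grad(W):
    # gradient of L = sum_i (x_i^T w_i - y_i)^2 + mu * sum_{<i,j>} |w_i - w_j|^2
    g = 2 * X * ((W * X).sum(1) - y)[:, None]
    for i, j in edges:
        g[i] += 2 * mu * (W[i] - W[j])
        g[j] -= 2 * mu * (W[i] - W[j])
    return g

errs = [np.abs((W * X).sum(1) - y).max()]
for _ in range(50000):
    W -= eta * grad(W)
    errs.append(np.abs((W * X).sum(1) - y).max())

assert errs[-1] < 1e-4                       # all labels fit (interpolation)
assert np.allclose(W, W.mean(0), atol=1e-2)  # consensus without mu -> infinity
```

Tracking `errs` over iterations shows the exponential (linear-rate) error decay the abstract describes; the contraction factor is governed by the smallest Hessian eigenvalue.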
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Emotion Estimation in Crowds: The Interplay of Motivations and Expectations in Individual Emotions.\n \n \n \n \n\n\n \n Urizar, O. J.; Marcenaro, L.; Regazzoni, C. S.; Barakova, E. I.; and Rauterberg, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1092-1096, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EmotionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553370,\n  author = {O. J. Urizar and L. Marcenaro and C. S. Regazzoni and E. I. Barakova and M. Rauterberg},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Emotion Estimation in Crowds: The Interplay of Motivations and Expectations in Individual Emotions},\n  year = {2018},\n  pages = {1092-1096},\n  abstract = {Providing an estimation of the emotional states of individuals increases the insights on the state of a crowd beyond simple normal/abnormal situations or behaviour classification. Methods intended for identifying emotions in individuals are mainly based on facial and body expressions, or even physiological measurements which are not suited for crowded environments as the available information in crowds is usually limited to that provided by surveillance cameras where the face and body of pedestrians can often suffer from occlusion. This work proposes an approach for analysing walking behaviour and exploiting the interplay of motivations and expectations in the emotions of pedestrians. Real-world data is used to test the prediction of motivations and annotations on the emotional state of pedestrians are added to evaluate the proposed method's capability to estimate emotional states. 
The conducted experiments show significant improvements over previous methods for estimating motivations and consistent results to the estimation of emotions.},\n  keywords = {emotion recognition;pedestrians;traffic engineering computing;pedestrian body;simple normal-abnormal situations;pedestrian face;walking behaviour analysis;crowded environments;facial body expressions;behaviour classification;emotional state;individual emotions;emotion estimation;Legged locomotion;Hidden Markov models;Estimation;Trajectory;Europe;Predictive models;Signal processing;pedestrian emotions;emotion estimation;affective models;crowd emotions},\n  doi = {10.23919/EUSIPCO.2018.8553370},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437320.pdf},\n}\n\n
\n
\n\n\n
\n Providing an estimation of the emotional states of individuals increases insight into the state of a crowd beyond simple normal/abnormal situation or behaviour classification. Methods intended for identifying emotions in individuals are mainly based on facial and body expressions, or even physiological measurements, which are not suited for crowded environments, as the available information in crowds is usually limited to that provided by surveillance cameras, where the face and body of pedestrians often suffer from occlusion. This work proposes an approach for analysing walking behaviour and exploiting the interplay of motivations and expectations in the emotions of pedestrians. Real-world data is used to test the prediction of motivations, and annotations of the emotional state of pedestrians are added to evaluate the proposed method's capability to estimate emotional states. The conducted experiments show significant improvements over previous methods for estimating motivations and consistent results for the estimation of emotions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Injecting Image Priors into Learnable Compressive Subsampling.\n \n \n \n \n\n\n \n Ferrari, M.; Taran, O.; Holotyak, T.; Egiazarian, K.; and Voloshynovskiy, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1735-1739, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"InjectingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553371,\n  author = {M. Ferrari and O. Taran and T. Holotyak and K. Egiazarian and S. Voloshynovskiy},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Injecting Image Priors into Learnable Compressive Subsampling},\n  year = {2018},\n  pages = {1735-1739},\n  abstract = {Many medical (computerized tomography, magnetic resonance imaging) and astronomy imaging problems (Square Kilometre Array), spectroscopy and Fourier optics attempt at reconstructing high quality images in the pixel domain from a limited number of samples in the frequency domain. In this paper, we extend the problem formulation of learnable compressive subsampling [1] that focuses on the learning of the best sampling operator in the Fourier domain adapted to spectral properties of training set of images. We formulate the problem as a reconstruction from a finite number of sparse samples with a prior learned from the external dataset or learned on-fly for the image to be reconstructed. The proposed methods are tested on diverse datasets covering facial images, medical and multi-band astronomical applications using the mean square error and SSIM as a perceptual measure of reconstruction. 
The obtained results demonstrate some interesting properties of proposed methods that might be of interest for future research and extensions.},\n  keywords = {astronomical image processing;computerised tomography;face recognition;gamma-ray bursts;image reconstruction;image sampling;learning (artificial intelligence);mean square error methods;frequency domain;pixel domain;high quality images;Square Kilometre Array;magnetic resonance imaging;computerized tomography;learnable compressive subsampling;mean square error;facial images;external dataset;sparse samples;finite number;spectral properties;Fourier domain;sampling operator;Training;Image reconstruction;Transforms;Image coding;Europe;Signal processing;Imaging;Compressive sensing;learnable compressive subsampling;support learning;reconstruction;deep priors},\n  doi = {10.23919/EUSIPCO.2018.8553371},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437357.pdf},\n}\n\n
\n
\n\n\n
\n Many medical (computerized tomography, magnetic resonance imaging) and astronomy imaging problems (Square Kilometre Array), as well as spectroscopy and Fourier optics, attempt to reconstruct high-quality images in the pixel domain from a limited number of samples in the frequency domain. In this paper, we extend the problem formulation of learnable compressive subsampling [1], which focuses on learning the best sampling operator in the Fourier domain adapted to the spectral properties of a training set of images. We formulate the problem as a reconstruction from a finite number of sparse samples with a prior learned from an external dataset or learned on the fly for the image to be reconstructed. The proposed methods are tested on diverse datasets covering facial images and medical and multi-band astronomical applications, using the mean square error and SSIM as a perceptual measure of reconstruction. The obtained results demonstrate some interesting properties of the proposed methods that might be of interest for future research and extensions.\n
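A toy numpy sketch of the underlying idea (not the paper's method): learn a Fourier-domain sampling support from the average spectrum of a training set, observe only those coefficients, and reconstruct by inverse FFT. The smooth synthetic images, the sampling budget k, and the energy-ranking rule are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def make_image(n=32):
    # toy "natural" image: smooth structure plus mild noise,
    # so spectral energy concentrates in a few frequencies
    t = np.linspace(0, 1, n, endpoint=False)
    xx, yy = np.meshgrid(t, t)
    return np.sin(2 * np.pi * xx) + np.cos(4 * np.pi * yy) + 0.1 * rng.normal(size=(n, n))

train = [make_image() for _ in range(20)]

# learn the sampling operator: keep the k frequencies with the
# highest average energy over the training set
avg_energy = np.mean([np.abs(np.fft.fft2(im)) ** 2 for im in train], axis=0)
k = 64                                             # sampling budget (64 of 1024 coefficients)
support = np.argsort(avg_energy.ravel())[-k:]

def reconstruct(im, support):
    F = np.fft.fft2(im).ravel()
    F_sub = np.zeros_like(F)
    F_sub[support] = F[support]                    # observe only the selected frequencies
    return np.real(np.fft.ifft2(F_sub.reshape(im.shape)))

test_im = make_image()
mse = lambda a, b: np.mean((a - b) ** 2)
mse_learned = mse(test_im, reconstruct(test_im, support))
mse_random = mse(test_im, reconstruct(test_im, rng.choice(test_im.size, k, replace=False)))
assert mse_learned < mse_random                    # adapted support beats blind sampling
```

The paper goes further by learning priors for the reconstruction itself; this sketch only shows why a support adapted to the training spectrum outperforms an uninformed one.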
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Spectral MAB for Unknown Graph Processes.\n \n \n \n \n\n\n \n Toni, L.; and Frossard, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 116-120, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SpectralPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553372,\n  author = {L. Toni and P. Frossard},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Spectral MAB for Unknown Graph Processes},\n  year = {2018},\n  pages = {116-120},\n  abstract = {In this work, we study graph-based multi-arms bandit (MAB) problems aimed at optimizing actions on irregular and high-dimensional graphs. More formally, we consider a decision-maker that takes sequential actions over time and observes the experienced reward, defined as a function of a sparse graph signal. The goal is to optimize the action policy, which maximizes the reward experienced over time. The main challenges are represented by the system uncertainty (i.e., unknown parameters of the sparse graph signal model) and the high-dimensional search space. The uncertainty can be faced by online learning strategies that infer the system dynamics while taking the appropriate actions. However, the high-dimensionality makes online learning strategies highly inefficient. To overcome this limitation, we propose a novel graph-based MAB algorithm, which is data-efficient also in high-dimensional systems. The key intuition is to infer the nature of the graph processes by learning in the graph-spectral domain, and exploit this knowledge while optimizing the actions. 
In particular, we model the graph signal with a sparse dictionary-based representation and we propose an online sequential decision strategy that learns the parameters of the graph processes while optimizing the action strategy.},\n  keywords = {decision making;graph theory;learning (artificial intelligence);decision making;graph-based MAB algorithm;high-dimensional search space;sparse graph signal model;action policy;high-dimensional graphs;multiarms bandit problems;unknown graph processes;spectral MAB;action strategy;online sequential decision strategy;sparse dictionary-based representation;graph-spectral domain;online learning strategies;Signal processing algorithms;Kernel;Signal processing;Social network services;Optimization;Europe;Machine learning},\n  doi = {10.23919/EUSIPCO.2018.8553372},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439153.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we study graph-based multi-armed bandit (MAB) problems aimed at optimizing actions on irregular and high-dimensional graphs. More formally, we consider a decision-maker that takes sequential actions over time and observes the experienced reward, defined as a function of a sparse graph signal. The goal is to optimize the action policy, which maximizes the reward experienced over time. The main challenges are the system uncertainty (i.e., unknown parameters of the sparse graph signal model) and the high-dimensional search space. The uncertainty can be addressed by online learning strategies that infer the system dynamics while taking the appropriate actions. However, the high dimensionality makes online learning strategies highly inefficient. To overcome this limitation, we propose a novel graph-based MAB algorithm, which is data-efficient even in high-dimensional systems. The key intuition is to infer the nature of the graph processes by learning in the graph-spectral domain, and to exploit this knowledge while optimizing the actions. In particular, we model the graph signal with a sparse dictionary-based representation, and we propose an online sequential decision strategy that learns the parameters of the graph processes while optimizing the action strategy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Light Field Image Coding with Depth Estimation and View Synthesis.\n \n \n \n \n\n\n \n Senoh, T.; Yamamoto, K.; Tetsutani, N.; and Yasuda, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1840-1844, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553373,\n  author = {T. Senoh and K. Yamamoto and N. Tetsutani and H. Yasuda},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Light Field Image Coding with Depth Estimation and View Synthesis},\n  year = {2018},\n  pages = {1840-1844},\n  abstract = {Efficient light field image coding method is proposed based on conversion to multi-view image, depth estimation, multi-view coding, and view synthesis. Firstly, compatibility of light field image and multi-view image is discussed, and then depth estimation method based on texture-edge-aware horizontal and vertical view matching and depth-smoothing is explained. Secondly, a view-synthesis method from up to four reference views is proposed, which adopts depth-base occlusion hole inpainting. Finally, by combining these methods together with a hierarchical bi-directional inter-view coding of multi-view image and depth maps, coding results are reported.},\n  keywords = {image coding;image matching;image texture;multiview image;depth estimation method;texture-edge-aware horizontal;vertical view matching;depth-smoothing;view-synthesis method;reference views;depth-base occlusion hole inpainting;hierarchical bi-directional inter-view coding;multiview coding;light field image coding;Estimation;Image coding;Image edge detection;Reliability;Two dimensional displays;Smoothing methods;light field;sub-aperture;multi-view;depth estimation;inter-view prediction;view synthesis;depth-base inpainting},\n  doi = {10.23919/EUSIPCO.2018.8553373},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436397.pdf},\n}\n\n
\n
\n\n\n
\n An efficient light field image coding method is proposed, based on conversion to a multi-view image, depth estimation, multi-view coding, and view synthesis. Firstly, the compatibility of light field images and multi-view images is discussed, and a depth estimation method based on texture-edge-aware horizontal and vertical view matching and depth smoothing is explained. Secondly, a view-synthesis method using up to four reference views is proposed, which adopts depth-based occlusion hole inpainting. Finally, coding results are reported for these methods combined with hierarchical bi-directional inter-view coding of the multi-view image and depth maps.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Connectionist Temporal Classification-based Sound Event Encoder for Converting Sound Events into Onomatopoeic Representations.\n \n \n \n \n\n\n \n Miyazaki, K.; Hayashi, T.; Toda, T.; and Takeda, K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 852-856, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ConnectionistPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553374,\n  author = {K. Miyazaki and T. Hayashi and T. Toda and K. Takeda},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Connectionist Temporal Classification-based Sound Event Encoder for Converting Sound Events into Onomatopoeic Representations},\n  year = {2018},\n  pages = {852-856},\n  abstract = {In this paper, we propose a sound event encoder for converting sound events into their onomatopoeic representations. The proposed method uses connectionist temporal classification (CTC) as an end-to-end approach to directly convert a sequence of feature vectors of each sound event into a corresponding onomatopoeic word representation which accurately represents each sound and can be intuitively understood. Moreover, to address the issue of the ambiguity of onomatopoeic representations among different individuals, we develop a database of sound events and their corresponding typical onomatopoeic representations as accepted by multiple listeners. To evaluate the performance of our proposed method, we conduct objective and subjective evaluations. 
Experimental results demonstrate that the proposed sound event encoder is capable of converting sound events into their onomatopoeic representations with a 74.5% subjective acceptability rating, and that use of typical onomatopoeic representations, as approved by multiple subjects, yields significant improvement, resulting in an acceptability rate of 81.8%.},\n  keywords = {acoustic signal processing;audio signal processing;connectionist temporal classification-based sound event encoder;onomatopoeic word representations;sound events;CTC;Databases;Feature extraction;Training;Europe;Signal processing;Acoustics;Probability;connectionist temporal classification;sound event;onomatopoeia;sound transcription},\n  doi = {10.23919/EUSIPCO.2018.8553374},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437977.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a sound event encoder for converting sound events into their onomatopoeic representations. The proposed method uses connectionist temporal classification (CTC) as an end-to-end approach to directly convert a sequence of feature vectors of each sound event into a corresponding onomatopoeic word representation which accurately represents each sound and can be intuitively understood. Moreover, to address the issue of the ambiguity of onomatopoeic representations among different individuals, we develop a database of sound events and their corresponding typical onomatopoeic representations as accepted by multiple listeners. To evaluate the performance of our proposed method, we conduct objective and subjective evaluations. Experimental results demonstrate that the proposed sound event encoder is capable of converting sound events into their onomatopoeic representations with a 74.5% subjective acceptability rating, and that use of typical onomatopoeic representations, as approved by multiple subjects, yields significant improvement, resulting in an acceptability rate of 81.8%.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Morphing Detection Using a General-Purpose Face Recognition System.\n \n \n \n\n\n \n Wandzik, L.; Kaeding, G.; and Garcia, R. V.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1012-1016, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553375,\n  author = {L. Wandzik and G. Kaeding and R. V. Garcia},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Morphing Detection Using a General- Purpose Face Recognition System},\n  year = {2018},\n  pages = {1012-1016},\n  abstract = {Image morphing has proven to be very successful at deceiving facial recognition systems. Such a vulnerability can be critical when exploited in an automatic border control scenario. Recent works on this topic rely on dedicated algorithms which require additional software modules deployed alongside an existing facial recognition system. In this work, we address the problem of morphing detection by using state-of-the-art facial recognition algorithms based on hand-crafted features and deep convolutional neural networks. We show that a general-purpose face recognition system combined with a simple linear classifier can be successfully used as a morphing detector. The proposed method reuses an existing feature extraction pipeline instead of introducing additional modules. It requires neither fine-tuning nor modifications to the existing recognition system and can be trained using only a small dataset. 
The proposed approach achieves state-of-the-art performance on our morphing datasets using a 5-fold cross-validation.},\n  keywords = {face recognition;feature extraction;feedforward neural nets;image morphing;learning (artificial intelligence);object detection;morphing detection;image morphing;facial recognition systems;automatic border control scenario;general-purpose face recognition system;morphing detector;morphing datasets;five-fold cross-validation;recognition system;feature extraction pipeline;facial recognition algorithms;facial recognition system;software modules;Face;Feature extraction;Face recognition;Task analysis;Support vector machines;Europe;Signal processing;face recognition;biometric anti-spoofing;face morphing;deep learning},\n  doi = {10.23919/EUSIPCO.2018.8553375},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Image morphing has proven to be very successful at deceiving facial recognition systems. Such a vulnerability can be critical when exploited in an automatic border control scenario. Recent works on this topic rely on dedicated algorithms which require additional software modules deployed alongside an existing facial recognition system. In this work, we address the problem of morphing detection by using state-of-the-art facial recognition algorithms based on hand-crafted features and deep convolutional neural networks. We show that a general-purpose face recognition system combined with a simple linear classifier can be successfully used as a morphing detector. The proposed method reuses an existing feature extraction pipeline instead of introducing additional modules. It requires neither fine-tuning nor modifications to the existing recognition system and can be trained using only a small dataset. The proposed approach achieves state-of-the-art performance on our morphing datasets using a 5-fold cross-validation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On Deep Learning for Inverse Problems.\n \n \n \n \n\n\n \n Amjad, J.; Sokolić, J.; and Rodrigues, M. R. D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1895-1899, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553376,\n  author = {J. Amjad and J. Sokolić and M. R. D. Rodrigues},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {On Deep Learning for Inverse Problems},\n  year = {2018},\n  pages = {1895-1899},\n  abstract = {This paper analyses the generalization behaviour of a deep neural networks with a focus on their use in inverse problems. In particular, by leveraging the robustness framework by Xu and Mannor, we provide deep neural network based regression generalization bounds that are also specialized to sparse approximation problems. The proposed bounds show that the sparse approximation performance of deep neural networks can be potentially superior to that of classical sparse reconstruction algorithms, with reconstruction errors limited only by the noise level independently of the underlying data.},\n  keywords = {approximation theory;generalisation (artificial intelligence);inverse problems;learning (artificial intelligence);neural nets;regression analysis;deep learning;inverse problems;generalization behaviour;deep neural networks;robustness framework;deep neural network based regression generalization bounds;approximation problems;sparse approximation performance;classical sparse reconstruction algorithms;reconstruction errors;Inverse problems;Neural networks;Robustness;Training;Measurement;Partitioning algorithms;Signal processing algorithms},\n  doi = {10.23919/EUSIPCO.2018.8553376},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437753.pdf},\n}\n\n
\n
\n\n\n
\n This paper analyses the generalization behaviour of deep neural networks with a focus on their use in inverse problems. In particular, by leveraging the robustness framework of Xu and Mannor, we provide deep neural network based regression generalization bounds that are also specialized to sparse approximation problems. The proposed bounds show that the sparse approximation performance of deep neural networks can be potentially superior to that of classical sparse reconstruction algorithms, with reconstruction errors limited only by the noise level, independently of the underlying data.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fast and Accurate Multiclass Inference for MI-BCIs Using Large Multiscale Temporal and Spectral Features.\n \n \n \n \n\n\n \n Hersche, M.; Rellstab, T.; Schiavone, P. D.; Cavigelli, L.; Benini, L.; and Rahimi, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1690-1694, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FastPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553378,\n  author = {M. Hersche and T. Rellstab and P. D. Schiavone and L. Cavigelli and L. Benini and A. Rahimi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Fast and Accurate Multiclass Inference for MI-BCIs Using Large Multiscale Temporal and Spectral Features},\n  year = {2018},\n  pages = {1690-1694},\n  abstract = {Accurate, fast, and reliable multiclass classification of electroencephalography (EEG) signals is a challenging task towards the development of motor imagery brain-computer interface (MI-BCI) systems. We propose enhancements to different feature extractors, along with a support vector machine (SVM) classifier, to simultaneously improve classification accuracy and execution time during training and testing. We focus on the well-known common spatial pattern (CSP) and Riemannian covariance methods, and significantly extend these two feature extractors to multiscale temporal and spectral cases. The multiscale CSP features achieve \\pmb73.70±15.90% (mean± standard deviation across 9 subjects) classification accuracy that surpasses the state-of-the-art method [1], 70.6±14.70%, on the 4-class BCI competition IV-2a dataset. The Riemannian covariance features outperform the CSP by achieving 74.27±15.5% accuracy and executing 9x faster in training and 4x faster in testing. 
Using more temporal windows for Riemannian features results in 75.47±12.8% accuracy with 1.6x faster testing than CSP.},\n  keywords = {brain-computer interfaces;electroencephalography;feature extraction;medical signal processing;signal classification;support vector machines;common spatial pattern;SVM;EEG;spectral features;large multiscale temporal features;accurate multiclass inference;feature extractors;temporal windows;Riemannian covariance features;4-class BCI competition IV-2a;multiscale CSP features;execution time;classification accuracy;support vector machine classifier;motor imagery brain-computer interface systems;electroencephalography signals;reliable multiclass classification;MI-BCI;Feature extraction;Support vector machines;Electroencephalography;Covariance matrices;Training;Testing;Europe;EEG;motor imagery;brain-computer interfaces;multiclass classification;multiscale features;SVM},\n  doi = {10.23919/EUSIPCO.2018.8553378},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437732.pdf},\n}\n\n
\n
\n\n\n
\n Accurate, fast, and reliable multiclass classification of electroencephalography (EEG) signals is a challenging task towards the development of motor imagery brain-computer interface (MI-BCI) systems. We propose enhancements to different feature extractors, along with a support vector machine (SVM) classifier, to simultaneously improve classification accuracy and execution time during training and testing. We focus on the well-known common spatial pattern (CSP) and Riemannian covariance methods, and significantly extend these two feature extractors to multiscale temporal and spectral cases. The multiscale CSP features achieve 73.70±15.90% (mean ± standard deviation across 9 subjects) classification accuracy that surpasses the state-of-the-art method [1], 70.6±14.70%, on the 4-class BCI competition IV-2a dataset. The Riemannian covariance features outperform the CSP by achieving 74.27±15.5% accuracy and executing 9x faster in training and 4x faster in testing. Using more temporal windows for Riemannian features results in 75.47±12.8% accuracy with 1.6x faster testing than CSP.\n
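The CSP feature extraction at the heart of such a pipeline can be sketched generically. This is a minimal single-scale CSP in numpy on synthetic two-class "EEG" (the channel count, variance contrast, and single temporal scale are illustrative assumptions; the paper's contribution is the multiscale extension and the Riemannian alternative).

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic two-class "EEG": C channels, T samples per trial;
# each class activates a different channel more strongly
C, T, trials = 4, 200, 30
def make_trial(label):
    scale = np.ones(C)
    scale[label] = 3.0                       # channel `label` is active for that class
    return rng.normal(size=(C, T)) * scale[:, None]

X0 = [make_trial(0) for _ in range(trials)]
X1 = [make_trial(1) for _ in range(trials)]

cov = lambda X: np.mean([x @ x.T / T for x in X], axis=0)
S0, S1 = cov(X0), cov(X1)

# CSP via whitening: eigenvectors of the whitened class-0 covariance;
# the extreme eigenvalues give spatial filters that maximize the
# variance for one class while minimizing it for the other
d, U = np.linalg.eigh(S0 + S1)
P = U @ np.diag(d ** -0.5) @ U.T             # whitening transform for S0 + S1
vals, V = np.linalg.eigh(P @ S0 @ P)         # eigenvalues ascend from 0 toward 1
filters = (V.T @ P)[[0, -1]]                 # most class-1- and most class-0-specific filters

log_var = lambda x: np.log((filters @ x).var(axis=1))   # classic CSP log-variance features
f0 = np.array([log_var(x) for x in X0])
f1 = np.array([log_var(x) for x in X1])
assert f0[:, 1].mean() > f1[:, 1].mean()     # filter 1 captures class-0 activity
assert f0[:, 0].mean() < f1[:, 0].mean()     # filter 0 captures class-1 activity
```

Features like `f0`/`f1` would then be fed to an SVM classifier, as in the paper's pipeline; the multiscale variant repeats this over several temporal windows and frequency bands.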
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Sampling Rate Offset Compensation - an Overlap-Save Based Approach.\n \n \n \n \n\n\n \n Schmalenstroeer, J.; and Haeb-Umbach, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 499-503, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553379,\n  author = {J. Schmalenstroeer and R. Haeb-Umbach},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Sampling Rate Offset Compensation - an Overlap-Save Based Approach},\n  year = {2018},\n  pages = {499-503},\n  abstract = {Distributed sensor data acquisition usually encompasses data sampling by the individual devices, where each of them has its own oscillator driving the local sampling process, resulting in slightly different sampling rates at the individual sensor nodes. Nevertheless, for certain downstream signal processing tasks it is important to compensate even for small sampling rate offsets. Aligning the sampling rates of oscillators which differ only by a few parts-per-million, is, however, challenging and quite different from traditional multirate signal processing tasks. In this paper we propose to transfer a precise but computationally demanding time domain approach, inspired by the Nyquist-Shannon sampling theorem, to an efficient frequency domain implementation. To this end a buffer control is employed which compensates for sampling offsets which are multiples of the sampling period, while a digital filter, realized by the well-known Overlap-Save method, handles the fractional part of the sampling phase offset. With experiments on artificially misaligned data we investigate the parametrization, the efficiency, and the induced distortions of the proposed resampling method. 
It is shown that a favorable compromise between residual distortion and computational complexity is achieved, compared to other sampling rate offset compensation techniques.},\n  keywords = {computational complexity;data acquisition;digital filters;distributed sensors;frequency-domain analysis;signal sampling;time-domain analysis;computational complexity;digital filter;buffer control;overlap-save based approach;sampling rate offset compensation;frequency domain implementation;Nyquist-Shannon sampling theorem;time domain approach;multirate signal processing;artificially misaligned data;sampling rate offsets;downstream signal processing tasks;individual sensor nodes;local sampling process;data sampling;distributed sensor data acquisition;Frequency-domain analysis;Interpolation;Oscillators;Task analysis;Europe;Distortion;Overlap-Save method;sampling rate offset;resampling},\n  doi = {10.23919/EUSIPCO.2018.8553379},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570429285.pdf},\n}\n\n
\n
\n\n\n
\n Distributed sensor data acquisition usually encompasses data sampling by the individual devices, where each of them has its own oscillator driving the local sampling process, resulting in slightly different sampling rates at the individual sensor nodes. Nevertheless, for certain downstream signal processing tasks it is important to compensate even for small sampling rate offsets. Aligning the sampling rates of oscillators which differ only by a few parts-per-million is, however, challenging and quite different from traditional multirate signal processing tasks. In this paper, we propose to transfer a precise but computationally demanding time domain approach, inspired by the Nyquist-Shannon sampling theorem, to an efficient frequency domain implementation. To this end, a buffer control is employed which compensates for sampling offsets which are multiples of the sampling period, while a digital filter, realized by the well-known Overlap-Save method, handles the fractional part of the sampling phase offset. With experiments on artificially misaligned data we investigate the parametrization, the efficiency, and the induced distortions of the proposed resampling method. It is shown that a favorable compromise between residual distortion and computational complexity is achieved, compared to other sampling rate offset compensation techniques.\n
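The fractional-part compensation described in this abstract is, at its core, fractional-delay filtering applied block-wise in the frequency domain. A minimal sketch, assuming a windowed-sinc fractional-delay FIR and a textbook overlap-save convolution (the paper's buffer control for the integer part, and its specific filter design and parametrization, are not reproduced):

```python
import numpy as np

def fractional_delay_fir(delay, num_taps=33):
    """Windowed-sinc FIR approximating a fractional-sample delay."""
    n = np.arange(num_taps)
    center = (num_taps - 1) / 2.0
    h = np.sinc(n - center - delay) * np.hamming(num_taps)
    return h / h.sum()

def overlap_save(x, h, fft_len=256):
    """Linear convolution of x with h computed block-wise (overlap-save)."""
    m = len(h)
    hop = fft_len - m + 1                    # valid samples produced per block
    H = np.fft.rfft(h, fft_len)
    # m-1 leading zeros so the first valid output sample aligns with y[0]
    x_pad = np.concatenate([np.zeros(m - 1), x, np.zeros(hop)])
    out = []
    for start in range(0, len(x), hop):
        frame = x_pad[start:start + fft_len]
        if len(frame) < fft_len:
            frame = np.pad(frame, (0, fft_len - len(frame)))
        block = np.fft.irfft(np.fft.rfft(frame) * H, fft_len)
        out.append(block[m - 1:])            # drop the circularly wrapped part
    return np.concatenate(out)[:len(x)]
```

Combined with an integer-sample buffer shift, slowly sweeping `delay` from block to block would emulate compensating an accumulating sampling-rate offset.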
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Optimized Binary Hashing Codes Generated by Siamese Neural Networks for Image Retrieval.\n \n \n \n \n\n\n \n Jose, A.; Horstmann, T.; and Ohm, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1487-1491, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OptimizedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553380,\n  author = {A. Jose and T. Horstmann and J. Ohm},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Optimized Binary Hashing Codes Generated by Siamese Neural Networks for Image Retrieval},\n  year = {2018},\n  pages = {1487-1491},\n  abstract = {In this paper, we use a Siamese Neural Network based hashing method for generating binary codes with certain properties. The training architecture takes a pair of images as input. The loss function trains the network so that similar images are mapped to similar binary codes and dissimilar images to different binary codes. We add additional constraints in the form of loss functions that enforce certain properties on the binary codes. The main motivation of incorporating the first constraint is maximization of entropy by generating binary codes with the same number of 1s and 0s. The second constraint minimizes the mutual information between binary codes by generating orthogonal binary codes for dissimilar images. For this, we introduce an orthogonality criterion for binary codes consisting of the binary values 0 and 1. Furthermore, we evaluate the properties such as mutual information and entropy of the binary codes generated with the additional constraints. We also analyze the influence of different bit sizes on those properties. 
The retrieval performance is evaluated by measuring Mean Average Precision (MAP) values and the results are compared with other state-of-the-art approaches.},\n  keywords = {binary codes;entropy;file organisation;image coding;image retrieval;learning (artificial intelligence);neural nets;optimisation;Siamese Neural networks;similar binary codes;dissimilar images;orthogonal binary codes;optimized binary hashing codes;entropy maximization;mutual information;image retrieval;Binary codes;Training;Entropy;Mutual information;Neural networks;Image retrieval;Europe;Siamese Neural Networks;Binary Hashing;Image Retrieval;Code Property Training;Information Theoretic Criteria},\n  doi = {10.23919/EUSIPCO.2018.8553380},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439081.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we use a Siamese Neural Network based hashing method for generating binary codes with certain properties. The training architecture takes a pair of images as input. The loss function trains the network so that similar images are mapped to similar binary codes and dissimilar images to different binary codes. We add additional constraints in the form of loss functions that enforce certain properties on the binary codes. The main motivation of incorporating the first constraint is maximization of entropy by generating binary codes with the same number of 1s and 0s. The second constraint minimizes the mutual information between binary codes by generating orthogonal binary codes for dissimilar images. For this, we introduce an orthogonality criterion for binary codes consisting of the binary values 0 and 1. Furthermore, we evaluate properties such as mutual information and entropy of the binary codes generated with the additional constraints. We also analyze the influence of different bit sizes on those properties. The retrieval performance is evaluated by measuring Mean Average Precision (MAP) values and the results are compared with other state-of-the-art approaches.\n
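The two code-property constraints can be illustrated numerically: one penalty rewards bit-balanced codes (entropy maximization), the other pushes code pairs for dissimilar images towards orthogonality in the ±1 sense. The penalties below are hypothetical stand-ins written for codes relaxed to [0, 1]; the paper's actual loss functions and their weighting are not specified here.

```python
import numpy as np

def balance_penalty(codes):
    """Encourage each code to contain equally many 1s and 0s.

    codes: (n, bits) array of (relaxed) binary codes in [0, 1].
    Zero when every code has mean bit value 0.5.
    """
    return np.mean((codes.mean(axis=1) - 0.5) ** 2)

def orthogonality_penalty(codes_a, codes_b):
    """Push code pairs for dissimilar images towards orthogonality.

    {0,1} codes are mapped to {-1,+1}, where 'orthogonal' means a zero
    inner product; the squared normalized inner product is penalized.
    """
    s_a = 2.0 * codes_a - 1.0
    s_b = 2.0 * codes_b - 1.0
    inner = np.sum(s_a * s_b, axis=1) / s_a.shape[1]
    return np.mean(inner ** 2)
```

In a training loop, these terms would be added to the Siamese similarity loss; identical codes score the maximal orthogonality penalty of 1, while codes agreeing on exactly half their bits score 0.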
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Kernel Adaptive Hammerstein Filter.\n \n \n \n \n\n\n \n Zheng, Y.; Dong, J.; Ma, W.; and Chen, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 504-508, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"KernelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553381,\n  author = {Y. Zheng and J. Dong and W. Ma and B. Chen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Kernel Adaptive Hammerstein Filter},\n  year = {2018},\n  pages = {504-508},\n  abstract = {To identify Hammerstein systems, a variety of Hammerstein filters have been proposed. However, most of them assume the nonlinear part in Hammerstein systems to be polynomial in the process of modeling, which restricts their applicability in many practical situations. In this paper, a simple kernel adaptive filter (KAF) called kernel least mean square (KLMS) combined with coherence criterion (CC) is used to approximate the nonlinear part of a Hammerstein system, resulting in the kernel adaptive Hammerstein filter (KAHF). The KAHF can identify various Hammerstein systems well without any prior knowledge of nonlinear part. Simulation results confirm the desirable performance of the new method.},\n  keywords = {adaptive filters;least mean squares methods;polynomials;Hammerstein systems;coherence criterion;kernel least mean square;simple kernel adaptive filter;Hammerstein filters;kernel adaptive Hammerstein filter;Kernel;Adaptation models;Dictionaries;Adaptive systems;Testing;Finite impulse response filters;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553381},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436912.pdf},\n}\n\n
\n
\n\n\n
\n To identify Hammerstein systems, a variety of Hammerstein filters have been proposed. However, most of them assume the nonlinear part in Hammerstein systems to be polynomial in the process of modeling, which restricts their applicability in many practical situations. In this paper, a simple kernel adaptive filter (KAF) called kernel least mean square (KLMS), combined with the coherence criterion (CC), is used to approximate the nonlinear part of a Hammerstein system, resulting in the kernel adaptive Hammerstein filter (KAHF). The KAHF can identify various Hammerstein systems well without any prior knowledge of the nonlinear part. Simulation results confirm the desirable performance of the new method.\n
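The KLMS-with-coherence-criterion building block mentioned here is easy to sketch on its own: a new input joins the kernel dictionary only if its maximal kernel coherence with the stored atoms stays below a threshold, which bounds the dictionary size. This is an illustration of that generic building block, not the Hammerstein-specific KAHF structure; the step size, kernel width, and threshold are assumptions.

```python
import numpy as np

def gauss_kernel(a, b, width=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2.0 * width ** 2))

def klms_cc(inputs, targets, step=0.5, coherence_max=0.95, width=1.0):
    """KLMS with a coherence-criterion (sparsified) dictionary.

    If the new input is too coherent with an existing atom, that atom's
    coefficient absorbs the update instead of growing the dictionary.
    Returns the a-priori prediction errors and the final dictionary.
    """
    dict_atoms, coeffs, errors = [], [], []
    for u, d in zip(inputs, targets):
        if dict_atoms:
            k = np.array([gauss_kernel(u, c, width) for c in dict_atoms])
            y = np.dot(coeffs, k)
        else:
            k, y = np.array([]), 0.0
        e = d - y
        errors.append(e)
        if len(k) == 0 or k.max() < coherence_max:
            dict_atoms.append(u)                  # novel enough: new atom
            coeffs = np.append(coeffs, step * e)
        else:
            coeffs[np.argmax(k)] += step * e      # coherent: update nearest atom
    return np.array(errors), dict_atoms
```

Run on a toy memoryless-nonlinearity-plus-FIR target, the prediction error decays while the dictionary stays much smaller than the number of samples.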
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Use of Topological Data Analysis in Motor Intention Based Brain-Computer Interfaces.\n \n \n \n \n\n\n \n Altindis, F.; Yilmaz, B.; Borisenok, S.; and Icoz, K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1695-1699, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"UsePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553382,\n  author = {F. Altindis and B. Yilmaz and S. Borisenok and K. Icoz},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Use of Topological Data Analysis in Motor Intention Based Brain-Computer Interfaces},\n  year = {2018},\n  pages = {1695-1699},\n  abstract = {This study aims to investigate the use of topological data analysis in electroencephalography (EEG) based on brain-computer interface (BCI) applications. Our study focused on extracting topological features of EEG signals obtained from the motor cortex area of the brain. EEG signals from 8 subjects were used for forming data point clouds with a real-time simulation scenario and then each cloud was processed with JPlex toolbox in order to find out corresponding Betti numbers. These numbers represent the topological structure of the point data cloud related to the persistent homologies, which differ for different motor activity tasks. The estimated Betti numbers has been used as features in k-NN classifier to discriminate left or right hand motor intentions.},\n  keywords = {brain-computer interfaces;data analysis;electroencephalography;feature extraction;medical signal processing;nearest neighbour methods;neurophysiology;signal classification;brain-computer interface applications;topological feature extraction;Betti numbers;motor activity tasks;motor intention based brain-computer interfaces;electroencephalography;data point clouds;real-time simulation scenario;JPlex toolbox;k-NN classifier;hand motor intentions;topological structure;motor cortex area;EEG signals;topological data analysis;Electroencephalography;Three-dimensional displays;Data analysis;Electrodes;Signal processing;Feature extraction;Shape;EEG;brain-computer interfaces;topological data analysis;motor intention waves;JPlex},\n  doi = {10.23919/EUSIPCO.2018.8553382},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = 
{https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438034.pdf},\n}\n\n
\n
\n\n\n
\n This study aims to investigate the use of topological data analysis in electroencephalography (EEG) based brain-computer interface (BCI) applications. Our study focused on extracting topological features of EEG signals obtained from the motor cortex area of the brain. EEG signals from 8 subjects were used to form data point clouds in a real-time simulation scenario, and each cloud was then processed with the JPlex toolbox to find the corresponding Betti numbers. These numbers represent the topological structure of the point data cloud related to the persistent homologies, which differ for different motor activity tasks. The estimated Betti numbers have been used as features in a k-NN classifier to discriminate left- or right-hand motor intentions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A Trace Lasso Regularized L1-norm Graph Cut for Highly Correlated Noisy Hyperspectral Image.\n \n \n \n\n\n \n Mohanty, R.; Happy, S. L.; Suthar, N.; and Routray, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2220-2224, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553383,\n  author = {R. Mohanty and S. L. Happy and N. Suthar and A. Routray},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Trace Lasso Regularized L1-norm Graph Cut for Highly Correlated Noisy Hyperspectral Image},\n  year = {2018},\n  pages = {2220-2224},\n  abstract = {This work proposes an adaptive trace lasso regularized L1-norm based graph cut method for dimensionality reduction of hyperspectral images, called `Trace Lasso-L1 Graph Cut' (TL-L1GC). The underlying idea of this method is to generate the optimal projection matrix by considering both the sparsity and the correlation of the data samples. The conventional L2-norm used in the objective function is sensitive to noise and outliers. Therefore, in this work the L1-norm is utilized as a robust alternative to the L2-norm. In addition, to further improve the results, we use a penalty function of trace lasso with the L1GC method. It adaptively balances the L2-norm and L1-norm simultaneously by considering the data correlation along with the sparsity. We obtain the optimal projection matrix by maximizing the ratio of between-class dispersion to within-class dispersion, using the L1-norm with trace lasso as the penalty. Furthermore, an iterative procedure for this TL-L1GC method is proposed to solve the optimization function. 
The effectiveness of this proposed method is evaluated on two benchmark HSI datasets.},\n  keywords = {data compression;graph theory;hyperspectral imaging;image denoising;image segmentation;iterative methods;matrix algebra;optimisation;L1-norm based graph cut method;iterative procedure;trace lasso-L1 graph cut;noisy hyperspectral image;trace lasso regularized L1-norm graph cut;hyperspectral images;adaptive trace lasso;optimization function;TL-L1GC method;data correlation;conventional L2-norm;optimal projection matrix;Correlation;Linear programming;Optimization;Dispersion;Iterative methods;Noise measurement;Hyperspectral sensors;Correlation;dimensionality reduction;graph cut;greedy method;hyperspectral classification;L1-norm;sparsity;trace lasso},\n  doi = {10.23919/EUSIPCO.2018.8553383},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n This work proposes an adaptive trace lasso regularized L1-norm based graph cut method for dimensionality reduction of hyperspectral images, called `Trace Lasso-L1 Graph Cut' (TL-L1GC). The underlying idea of this method is to generate the optimal projection matrix by considering both the sparsity and the correlation of the data samples. The conventional L2-norm used in the objective function is sensitive to noise and outliers. Therefore, in this work the L1-norm is utilized as a robust alternative to the L2-norm. In addition, to further improve the results, we use a penalty function of trace lasso with the L1GC method. It adaptively balances the L2-norm and L1-norm simultaneously by considering the data correlation along with the sparsity. We obtain the optimal projection matrix by maximizing the ratio of between-class dispersion to within-class dispersion, using the L1-norm with trace lasso as the penalty. Furthermore, an iterative procedure for this TL-L1GC method is proposed to solve the optimization function. The effectiveness of this proposed method is evaluated on two benchmark HSI datasets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Joint Graph Learning and Signal Recovery via Kalman Filter for Multivariate Auto-Regressive Processes.\n \n \n \n\n\n \n Ramezani-Mayiami, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 907-911, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553384,\n  author = {M. Ramezani-Mayiami},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Joint Graph Learning and Signal Recovery via Kalman Filter for Multivariate Auto-Regressive Processes},\n  year = {2018},\n  pages = {907-911},\n  abstract = {In this paper, an adaptive Kalman filter algorithm is proposed for simultaneous graph topology learning and graph signal recovery from noisy time series. Each time series corresponds to one node of the graph and underlying graph edges express the causality among nodes. We assume that graph signals are generated via a multivariate auto-regressive processes (MAR), generated by an innovation noise and graph weight matrices. Then we relate the state transition matrix of Kalman filter to the graph weight matrices since both of them can play the role of signal propagation and transition. Our proposed Kalman filter for MAR processes, called KF-MAR, runs three main steps; prediction, update, and learn. In prediction and update steps, we fix the previously learned graph weight matrices and follow a regular Kalman algorithm for graph signal recovery. Then in the learning step, we use the last update of graph signal estimates and keep track of topology changes. 
Simulation results show that our proposed graph Kalman filter outperforms the available online algorithms for graph topology inference and also it can achieve the same performance of the batch method, when the number of observations increase.},\n  keywords = {adaptive Kalman filters;autoregressive processes;graph theory;matrix algebra;time series;joint graph learning;multivariate auto-regressive processes;adaptive Kalman filter algorithm;graph signal recovery;noisy time series;graph signals;signal propagation;MAR processes;learned graph weight matrices;regular Kalman algorithm;graph signal estimates;graph Kalman filter;graph topology inference;graph topology learning;innovation noise;state transition matrix;Time series analysis;Kalman filters;Noise measurement;Topology;Signal processing algorithms;Correlation;Graph signal processing;Kalman filter;Topology inference;Multivariate auto-regressive processes;Causal data network},\n  doi = {10.23919/EUSIPCO.2018.8553384},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n In this paper, an adaptive Kalman filter algorithm is proposed for simultaneous graph topology learning and graph signal recovery from noisy time series. Each time series corresponds to one node of the graph, and the underlying graph edges express the causality among nodes. We assume that graph signals are generated via a multivariate auto-regressive (MAR) process, driven by an innovation noise and graph weight matrices. Then we relate the state transition matrix of the Kalman filter to the graph weight matrices, since both of them can play the role of signal propagation and transition. Our proposed Kalman filter for MAR processes, called KF-MAR, runs three main steps: prediction, update, and learning. In the prediction and update steps, we fix the previously learned graph weight matrices and follow a regular Kalman algorithm for graph signal recovery. Then, in the learning step, we use the last update of the graph signal estimates and keep track of topology changes. Simulation results show that our proposed graph Kalman filter outperforms the available online algorithms for graph topology inference, and it can achieve the same performance as the batch method when the number of observations increases.\n
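The predict/update/learn loop can be illustrated on a toy MAR(1) process. In the sketch below, the learning step is a batch least-squares refit of the transition (graph weight) matrix from consecutive state estimates; this is a simplification standing in for the paper's learning recursion, and all constants are this editor's assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth graph weight matrix acting as the state transition (3 nodes)
A_true = np.array([[0.6, 0.2, 0.0],
                   [0.0, 0.5, 0.3],
                   [0.2, 0.0, 0.4]])
T, n = 400, 3
q, r = 0.1, 0.02                      # innovation / observation noise std

# Simulate the MAR(1) graph signal and its noisy observations
x = np.zeros((T, n))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + q * rng.standard_normal(n)
y = x + r * rng.standard_normal((T, n))

# Kalman filtering while the transition estimate is re-learned
A_hat = np.zeros((n, n))              # start from an empty graph
P, x_hat = np.eye(n), np.zeros(n)
Q, R = q**2 * np.eye(n), r**2 * np.eye(n)
est = np.zeros((T, n))
for t in range(1, T):
    # predict / update with the current transition estimate
    x_pred = A_hat @ x_hat
    P_pred = A_hat @ P @ A_hat.T + Q
    K = P_pred @ np.linalg.inv(P_pred + R)
    x_hat = x_pred + K @ (y[t] - x_pred)
    P = (np.eye(n) - K) @ P_pred
    est[t] = x_hat
    # learn: least-squares refit of A from consecutive state estimates,
    # i.e. est[k+1] ~ A @ est[k]
    if t > 20:
        X0, X1 = est[1:t], est[2:t + 1]
        A_hat = np.linalg.lstsq(X0, X1, rcond=None)[0].T
```

With enough observations the refit transition matrix approaches the true graph weights, mirroring the abstract's claim that the estimate catches up with the batch solution as the sample count grows.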
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Acoustic Scene Analysis Using Partially Connected Microphones Based on Graph Cepstrum.\n \n \n \n \n\n\n \n Imoto, K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2439-2443, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AcousticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553385,\n  author = {K. Imoto},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Acoustic Scene Analysis Using Partially Connected Microphones Based on Graph Cepstrum},\n  year = {2018},\n  pages = {2439-2443},\n  abstract = {In this paper, we propose an effective and robust method for acoustic scene analysis based on spatial information extracted from partially synchronized and/or closely located distributed microphones. In the proposed method, to extract spatial information from distributed microphones while taking into account whether any pairs of microphones are synchronized and/or closely located, we derive a new cepstrum feature utilizing a graph-based basis transformation. Specifically, in the proposed graph-based cepstrum, the logarithm of the amplitude in a multichannel observation is converted to a feature vector by an inverse graph Fourier transform, which can consider whether any pair of microphones is connected. Our experimental results indicate that the proposed graph-based cepstrum effectively extracts spatial information with consideration of the microphone connections. 
Moreover, the results show that the proposed method more robustly classifies acoustic scenes than conventional spatial features when the observed sounds have a large synchronization mismatch between partially synchronized microphone groups.},\n  keywords = {acoustic signal processing;audio signal processing;feature extraction;Fourier transforms;graph theory;microphone arrays;microphones;time-frequency analysis;vectors;acoustic scene analysis;spatial information;distributed microphones;graph-based basis transformation;graph-based cepstrum;inverse graph Fourier transform;conventional spatial features;partially synchronized microphone groups;partially connected microphones;graph cepstrum;Cepstrum;Feature extraction;Data mining;Synchronization;Microphone arrays},\n  doi = {10.23919/EUSIPCO.2018.8553385},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434261.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose an effective and robust method for acoustic scene analysis based on spatial information extracted from partially synchronized and/or closely located distributed microphones. In the proposed method, to extract spatial information from distributed microphones while taking into account whether any pairs of microphones are synchronized and/or closely located, we derive a new cepstrum feature utilizing a graph-based basis transformation. Specifically, in the proposed graph-based cepstrum, the logarithm of the amplitude in a multichannel observation is converted to a feature vector by an inverse graph Fourier transform, which can consider whether any pair of microphones is connected. Our experimental results indicate that the proposed graph-based cepstrum effectively extracts spatial information with consideration of the microphone connections. Moreover, the results show that the proposed method more robustly classifies acoustic scenes than conventional spatial features when the observed sounds have a large synchronization mismatch between partially synchronized microphone groups.\n
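The feature itself is compact to state: take the channel-wise log-amplitudes and transform them with the Fourier basis of the microphone connection graph, i.e. the eigenvectors of its Laplacian. A minimal sketch follows; whether the forward or inverse graph Fourier transform matches the paper's exact convention is left open, and the function name is hypothetical.

```python
import numpy as np

def graph_cepstrum(log_amplitude, adjacency):
    """Cepstrum-like feature on a microphone connection graph.

    log_amplitude: (n,) log-amplitudes, one per microphone (graph node).
    adjacency:     (n, n) symmetric 0/1 connection matrix.
    Returns the coefficients of log_amplitude in the graph Fourier basis
    (eigenvectors of the combinatorial Laplacian D - A).
    """
    degree = np.diag(adjacency.sum(axis=1))
    laplacian = degree - adjacency
    _, U = np.linalg.eigh(laplacian)       # columns: graph Fourier basis
    return U.T @ log_amplitude
```

Since the basis is orthonormal, the transform is lossless; the first coefficient (constant eigenvector, eigenvalue 0 for a connected graph) carries the mean log-amplitude, loosely analogous to the zeroth cepstral coefficient.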
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modelling of Sound Events with Hidden Imbalances Based on Clustering and Separate Sub-Dictionary Learning.\n \n \n \n \n\n\n \n Narisetty, C.; Komatsu, T.; and Kondo, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 847-851, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ModellingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553387,\n  author = {C. Narisetty and T. Komatsu and R. Kondo},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Modelling of Sound Events with Hidden Imbalances Based on Clustering and Separate Sub-Dictionary Learning},\n  year = {2018},\n  pages = {847-851},\n  abstract = {This paper proposes an effective modelling of sound event spectra with a hidden data-size-imbalance, for improved Acoustic Event Detection (AED). The proposed method models each event as an aggregated representation of a few latent factors, while conventional approaches try to find acoustic elements directly from the event spectra. In the method, all the latent factors across all events are assigned comparable importance and complexity to overcome the hidden imbalance of data-sizes in event spectra. To extract latent factors in each event, the proposed method employs clustering and performs non-negative matrix factorization to each latent factor, and learns its acoustic elements as a sub-dictionary. Separate sub-dictionary learning effectively models the acoustic elements with limited data-sizes and avoids over-fitting due to hidden imbalances in training data. 
For the task of polyphonic sound event detection from DCASE 2013 challenge, an AED based on the proposed modelling achieves a detection F-measure of 46.5%, a significant improvement of more than 19% as compared to the existing state-of-the-art methods.},\n  keywords = {acoustic signal detection;acoustic signal processing;learning (artificial intelligence);matrix decomposition;pattern clustering;sound event spectra;hidden data-size-imbalance;latent factor;acoustic elements;polyphonic sound event detection;acoustic event detection;nonnegative matrix factorization;sub-dictionary learning;clustering;AED;overfitting;Dictionaries;Machine learning;Training;Event detection;Spectrogram;Mel frequency cepstral coefficient;Data-Size-Imbalance;Acoustic Event Detection;Non-Negative Matrix Factorization;Dictionary Learning},\n  doi = {10.23919/EUSIPCO.2018.8553387},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437797.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes an effective modelling of sound event spectra with a hidden data-size-imbalance, for improved Acoustic Event Detection (AED). The proposed method models each event as an aggregated representation of a few latent factors, while conventional approaches try to find acoustic elements directly from the event spectra. In the method, all the latent factors across all events are assigned comparable importance and complexity to overcome the hidden imbalance of data-sizes in event spectra. To extract latent factors in each event, the proposed method employs clustering and performs non-negative matrix factorization to each latent factor, and learns its acoustic elements as a sub-dictionary. Separate sub-dictionary learning effectively models the acoustic elements with limited data-sizes and avoids over-fitting due to hidden imbalances in training data. For the task of polyphonic sound event detection from DCASE 2013 challenge, an AED based on the proposed modelling achieves a detection F-measure of 46.5%, a significant improvement of more than 19% as compared to the existing state-of-the-art methods.\n
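The cluster-then-factorize recipe can be sketched with a tiny k-means and plain multiplicative-update NMF, so that each cluster gets its own sub-dictionary and small clusters are not swamped by large ones. This is an illustration of the general recipe only, not the paper's model or its importance/complexity balancing; all names and sizes are assumptions.

```python
import numpy as np

def nmf(V, rank, iters=200, seed=0):
    """Plain multiplicative-update NMF: V ~ W @ H, all factors nonnegative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

def sub_dictionaries(spectra, n_clusters, rank, seed=0):
    """Cluster spectra (columns) with a tiny k-means, then learn one NMF
    sub-dictionary (basis W) per cluster."""
    rng = np.random.default_rng(seed)
    n = spectra.shape[1]
    centers = spectra[:, rng.choice(n, n_clusters, replace=False)].copy()
    for _ in range(20):
        # squared distances: (clusters, columns)
        d = ((spectra[:, None, :] - centers[:, :, None]) ** 2).sum(axis=0)
        labels = d.argmin(axis=0)
        for c in range(n_clusters):
            if (labels == c).any():
                centers[:, c] = spectra[:, labels == c].mean(axis=1)
    return [nmf(spectra[:, labels == c], rank)[0] for c in range(n_clusters)]
```

Each returned `W` plays the role of one sub-dictionary of acoustic elements, fitted only on its own cluster's data so its complexity is independent of the cluster's size.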
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Fast and Accurate Automated Pavement Crack Detection Algorithm.\n \n \n \n \n\n\n \n Chatterjee, A.; and Tsai, Y.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2140-2144, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553388,\n  author = {A. Chatterjee and Y. Tsai},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Fast and Accurate Automated Pavement Crack Detection Algorithm},\n  year = {2018},\n  pages = {2140-2144},\n  abstract = {Over the last 20 years, several crack detection algorithms have been developed to implement safe and efficient automated road condition survey (ARCS) systems. Although the current state-of-the-art algorithms can achieve a high level of accuracy, their computation time makes them infeasible to implement in real-time without massive parallelization. This paper presents a fast and accurate crack detection algorithm. The algorithm consists of the following major steps: 1) Image preprocessing; 2) Preliminary crack segmentation to minimize false negatives; 3) Crack object generation and connection to remove false positives; and 4) Refinement of the crack segmentation through a minimal path search based procedure. The proposed algorithm achieves an overall score of 80 in the Crack Detection Algorithm Performance Evaluation System (CDA-PES). With a median processing time of 0.52 seconds for 0.65 megapixel images on a single CPU thread, this algorithm makes accurate, real-time processing viable. 
The research presented in this paper contributes towards more widespread adoption of safer and efficient automated road condition surveys.},\n  keywords = {crack detection;image classification;image segmentation;roads;search problems;real-time processing;minimal path search;crack object generation;image preprocessing;automated road condition surveys;automated pavement crack detection;safer road condition surveys;median processing time;minimal path search based procedure;crack segmentation;accurate crack detection algorithm;Signal processing algorithms;Image segmentation;Detection algorithms;Roads;Europe;Signal processing;Sensors},\n  doi = {10.23919/EUSIPCO.2018.8553388},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439781.pdf},\n}\n\n
\n
\n\n\n
\n Over the last 20 years, several crack detection algorithms have been developed to implement safe and efficient automated road condition survey (ARCS) systems. Although the current state-of-the-art algorithms can achieve a high level of accuracy, their computation time makes them infeasible to implement in real-time without massive parallelization. This paper presents a fast and accurate crack detection algorithm. The algorithm consists of the following major steps: 1) Image preprocessing; 2) Preliminary crack segmentation to minimize false negatives; 3) Crack object generation and connection to remove false positives; and 4) Refinement of the crack segmentation through a minimal path search based procedure. The proposed algorithm achieves an overall score of 80 in the Crack Detection Algorithm Performance Evaluation System (CDA-PES). With a median processing time of 0.52 seconds for 0.65 megapixel images on a single CPU thread, this algorithm makes accurate, real-time processing viable. The research presented in this paper contributes towards more widespread adoption of safer and efficient automated road condition surveys.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Knowledge-Aided Normalized Iterative Hard Thresholding Algorithms for Sparse Recovery.\n \n \n \n \n\n\n \n Jiang, Q.; de Lamare , R. C.; Zakharov, Y.; Li, S.; and He, X.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1965-1969, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Knowledge-AidedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553389,\n  author = {Q. Jiang and R. C. {de Lamare} and Y. Zakharov and S. Li and X. He},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Knowledge-Aided Normalized Iterative Hard Thresholding Algorithms for Sparse Recovery},\n  year = {2018},\n  pages = {1965-1969},\n  abstract = {This paper deals with the problem of sparse recovery often found in compressive sensing applications exploiting a priori knowledge. In particular, we present a knowledge-aided normalized iterative hard thresholding (KA-NIHT) algorithm that exploits information about the probabilities of nonzero entries. We also develop a strategy to update the probabilities using a recursive KA-NIHT (RKA-NIHT) algorithm, which results in improved recovery. Simulation results illustrate and compare the performance of the proposed and existing algorithms.},\n  keywords = {approximation theory;compressed sensing;iterative methods;probability;RKA-NIHT;knowledge-aided normalized iterative hard thresholding algorithms;sparse recovery;compressive sensing applications;recursive KA-NIHT algorithm;Signal processing algorithms;Matching pursuit algorithms;Europe;Signal processing;Compressed sensing;Iterative algorithms;Simulation;compressed sensing;iterative hard thresholding;prior information;probability estimation;sparse recovery},\n  doi = {10.23919/EUSIPCO.2018.8553389},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437058.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the problem of sparse recovery often found in compressive sensing applications exploiting a priori knowledge. In particular, we present a knowledge-aided normalized iterative hard thresholding (KA-NIHT) algorithm that exploits information about the probabilities of nonzero entries. We also develop a strategy to update the probabilities using a recursive KA-NIHT (RKA-NIHT) algorithm, which results in improved recovery. Simulation results illustrate and compare the performance of the proposed and existing algorithms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-task Feature Learning for EEG-based Emotion Recognition Using Group Nonnegative Matrix Factorization.\n \n \n \n \n\n\n \n Hajlaoui, A.; Chetouani, M.; and Essid, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 91-95, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-taskPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553390,\n  author = {A. Hajlaoui and M. Chetouani and S. Essid},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-task Feature Learning for EEG-based Emotion Recognition Using Group Nonnegative Matrix Factorization},\n  year = {2018},\n  pages = {91-95},\n  abstract = {Electroencephalographic sensors have proven to be promising for emotion recognition. Our study focuses on the recognition of valence and arousal levels using such sensors. Usually, ad hoc features are extracted for such recognition tasks. In this paper, we rely on automatic feature learning techniques instead. Our main contribution is the use of Group Nonnegative Matrix Factorization in a multi-task fashion, where we exploit both valence and arousal labels to control valence-related and arousal-related feature learning. Applying this method on HCI MAHNOB and EMOEEG, two databases where emotions are elicited by means of audiovisual stimuli and performing binary inter-session classification of valence labels, we obtain significant improvement of valence classification F1 scores in comparison to baseline frequency-band power features computed on predefined frequency bands. The valence classification F1 score is improved from 0.56 to 0.69 in the case of HCI MAHNOB, and from 0.56 to 0.59 in the case of EMOEEG.},\n  keywords = {electroencephalography;emotion recognition;feature extraction;human computer interaction;learning (artificial intelligence);matrix decomposition;signal classification;support vector machines;multitask feature learning;EEG-based emotion recognition;Group Nonnegative Matrix Factorization;electroencephalographic sensors;arousal levels;ad hoc features;recognition tasks;automatic feature;multitask fashion;arousal labels;valence-related;HCI MAHNOB;audiovisual stimuli;performing binary inter-session classification;valence labels;valence classification F1 scores;baseline frequency-band power features;valence classification F1 score;Human computer interaction;Feature extraction;Task analysis;Electroencephalography;Brain modeling;Electrodes;Dictionaries;Electroencephalography;Valence;Arousal;Nonnegative Matrix Factorization;Group NMF;Common Spectral Patterns},\n  doi = {10.23919/EUSIPCO.2018.8553390},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435835.pdf},\n}\n\n
\n
\n\n\n
\n Electroencephalographic sensors have proven to be promising for emotion recognition. Our study focuses on the recognition of valence and arousal levels using such sensors. Usually, ad hoc features are extracted for such recognition tasks. In this paper, we rely on automatic feature learning techniques instead. Our main contribution is the use of Group Nonnegative Matrix Factorization in a multi-task fashion, where we exploit both valence and arousal labels to control valence-related and arousal-related feature learning. Applying this method on HCI MAHNOB and EMOEEG, two databases where emotions are elicited by means of audiovisual stimuli and performing binary inter-session classification of valence labels, we obtain significant improvement of valence classification F1 scores in comparison to baseline frequency-band power features computed on predefined frequency bands. The valence classification F1 score is improved from 0.56 to 0.69 in the case of HCI MAHNOB, and from 0.56 to 0.59 in the case of EMOEEG.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-Resolution Reconstructions from Compressive Spectral Coded Projections.\n \n \n \n \n\n\n \n Correa, C. V.; Arguello, H.; and Arce, G. R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1995-1999, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-ResolutionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553391,\n  author = {C. V. Correa and H. Arguello and G. R. Arce},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-Resolution Reconstructions from Compressive Spectral Coded Projections},\n  year = {2018},\n  pages = {1995-1999},\n  abstract = {Compressive spectral coded projections are attained by an imaging detector as a spatial-spectral field traverses diverse optical elements such as a coded aperture and a dispersive element. Compressed sensing reconstruction algorithms are used to recover the underlying data cube at the resolution enabled by the captured projections. Such reconstructions, however, are computationally expensive because of the data dimensions. In this paper, a multi-resolution (MR) reconstruction approach is presented, such that several versions of the data cube can be recovered at different spatial resolutions, by employing gradient intensity maps. Simulations show that this approach overcomes interpolation results in up to 3dB of PSNR in noisy scenarios.},\n  keywords = {compressed sensing;image reconstruction;image resolution;interpolation;multiresolution reconstruction approach;data cube;multiresolution reconstructions;compressive spectral coded projections;spatial-spectral field traverses diverse optical elements;coded aperture;dispersive element;reconstruction algorithms;captured projections;spatial resolutions;PSNR;Image reconstruction;Spatial resolution;Optical imaging;Optical sensors;Compressive spectral imaging;Multi-resolution;Spectral Imaging;Compressed sensing},\n  doi = {10.23919/EUSIPCO.2018.8553391},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437352.pdf},\n}\n\n
\n
\n\n\n
\n Compressive spectral coded projections are attained by an imaging detector as a spatial-spectral field traverses diverse optical elements such as a coded aperture and a dispersive element. Compressed sensing reconstruction algorithms are used to recover the underlying data cube at the resolution enabled by the captured projections. Such reconstructions, however, are computationally expensive because of the data dimensions. In this paper, a multi-resolution (MR) reconstruction approach is presented, such that several versions of the data cube can be recovered at different spatial resolutions, by employing gradient intensity maps. Simulations show that this approach outperforms interpolation by up to 3 dB of PSNR in noisy scenarios.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Novel Recursive Bayesian Weighted Instrumental Variable Estimator for 3D Bearings-Only TMA.\n \n \n \n \n\n\n \n Badriasl, L.; Arulampalam, S.; and Finn, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 276-280, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553392,\n  author = {L. Badriasl and S. Arulampalam and A. Finn},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Novel Recursive Bayesian Weighted Instrumental Variable Estimator for 3D Bearings-Only TMA},\n  year = {2018},\n  pages = {276-280},\n  abstract = {In our previous work [17], we derived a novel Bayesian weighted instrumental variable (WIV) estimator for the three-dimensional bearings-only target motion analysis problem. While the proposed approach has the desirable characteristic of incorporating a priori information in the estimation process and is proven to be approximately asymptotically unbiased, this estimator has a batch structure which is generally not suitable for online processing of measurements in practical applications. Therefore, in this paper we develop a recursive Bayesian WIV, which also uses an adaptive selective angle measurement approach to increase its stability. Simulations show that the proposed estimator outperforms the compared Bayesian algorithms with similar computational complexity for poorly observable scenarios.},\n  keywords = {angular measurement;belief networks;computational complexity;motion estimation;recursive estimation;target tracking;recursive Bayesian WIV;adaptive selective angle measurement approach;Bayesian algorithms;three-dimensional bearings-only target motion analysis problem;3d bearings-only TMA;recursive Bayesian weighted instrumental variable estimator;Bayes methods;Time measurement;Australia;Noise measurement;Covariance matrices;Europe;Signal processing;Estimation theory;bearings-only target motion analysis;Bayesian estimation;pseudolinear estimator;instrumental variables},\n  doi = {10.23919/EUSIPCO.2018.8553392},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433048.pdf},\n}\n\n
\n
\n\n\n
\n In our previous work [17], we derived a novel Bayesian weighted instrumental variable (WIV) estimator for the three-dimensional bearings-only target motion analysis problem. While the proposed approach has the desirable characteristic of incorporating a priori information in the estimation process and is proven to be approximately asymptotically unbiased, this estimator has a batch structure which is generally not suitable for online processing of measurements in practical applications. Therefore, in this paper we develop a recursive Bayesian WIV, which also uses an adaptive selective angle measurement approach to increase its stability. Simulations show that the proposed estimator outperforms the compared Bayesian algorithms with similar computational complexity for poorly observable scenarios.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Wavelet-Based Classification of Transient Signals for Gravitational Wave Detectors.\n \n \n \n \n\n\n \n Cuoco, E.; Razzano, M.; and Utina, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2648-2652, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Wavelet-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553393,\n  author = {E. Cuoco and M. Razzano and A. Utina},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Wavelet-Based Classification of Transient Signals for Gravitational Wave Detectors},\n  year = {2018},\n  pages = {2648-2652},\n  abstract = {The detection of gravitational waves opened a new window on the cosmos. The Advanced LIGO and Advanced Virgo interferometers will probe a larger volume of the Universe and discover new gravitational wave emitters. Characterizing these detectors is of primary importance in order to recognize the main sources of noise and optimize the sensitivity of the searches. Glitches are transient noise events that can impact the data quality of the interferometers and their classification is an important task for detector characterization. In this paper we present a classification method for short transient signals based on a Wavelet decomposition and de-noising and a classification of the extracted features based on the XGBoost algorithm. Although the results show the accuracy is lower than that obtained with the use of deep learning, this method, which extracts features while detecting signals in real time, can be configured as a fast classification system.},\n  keywords = {gravitational wave detectors;gravitational waves;light interferometers;wavelet transforms;Advanced LIGO;Advanced Virgo interferometers;XGBoost algorithm;cosmos;gravitational wave detectors;Wavelet-based classification;fast classification system;short transient signals;classification method;detector characterization;data quality;transient noise events;gravitational wave emitters;Wavelet transforms;Transient analysis;Detectors;Interferometers;Pipelines;Sensitivity;Feature extraction;signal processing;wavelet decomposition;machine learning classification},\n  doi = {10.23919/EUSIPCO.2018.8553393},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436751.pdf},\n}\n\n
\n
\n\n\n
\n The detection of gravitational waves opened a new window on the cosmos. The Advanced LIGO and Advanced Virgo interferometers will probe a larger volume of the Universe and discover new gravitational wave emitters. Characterizing these detectors is of primary importance in order to recognize the main sources of noise and optimize the sensitivity of the searches. Glitches are transient noise events that can impact the data quality of the interferometers and their classification is an important task for detector characterization. In this paper we present a classification method for short transient signals based on a Wavelet decomposition and de-noising and a classification of the extracted features based on the XGBoost algorithm. Although the results show the accuracy is lower than that obtained with the use of deep learning, this method, which extracts features while detecting signals in real time, can be configured as a fast classification system.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Improved CSI Based Device Free Indoor Localization Using Machine Learning Based Classification Approach.\n \n \n \n \n\n\n \n Sanam, T. F.; and Godrich, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2390-2394, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553394,\n  author = {T. F. Sanam and H. Godrich},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An Improved CSI Based Device Free Indoor Localization Using Machine Learning Based Classification Approach},\n  year = {2018},\n  pages = {2390-2394},\n  abstract = {Indoor positioning system (IPS) has shown great potential with the growth of context-aware computing. Typical IPS requires the tracked subject to carry a physical device. In this study, we present MaLDIP, a novel, machine learning based, device free technique for indoor positioning. To design the device free setting, we exploited the Channel State Information (CSI) obtained from Multiple Input Multiple Output Orthogonal Frequency-Division Multiplexing (MIMO-OFDM). The system works by utilizing frequency diversity and spatial diversity properties of CSI at target location by correlating the impact of human presence to certain changes on the received signal features. However, accurate modeling of the effect of a subject on fine grained CSI is challenging due to the presence of multipaths. We propose a novel subcarrier selection method to remove the multipath affected subcarriers to improve the performance of localization. We select the most location-dependent features from channel response based upon the wireless propagation model and propose to apply a machine learning based approach for location estimation, where the localization problem is shifted to a cell identification problem using the Support Vector Machine (SVM) based classifier. Experimental results show that MaLDIP can estimate location in a passive device free setting with high accuracy using a MIMO-OFDM system.},\n  keywords = {cellular radio;indoor navigation;learning (artificial intelligence);MIMO communication;OFDM modulation;radio direction-finding;radiowave propagation;support vector machines;telecommunication computing;wireless channels;indoor positioning system;context-aware computing;MaLDIP;spatial diversity properties;subcarrier selection method;location-dependent features;wireless propagation model;location estimation;passive device free setting;MIMO-OFDM system;IPS;channel state information;support vector machine based classifier;improved fine grained CSI based device free indoor localization;machine learning based classification approach;multiple input multiple output orthogonal frequency-division multiplexing system;frequency diversity properties;cell identification problem;SVM based classifier;Wireless communication;Performance evaluation;Fading channels;MIMO communication;Receiving antennas;Estimation},\n  doi = {10.23919/EUSIPCO.2018.8553394},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436803.pdf},\n}\n\n
\n
\n\n\n
\n Indoor positioning system (IPS) has shown great potential with the growth of context-aware computing. Typical IPS requires the tracked subject to carry a physical device. In this study, we present MaLDIP, a novel, machine learning based, device free technique for indoor positioning. To design the device free setting, we exploited the Channel State Information (CSI) obtained from Multiple Input Multiple Output Orthogonal Frequency-Division Multiplexing (MIMO-OFDM). The system works by utilizing frequency diversity and spatial diversity properties of CSI at target location by correlating the impact of human presence to certain changes on the received signal features. However, accurate modeling of the effect of a subject on fine grained CSI is challenging due to the presence of multipaths. We propose a novel subcarrier selection method to remove the multipath affected subcarriers to improve the performance of localization. We select the most location-dependent features from channel response based upon the wireless propagation model and propose to apply a machine learning based approach for location estimation, where the localization problem is shifted to a cell identification problem using the Support Vector Machine (SVM) based classifier. Experimental results show that MaLDIP can estimate location in a passive device free setting with high accuracy using a MIMO-OFDM system.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sampling Phase Estimation in Underwater PPM Fractionally Sampled Equalization.\n \n \n \n \n\n\n \n Scarano, G.; Petroni, A.; Cusani, R.; and Biagi, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 912-916, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SamplingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553395,\n  author = {G. Scarano and A. Petroni and R. Cusani and M. Biagi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sampling Phase Estimation in Underwater PPM Fractionally Sampled Equalization},\n  year = {2018},\n  pages = {912-916},\n  abstract = {A new blind estimator of the sampling phase is proposed to support fractionally spaced equalization in underwater digital links employing pulse position modulation. Stemming from the relationship between the “spikiness” of the channel impulse response and the deviation from Gaussianity of the received signal, the sampling phase is estimated by exploiting non-Gaussianity measures offered by nonlinear statistics. In particular, the fourth order (kurtosis) and the first order nonlinear sample moments are considered and the resulting receiver performance is analyzed.},\n  keywords = {equalisers;Gaussian processes;phase estimation;pulse position modulation;signal sampling;statistical analysis;underwater acoustic communication;wireless channels;sampling phase estimation;blind estimator;fractionally spaced equalization;underwater digital links;pulse position modulation;channel impulse response;order nonlinear sample moments;underwater PPM fractionally sampled equalization;nonGaussianity measures;nonlinear statistics;first order nonlinear sample moments;fourth order nonlinear sample moments;kurtosis;Bandwidth;Receivers;Decision feedback equalizers;Estimation;Europe;Multipath channels;Underwater;PPM;Fractional Sampling;Channel Equalization},\n  doi = {10.23919/EUSIPCO.2018.8553395},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438030.pdf},\n}\n\n
\n
\n\n\n
\n A new blind estimator of the sampling phase is proposed to support fractionally spaced equalization in underwater digital links employing pulse position modulation. Stemming from the relationship between the “spikiness” of the channel impulse response and the deviation from Gaussianity of the received signal, the sampling phase is estimated by exploiting non-Gaussianity measures offered by nonlinear statistics. In particular, the fourth order (kurtosis) and the first order nonlinear sample moments are considered and the resulting receiver performance is analyzed.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Generative adversarial network-based approach to signal reconstruction from magnitude spectrogram.\n \n \n \n \n\n\n \n Oyamada, K.; Kameoka, H.; Kaneko, T.; Tanaka, K.; Hojo, N.; and Ando, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2514-2518, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"GenerativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553396,\n  author = {K. Oyamada and H. Kameoka and T. Kaneko and K. Tanaka and N. Hojo and H. Ando},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Generative adversarial network-based approach to signal reconstruction from magnitude spectrogram},\n  year = {2018},\n  pages = {2514-2518},\n  abstract = {In this paper, we address the problem of reconstructing a time-domain signal (or a phase spectrogram) solely from a magnitude spectrogram. Since magnitude spectrograms do not contain phase information, we must restore or infer phase information to reconstruct a time-domain signal. One widely used approach for dealing with the signal reconstruction problem was proposed by Griffin and Lim. This method usually requires many iterations for the signal reconstruction process and depending on the inputs, it does not always produce high-quality audio signals. To overcome these shortcomings, we apply a learning-based approach to the signal reconstruction problem by modeling the signal reconstruction process using a deep neural network and training it using the idea of a generative adversarial network. Experimental evaluations revealed that our method was able to reconstruct signals faster with higher quality than the Griffin-Lim method.},\n  keywords = {learning (artificial intelligence);neural nets;signal reconstruction;time-domain signal;signal reconstruction process;high-quality audio signals;learning-based approach;generative adversarial network-based approach;magnitude spectrogram;phase spectrogram;phase information;deep neural network;Spectrogram;Generators;Generative adversarial networks;Signal reconstruction;Time-domain analysis;Training;Gallium nitride;phase reconstruction;deep neural networks;generative adversarial networks},\n  doi = {10.23919/EUSIPCO.2018.8553396},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438527.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we address the problem of reconstructing a time-domain signal (or a phase spectrogram) solely from a magnitude spectrogram. Since magnitude spectrograms do not contain phase information, we must restore or infer phase information to reconstruct a time-domain signal. One widely used approach for dealing with the signal reconstruction problem was proposed by Griffin and Lim. This method usually requires many iterations for the signal reconstruction process and depending on the inputs, it does not always produce high-quality audio signals. To overcome these shortcomings, we apply a learning-based approach to the signal reconstruction problem by modeling the signal reconstruction process using a deep neural network and training it using the idea of a generative adversarial network. Experimental evaluations revealed that our method was able to reconstruct signals faster with higher quality than the Griffin-Lim method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n On the possibility to achieve 6-DoF for 360 video using divergent multi-view content.\n \n \n \n\n\n \n Ray, B.; Jung, J.; and Larabi, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 211-215, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553397,\n  author = {B. Ray and J. Jung and M. Larabi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {On the possibility to achieve 6-DoF for 360 video using divergent multi-view content},\n  year = {2018},\n  pages = {211-215},\n  abstract = {With the rapid emergence of various 360 video capturing devices and head-mounted displays, providing immersive experience using 360 videos is becoming a topic of paramount interest. The current techniques face motion sickness issues, resulting in low quality of experience. One of the reasons is that they do not take advantage of the parallax between the divergent views, which may be helpful to provide the correct view according to the user's head motion. In this paper, we propose to get rid of the classical ERP representation, and to synthesize arbitrary views using different divergent views together with their corresponding depths. Thus, we can exploit the parallax between the divergent views. In this context, we assess the feasibility of the depth estimation and the view synthesis using state-of-the-art techniques. Simulation results confirmed the feasibility of such a proposal, in addition to the possibility of achieving sufficient visual quality for a head motion up to 0.1m from the rig, when using generated depth map for view synthesis.},\n  keywords = {helmet mounted displays;stereo image processing;video signal processing;6-DoF;divergent multi-view content;360 video capturing devices;head-mounted displays;parallax;classical ERP representation;arbitrary views;view synthesis;user head motion;depth estimation;face motion sickness;visual quality;Cameras;Estimation;Visualization;Transform coding;Software;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553397},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n With the rapid emergence of various 360 video capturing devices and head-mounted displays, providing immersive experience using 360 videos is becoming a topic of paramount interest. The current techniques face motion sickness issues, resulting in low quality of experience. One of the reasons is that they do not take advantage of the parallax between the divergent views, which may be helpful to provide the correct view according to the user's head motion. In this paper, we propose to get rid of the classical ERP representation, and to synthesize arbitrary views using different divergent views together with their corresponding depths. Thus, we can exploit the parallax between the divergent views. In this context, we assess the feasibility of the depth estimation and the view synthesis using state-of-the-art techniques. Simulation results confirmed the feasibility of such a proposal, in addition to the possibility of achieving sufficient visual quality for a head motion up to 0.1m from the rig, when using generated depth map for view synthesis.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Active Content Fingerprinting Using Latent Data Representation, Extractor and Reconstructor.\n \n \n \n \n\n\n \n Kostadinov, D.; Voloshynovskiy, S.; and Ferdowsi, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1417-1421, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ActivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553399,\n  author = {D. Kostadinov and S. Voloshynovskiy and S. Ferdowsi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Active Content Fingerprinting Using Latent Data Representation, Extractor and Reconstructor},\n  year = {2018},\n  pages = {1417-1421},\n  abstract = {This paper introduces a concept of Active Content Fingerprinting based on a Latent data Representation (aCFP-LR). The idea is to represent the data content by a constrained redundant description. The target is to estimate latent representation such that: (i) after applying a reconstructor function the result is close to the original data and (ii) after using an extraction function the resulting features are robust. A general problem formulation is proposed for aCFP-LR with an extractor-reconstructor pair of constraints. One particular case is considered under linear extractor (generator) and linear reconstructor (modulator) where a reduction is shown to a constrained projection problem. Evaluation by numerical experiments is given using local image patches, extracted from publicly available data sets. Advantages and state-of-the-art performance are demonstrated under additive white Gaussian noise (AWGN), lossy JPEG compression and projective geometrical transform distortions.},\n  keywords = {AWGN;data structures;feature extraction;fingerprint identification;image coding;image reconstruction;reconstructor function;extraction function;constrained projection problem;publicly available data sets;data content;constrained redundant description;problem formulation;latent data representation;active content fingerprinting;aCFP-LR;extractor-reconstructor pair;linear extractor;linear reconstructor;local image patches;additive white Gaussian noise;lossy JPEG compression;projective geometrical transform distortions;AWGN;Modulation;Feature extraction;Distortion;Image reconstruction;Fingerprint recognition;Data mining;Generators;active content fingerprint;latent representation;extractor;reconstructor;redundancy;robustness},\n  doi = {10.23919/EUSIPCO.2018.8553399},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438069.pdf},\n}\n\n
\n
\n\n\n
\n This paper introduces a concept of Active Content Fingerprinting based on a Latent data Representation (aCFP-LR). The idea is to represent the data content by a constrained redundant description. The target is to estimate latent representation such that: (i) after applying a reconstructor function the result is close to the original data and (ii) after using an extraction function the resulting features are robust. A general problem formulation is proposed for aCFP-LR with an extractor-reconstructor pair of constraints. One particular case is considered under linear extractor (generator) and linear reconstructor (modulator) where a reduction is shown to a constrained projection problem. Evaluation by numerical experiments is given using local image patches, extracted from publicly available data sets. Advantages and state-of-the-art performance are demonstrated under additive white Gaussian noise (AWGN), lossy JPEG compression and projective geometrical transform distortions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n ECA Filter Effects on Ground Clutter Statistics in DVB-T Based Passive Radar.\n \n \n \n\n\n \n del-Rey-Maestre, N.; Jarabo-Amores, M.; Bárcena-Humanes, J.; Mata-Moya, D.; and Gómez-del-Hoyo, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1217-1221, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553400,\n  author = {N. del-Rey-Maestre and M. Jarabo-Amores and J. Bárcena-Humanes and D. Mata-Moya and P. Gómez-del-Hoyo},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {ECA Filter Effects on Ground Clutter Statistics in DVB-T Based Passive Radar},\n  year = {2018},\n  pages = {1217-1221},\n  abstract = {This paper tackles the analysis of the Extensive CAncellation (ECA) filter effects in the statistical characterization of ground radar clutter. Real radar databases acquired by a DVB-T based Passive Radar (PR) system were characterized considering the Cross-Ambiguity Function output as the observation space. Goodness-of-fit tests were applied, and skewness and kurtosis values were estimated. The Gaussian model was proved to be suitable for high Doppler shifts. However, Mixture of Gammas and Gaussians distributions were proposed to model the intensity and the in-phase and quadrature components, respectively, of the region centred in the zero Doppler line. These results prove that, although the ECA filter rejects most of the interference components, a non-homogeneous characterization was required for Doppler shifts close to zero, where ground clutter effects concentrate. 
The proposed theoretical distributions will be useful for the formulation of optimum detectors based on the Neyman-Pearson criterion, improving the detection performance of PR systems.},\n  keywords = {digital video broadcasting;Doppler shift;filtering theory;Gaussian distribution;passive radar;radar clutter;radar detection;radar signal processing;in-phase components;skewness values;DVB-T based Passive Radar system;theoretical distributions;Neyman-Pearson criterion;detection performance;interference components;kurtosis values;goodness-of-fit tests;observation space;Cross-Ambiguity Function output;radar databases;ground radar clutter;statistical characterization;Extensive CAncellation;ground clutter statistics;ECA filter effects;PR systems;ground clutter effects;nonhomogeneous characterization;zero Doppler line;quadrature components;high Doppler shifts;Gaussian model;Clutter;Doppler effect;Surveillance;Passive radar;Phase change materials;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553400},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n This paper tackles the analysis of the Extensive CAncellation (ECA) filter effects in the statistical characterization of ground radar clutter. Real radar databases acquired by a DVB-T based Passive Radar (PR) system were characterized considering the Cross-Ambiguity Function output as the observation space. Goodness-of-fit tests were applied, and skewness and kurtosis values were estimated. The Gaussian model was proved to be suitable for high Doppler shifts. However, Mixture of Gammas and Gaussians distributions were proposed to model the intensity and the in-phase and quadrature components, respectively, of the region centred in the zero Doppler line. These results prove that, although the ECA filter rejects most of the interference components, a non-homogeneous characterization was required for Doppler shifts close to zero, where ground clutter effects concentrate. The proposed theoretical distributions will be useful for the formulation of optimum detectors based on the Neyman-Pearson criterion, improving the detection performance of PR systems.\n
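The skewness and kurtosis values estimated above are standard moment-based Gaussianity checks (both vanish for an ideal Gaussian population). A minimal pure-Python sketch of those estimators, with made-up sample data, not the authors' code:

```python
def skewness_kurtosis(x):
    """Sample skewness and excess kurtosis from central moments.
    Both are 0 for an ideal Gaussian population."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n  # variance
    m3 = sum((v - mean) ** 3 for v in x) / n  # third central moment
    m4 = sum((v - mean) ** 4 for v in x) / n  # fourth central moment
    skew = m3 / m2 ** 1.5
    excess_kurt = m4 / m2 ** 2 - 3.0
    return skew, excess_kurt

# A perfectly symmetric sample has zero skewness.
s, k = skewness_kurtosis([-2.0, -1.0, 0.0, 1.0, 2.0])
```

Departures of these statistics from zero in the zero-Doppler region are what motivate the non-Gaussian mixture models proposed in the paper.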
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rebroadcast Attacks: Defenses, Reattacks, and Redefenses.\n \n \n \n \n\n\n \n Fan, W.; Agarwal, S.; and Farid, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 942-946, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553401,\n  author = {W. Fan and S. Agarwal and H. Farid},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Rebroadcast Attacks: Defenses, Reattacks, and Redefenses},\n  year = {2018},\n  pages = {942-946},\n  abstract = {A rebroadcast attack, in which an image is manipulated and then re-imaged, is a simple attack against forensic techniques designed to distinguish original from edited images. Various techniques have been developed to detect rebroadcast attacks. These forensic analyses, however, face new threats from sophisticated machine learning techniques that are designed to modify images to circumvent detection. We describe a framework to analyze the resilience of rebroadcast detection to adversarial attacks. We describe the impact of repeated attacks and defenses on the efficacy of detecting rebroadcast content. This basic framework may be applicable to understanding the resilience of a variety of forensic techniques.},\n  keywords = {cryptography;image forensics;learning (artificial intelligence);rebroadcast attack;rebroadcast detection;adversarial attacks;repeated attacks;rebroadcast content;forensic techniques;edited images;sophisticated machine learning techniques;Detectors;Bars;Transform coding;Cameras;Forensics;Resilience;Training},\n  doi = {10.23919/EUSIPCO.2018.8553401},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436250.pdf},\n}\n\n
\n
\n\n\n
\n A rebroadcast attack, in which an image is manipulated and then re-imaged, is a simple attack against forensic techniques designed to distinguish original from edited images. Various techniques have been developed to detect rebroadcast attacks. These forensic analyses, however, face new threats from sophisticated machine learning techniques that are designed to modify images to circumvent detection. We describe a framework to analyze the resilience of rebroadcast detection to adversarial attacks. We describe the impact of repeated attacks and defenses on the efficacy of detecting rebroadcast content. This basic framework may be applicable to understanding the resilience of a variety of forensic techniques.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Real-Time DCT Learning-based Reconstruction of Neural Signals.\n \n \n \n \n\n\n \n Mahabadi, R. K.; Aprile, C.; and Cevher, V.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1925-1929, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553402,\n  author = {R. K. Mahabadi and C. Aprile and V. Cevher},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Real-Time DCT Learning-based Reconstruction of Neural Signals},\n  year = {2018},\n  pages = {1925-1929},\n  abstract = {Wearable and implantable body sensor network systems are one of the key technologies for continuous monitoring of a patient's vital health status such as temperature, blood pressure, and brain activity. Such devices are critical for early detection of emergency conditions of people at risk and offer a wide range of medical facilities and services. Despite continuous advances in the field of wearable and implantable medical devices, the field still faces major challenges such as energy-efficient and low-latency reconstruction of signals. This work presents a power-efficient real-time system for recovering neural signals. Such systems are of high interest for implantable medical devices, where reconstruction of neural signals needs to be done in real time with low energy consumption. We combine a deep network and a DCT-learning based compressive sensing framework to propose a novel and efficient compression-decompression system for neural signals. 
We compare our approach with state-of-the-art compressive sensing methods and show that it achieves superior reconstruction performance with significantly less computing time.},\n  keywords = {biomedical electronics;body sensor networks;compressed sensing;data compression;discrete cosine transforms;medical signal processing;patient monitoring;prosthetics;signal reconstruction;low energy consumption;neural signals;wearable body sensor network systems;implantable body sensor network systems;continuous monitoring;patient;blood pressure;medical facilities;continuous advances;implantable medical devices;low-latency reconstruction;power-efficient real-time system;reconstruction performance;energy-efficiency;DCT learning-based reconstruction;compression-decompression system;Compressed sensing;Training;Decoding;Real-time systems;Discrete cosine transforms;Monitoring;Neural signals;neural network;compressive sensing;learning-based signal processing;low-power;signal recovery},\n  doi = {10.23919/EUSIPCO.2018.8553402},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436259.pdf},\n}\n\n
\n
\n\n\n
\n Wearable and implantable body sensor network systems are one of the key technologies for continuous monitoring of a patient's vital health status such as temperature, blood pressure, and brain activity. Such devices are critical for early detection of emergency conditions of people at risk and offer a wide range of medical facilities and services. Despite continuous advances in the field of wearable and implantable medical devices, the field still faces major challenges such as energy-efficient and low-latency reconstruction of signals. This work presents a power-efficient real-time system for recovering neural signals. Such systems are of high interest for implantable medical devices, where reconstruction of neural signals needs to be done in real time with low energy consumption. We combine a deep network and a DCT-learning based compressive sensing framework to propose a novel and efficient compression-decompression system for neural signals. We compare our approach with state-of-the-art compressive sensing methods and show that it achieves superior reconstruction performance with significantly less computing time.\n
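The premise behind DCT-based compressive sensing is that smooth signals such as neural recordings concentrate their energy in few DCT coefficients. A small self-contained illustration of that sparsity (an O(n²) orthonormal DCT-II on a synthetic signal, not the paper's system):

```python
import math

def dct2(x):
    """Orthonormal DCT-II of a real sequence (O(n^2) reference version)."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(v * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, v in enumerate(x))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

# A single DCT-frequency cosine: all signal energy lands in one coefficient.
n, k0 = 64, 5
sig = [math.cos(math.pi * (2 * i + 1) * k0 / (2 * n)) for i in range(n)]
coeffs = dct2(sig)
```

Such sparsity is what allows a signal to be recovered from far fewer measurements than samples, which is the compression side of the system the abstract describes.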
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Operational Rate-Constrained Beamforming in Binaural Hearing Aids.\n \n \n \n \n\n\n \n Amini, J.; Hendriks, R. C.; Heusdens, R.; Guo, M.; and Jensen, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2504-2508, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553403,\n  author = {J. Amini and R. C. Hendriks and R. Heusdens and M. Guo and J. Jensen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Operational Rate-Constrained Beamforming in Binaural Hearing Aids},\n  year = {2018},\n  pages = {2504-2508},\n  abstract = {Modern binaural hearing aids (HAs) can collaborate wirelessly with each other as well as with other assistive (wireless) devices. This enables multi-microphone noise reduction over small wireless acoustic sensor networks (WASNs) to increase the intelligibility under adverse conditions. In this work, we assume one of the HAs to serve as the fusion center (FC). The optimal beamforming strategy for processing the received data at the FC depends on the acoustic scene and physical constraints (e.g., the bit-rate for transmission to the FC), and might be frequency dependent. Selection of the optimal beamforming strategy, while satisfying rate constraints on the communication between the different devices is an important challenge in such setups. In this paper, we propose an operational rate-constrained beamforming system for optimal rate allocation and strategy selection across frequency. We show an example of the proposed framework, where both the algorithm selection as well as the required rates to transmit the necessary microphone signals are optimized using uniform quantizers, while minimizing the mean-square error (MSE) distortion measure. In contrast to a well-known (theoretically optimal) reference method based on remote source coding for two devices, the presented algorithm is practically implementable and only requires knowledge of joint signal statistics at the FC. 
Evaluations (based on simulation experiments) show clear improvement over other practically implementable strategies.},\n  keywords = {acoustic communication (telecommunication);array signal processing;handicapped aids;hearing aids;mean square error methods;medical signal processing;microphones;quantisation (signal);sensor fusion;source coding;statistical analysis;wireless sensor networks;microphone signals;binaural hearing aids;binaural HA;assistive wireless devices;WASN;FC;operational rate-constrained beamforming system;mean-square error distortion minimization;MSE distortion minimization;remote source coding;joint signal statistics;fusion center;wireless acoustic sensor networks;multimicrophone noise reduction;optimal rate allocation;optimal beamforming strategy;Microphones;Array signal processing;Radio spectrum management;Wireless communication;Signal processing algorithms;Wireless sensor networks;Noise reduction;Binaural hearing aids;multi-microphone noise reduction;operational rate-distortion tradeoff},\n  doi = {10.23919/EUSIPCO.2018.8553403},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437704.pdf},\n}\n\n
\n
\n\n\n
\n Modern binaural hearing aids (HAs) can collaborate wirelessly with each other as well as with other assistive (wireless) devices. This enables multi-microphone noise reduction over small wireless acoustic sensor networks (WASNs) to increase the intelligibility under adverse conditions. In this work, we assume one of the HAs to serve as the fusion center (FC). The optimal beamforming strategy for processing the received data at the FC depends on the acoustic scene and physical constraints (e.g., the bit-rate for transmission to the FC), and might be frequency dependent. Selection of the optimal beamforming strategy, while satisfying rate constraints on the communication between the different devices is an important challenge in such setups. In this paper, we propose an operational rate-constrained beamforming system for optimal rate allocation and strategy selection across frequency. We show an example of the proposed framework, where both the algorithm selection as well as the required rates to transmit the necessary microphone signals are optimized using uniform quantizers, while minimizing the mean-square error (MSE) distortion measure. In contrast to a well-known (theoretically optimal) reference method based on remote source coding for two devices, the presented algorithm is practically implementable and only requires knowledge of joint signal statistics at the FC. Evaluations (based on simulation experiments) show clear improvement over other practically implementable strategies.\n
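The uniform quantizers mentioned above trade transmission rate against MSE distortion. A toy mid-rise quantizer makes the trade-off concrete (illustrative only; the paper optimizes the rate allocation jointly across frequency, which this sketch does not attempt):

```python
def uniform_quantize(x, bits, lo=-1.0, hi=1.0):
    """Mid-rise uniform quantizer: clamp x to [lo, hi] and return the
    centre of the cell it falls in (2**bits cells)."""
    levels = 2 ** bits
    step = (hi - lo) / levels
    idx = min(int((min(max(x, lo), hi) - lo) / step), levels - 1)
    return lo + (idx + 0.5) * step

# MSE distortion of the quantizer on a uniform grid of test samples.
samples = [i / 100.0 for i in range(-100, 101)]

def mse(bits):
    return sum((s - uniform_quantize(s, bits)) ** 2 for s in samples) / len(samples)
```

Each extra bit halves the step size, so the MSE drops roughly fourfold per bit; the operational framework in the paper chooses, per frequency, how many such bits each signal is worth.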
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Lossless Image Coding Method Based on Probability Model Optimization.\n \n \n \n \n\n\n \n Matsuda, I.; Ishikawa, T.; Kameda, Y.; and Itoh, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 151-155, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553404,\n  author = {I. Matsuda and T. Ishikawa and Y. Kameda and S. Itoh},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Lossless Image Coding Method Based on Probability Model Optimization},\n  year = {2018},\n  pages = {151-155},\n  abstract = {This paper proposes a novel lossless image coding method which directly estimates a probability distribution of image intensity values on a pel-by-pel basis. In the estimation process, several examples, i.e. a set of pels whose neighborhoods are similar to a local texture of the target pel to be encoded, are gathered from a search window located on an already encoded part of the same image. Then the probability distribution is modeled as a weighted sum of the Gaussian functions whose center positions are given by the individual examples. Furthermore, model parameters that control shapes of the Gaussian functions are numerically optimized so that the resulting coding rate of the image intensity values can be a minimum. Simulation results indicate that the proposed method provides comparable coding performance to the state-of-the-art lossless coding schemes proposed by other researchers.},\n  keywords = {image coding;optimisation;probability;Gaussian functions;model parameters;image intensity values;lossless image coding method;probability model optimization;probability distribution;estimation process;Encoding;Image coding;Probability distribution;Numerical models;Shape;Europe;Signal processing;lossless image coding;template matching;probability model;numerical optimization},\n  doi = {10.23919/EUSIPCO.2018.8553404},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438304.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a novel lossless image coding method which directly estimates a probability distribution of image intensity values on a pel-by-pel basis. In the estimation process, several examples, i.e. a set of pels whose neighborhoods are similar to a local texture of the target pel to be encoded, are gathered from a search window located on an already encoded part of the same image. Then the probability distribution is modeled as a weighted sum of the Gaussian functions whose center positions are given by the individual examples. Furthermore, model parameters that control shapes of the Gaussian functions are numerically optimized so that the resulting coding rate of the image intensity values can be a minimum. Simulation results indicate that the proposed method provides comparable coding performance to the state-of-the-art lossless coding schemes proposed by other researchers.\n
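A minimal sketch of such a probability model: a pmf over the discrete intensity range built as a weighted sum of Gaussians centred on the gathered examples. All parameter values here are hypothetical; the paper optimizes them numerically to minimize the coding rate:

```python
import math

def example_pmf(centers, weights, sigma, levels=256):
    """pmf over intensity values 0..levels-1: a weighted sum of Gaussians
    centred on the gathered examples, plus a tiny floor so every symbol
    stays codable, normalised to sum to one."""
    raw = [1e-12 + sum(w * math.exp(-0.5 * ((v - c) / sigma) ** 2)
                       for c, w in zip(centers, weights))
           for v in range(levels)]
    total = sum(raw)
    return [p / total for p in raw]

def code_length_bits(pmf, value):
    """Ideal arithmetic-coding cost of one pel under the model."""
    return -math.log2(pmf[value])

# Two gathered examples with intensities 100 and 105 make nearby values cheap.
pmf = example_pmf([100, 105], [1.0, 1.0], sigma=4.0)
```

The coding rate the paper minimizes is the sum of such per-pel code lengths over the image, with the Gaussian shape parameters as the optimization variables.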
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Learning-based Acoustic Source Localization in Acoustic Sensor Networks using the Coherent-to-Diffuse Power Ratio.\n \n \n \n \n\n\n \n Brendel, A.; and Kellermann, W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1572-1576, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553405,\n  author = {A. Brendel and W. Kellermann},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Learning-based Acoustic Source Localization in Acoustic Sensor Networks using the Coherent-to-Diffuse Power Ratio},\n  year = {2018},\n  pages = {1572-1576},\n  abstract = {A distributed learning-based algorithm for the localization of acoustic sources in an acoustic sensor network is proposed. It is based on estimates of the Coherent-to-Diffuse Power Ratio (CDR), which serve as a feature for the source-microphone distance, i.e., the range. The relation between the estimated CDR and the range is learned by using Gaussian processes for non-parametric regression. The range estimates obtained from evaluating the regression function are fused by a weighted least squares estimation, which is implemented recursively, allowing for a distributed version of the algorithm. The resulting method is computationally efficient, works in highly reverberant and noisy scenarios and needs only a small amount of data shared over the network. The training phase of the algorithm requires only a few labeled observations. 
We show the efficacy of the approach with data obtained from image-source simulation.},\n  keywords = {acoustic radiators;acoustic signal processing;Gaussian processes;learning (artificial intelligence);least squares approximations;microphone arrays;microphones;regression analysis;reverberation;telecommunication computing;wireless sensor networks;acoustic source localization;acoustic sensor network;coherent-to-diffuse power ratio;distributed learning-based algorithm;source-microphone distance;nonparametric regression;weighted least squares estimation;distributed version;image-source simulation;CDR;Signal processing algorithms;Sensors;Training;Training data;Acoustics;Signal processing;Estimation;Coherent-to-Diffuse Power Ratio;Gaussian Process Regression;Weighted Least Squares;Distributed Algorithm;Acoustic Sensor Network;Localization},\n  doi = {10.23919/EUSIPCO.2018.8553405},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437975.pdf},\n}\n\n
\n
\n\n\n
\n A distributed learning-based algorithm for the localization of acoustic sources in an acoustic sensor network is proposed. It is based on estimates of the Coherent-to-Diffuse Power Ratio (CDR), which serve as a feature for the source-microphone distance, i.e., the range. The relation between the estimated CDR and the range is learned by using Gaussian processes for non-parametric regression. The range estimates obtained from evaluating the regression function are fused by a weighted least squares estimation, which is implemented recursively, allowing for a distributed version of the algorithm. The resulting method is computationally efficient, works in highly reverberant and noisy scenarios and needs only a small amount of data shared over the network. The training phase of the algorithm requires only a few labeled observations. We show the efficacy of the approach with data obtained from image-source simulation.\n
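The fusion step above, weighted least squares over the per-node range estimates computed recursively, can be sketched for the scalar case. The estimates and confidence weights below are made up for illustration; the recursive form matches the batch solution exactly, which is what permits the distributed implementation:

```python
def batch_wls(estimates, weights):
    """Batch weighted least-squares estimate of one scalar: the weighted mean."""
    return sum(w * z for w, z in zip(weights, estimates)) / sum(weights)

class RecursiveWLS:
    """Recursive form of the same estimator: fold in one (estimate, weight)
    pair at a time, as a node would when data arrives over the network."""
    def __init__(self):
        self.w_sum = 0.0
        self.x = 0.0

    def update(self, z, w):
        self.w_sum += w
        self.x += (w / self.w_sum) * (z - self.x)  # weighted innovation step
        return self.x

# Fuse three hypothetical range estimates (metres) with confidence weights.
rwls = RecursiveWLS()
for z, w in [(2.0, 1.0), (2.5, 4.0), (1.8, 2.0)]:
    fused = rwls.update(z, w)
```

Only the running weight sum and current estimate are shared, which is why the amount of data exchanged over the network stays small.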
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Fast Eigen-Based Signal Combining Algorithm by Using CORDIC.\n \n \n \n \n\n\n \n Wang, L.; Wang, D.; and Hao, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 618-622, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553406,\n  author = {L. Wang and D. Wang and C. Hao},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Fast Eigen-Based Signal Combining Algorithm by Using CORDIC},\n  year = {2018},\n  pages = {618-622},\n  abstract = {For reliable reception of weak signals, eigen-based signal combining algorithms are very effective. However, the algorithms involve a heavy computational burden. In this paper, a fast eigen-based signal combining algorithm is proposed by using the coordinate rotation digital computer (CORDIC) method. CORDIC can use addition and bitshift operations to replace the multiplications in the eigen-based signal combining algorithms. Simulation results indicate that the proposed algorithm can reduce the computational cost while providing good combining performance.},\n  keywords = {digital arithmetic;signal processing;CORDIC;weak signals;fast eigen-based signal combining algorithm;rotation digital computer method;coordinate rotation digital computer method;bitshift operations;addition operations;Signal processing algorithms;Correlation;Signal to noise ratio;Computational efficiency;Europe;Estimation},\n  doi = {10.23919/EUSIPCO.2018.8553406},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570429181.pdf},\n}\n\n
\n
\n\n\n
\n For reliable reception of weak signals, eigen-based signal combining algorithms are very effective. However, the algorithms involve a heavy computational burden. In this paper, a fast eigen-based signal combining algorithm is proposed by using the coordinate rotation digital computer (CORDIC) method. CORDIC can use addition and bitshift operations to replace the multiplications in the eigen-based signal combining algorithms. Simulation results indicate that the proposed algorithm can reduce the computational cost while providing good combining performance.\n
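The shift-add idea behind CORDIC can be sketched in a few lines. This is the textbook rotation mode in floating point, where multiplying by 2.0**-i stands in for the hardware bit-shift; it is not the paper's fixed-point implementation:

```python
import math

# Iteration count and the arctan lookup table a hardware design would precompute.
N = 32
ATAN = [math.atan(2.0 ** -i) for i in range(N)]

# CORDIC gain: each micro-rotation stretches the vector by sqrt(1 + 2**-2i),
# so the result is rescaled once at the end.
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_cos_sin(theta):
    """Rotation-mode CORDIC: rotate (1, 0) towards angle theta using only
    add/subtract steps and halvings (bit-shifts in hardware)."""
    x, y, z = 1.0, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0.0 else -1.0       # rotate towards the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ATAN[i]
    return K * x, K * y

c, s = cordic_cos_sin(math.pi / 5)
```

Each iteration adds roughly one bit of accuracy, so the multiplier-free loop converges to the exact rotation, which is what makes it attractive for the eigenvector computations the paper accelerates.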
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Plenoptic Sensor: Application to Extend Field-of-View.\n \n \n \n \n\n\n \n Vandame, B.; Drazic, V.; Hog, M.; and Sabater, N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2205-2209, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553408,\n  author = {B. Vandame and V. Drazic and M. Hog and N. Sabater},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Plenoptic Sensor: Application to Extend Field-of-View},\n  year = {2018},\n  pages = {2205-2209},\n  abstract = {In this paper we study the light field sampling produced by ideal plenoptic sensors, an emerging technology providing new optical capabilities. In particular, we leverage its potential with a new optical design that couples a pyramid lens with an ideal plenoptic sensor. The main advantage is that it extends the field-of-view (FOV) of a main-lens without changing its focal length. To evince the utility of the proposed design we have performed different experiments. First, we demonstrate on simulated synthetic images that our optical design effectively doubles the FOV. Then, we show its feasibility with two different prototypes using plenoptic cameras on the market with very different plenoptic samplings, namely a Raytrix R5 and a Canon 5D MarkIV. Arguably, future cameras with ideal plenoptic sensors will be able to be coupled with pyramid lenses to extend their inherent FOV in a single snapshot.},\n  keywords = {cameras;image resolution;image sampling;image sensors;lenses;optical design techniques;plenoptic cameras;field-of-view;light field;optical capabilities;optical design;plenoptic samplings;plenoptic sensor;Raytrix R5;Canon 5D MarkIV;Cameras;Lenses;Apertures;Photonics;Europe;Signal processing;Optical imaging},\n  doi = {10.23919/EUSIPCO.2018.8553408},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436617.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we study the light field sampling produced by ideal plenoptic sensors, an emerging technology providing new optical capabilities. In particular, we leverage its potential with a new optical design that couples a pyramid lens with an ideal plenoptic sensor. The main advantage is that it extends the field-of-view (FOV) of a main-lens without changing its focal length. To evince the utility of the proposed design we have performed different experiments. First, we demonstrate on simulated synthetic images that our optical design effectively doubles the FOV. Then, we show its feasibility with two different prototypes using plenoptic cameras on the market with very different plenoptic samplings, namely a Raytrix R5 and a Canon 5D MarkIV. Arguably, future cameras with ideal plenoptic sensors will be able to be coupled with pyramid lenses to extend their inherent FOV in a single snapshot.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Virtual camera modeling for multi-view simulation of surveillance scenes.\n \n \n \n \n\n\n \n Bisagno, N.; and Conci, N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2170-2174, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553409,\n  author = {N. Bisagno and N. Conci},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Virtual camera modeling for multi-view simulation of surveillance scenes},\n  year = {2018},\n  pages = {2170-2174},\n  abstract = {A recent trend in research is to leverage advanced simulation frameworks for the implementation and validation of video surveillance and ambient intelligence algorithms. However, in order to guarantee a seamless transferability between the virtual and real worlds, the simulator is required to represent the real-world target scenario in the best way possible. This includes on the one hand the appearance of the scene and the motion of objects, and, on the other hand, it should be accurate with respect to the sensing equipment that will be used in the acquisition phase. This paper focuses on the latter problem related to camera modeling and control, discussing how noise and distortions can be handled, and implementing an engine for camera motion control in terms of pan, tilt, and zoom, with particular attention to the video surveillance scenario.},\n  keywords = {cameras;image motion analysis;object detection;video surveillance;virtual reality;virtual worlds;real-world target scenario;sensing equipment;acquisition phase;camera motion control;video surveillance scenario;virtual camera modeling;multiview simulation;surveillance scenes;advanced simulation frameworks;ambient intelligence algorithms;seamless transferability;Cameras;Distortion;Lenses;Optical distortion;Computational modeling;Apertures;Image resolution;Camera model;PTZ;video surveillance},\n  doi = {10.23919/EUSIPCO.2018.8553409},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439435.pdf},\n}\n\n
\n
\n\n\n
\n A recent trend in research is to leverage advanced simulation frameworks for the implementation and validation of video surveillance and ambient intelligence algorithms. However, in order to guarantee a seamless transferability between the virtual and real worlds, the simulator is required to represent the real-world target scenario in the best way possible. This includes on the one hand the appearance of the scene and the motion of objects, and, on the other hand, it should be accurate with respect to the sensing equipment that will be used in the acquisition phase. This paper focuses on the latter problem related to camera modeling and control, discussing how noise and distortions can be handled, and implementing an engine for camera motion control in terms of pan, tilt, and zoom, with particular attention to the video surveillance scenario.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Noisy cGMM: Complex Gaussian Mixture Model with Non-Sparse Noise Model for Joint Source Separation and Denoising.\n \n \n \n \n\n\n \n Ito, N.; Schymura, C.; Araki, S.; and Nakatani, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1662-1666, Sep. 2018. \n \n\n\n\n
\n
@InProceedings{8553410,\n  author = {N. Ito and C. Schymura and S. Araki and T. Nakatani},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Noisy cGMM: Complex Gaussian Mixture Model with Non-Sparse Noise Model for Joint Source Separation and Denoising},\n  year = {2018},\n  pages = {1662-1666},\n  abstract = {Here we introduce a noisy cGMM, a probabilistic model for noisy, mixed signals observed by a microphone array for joint source separation and denoising. In a conventional time-varying complex Gaussian mixture model (cGMM), the observed signals are assumed to be composed of sparse target signals only, where the sparseness refers to the property of having significant power at only a few time-frequency points. However, this assumption becomes inaccurate in the presence of non-sparse signals such as background noise, which renders speech enhancement based on the cGMM less effective. In contrast, the proposed noisy cGMM is based on the assumption that the observed signals consist of not only sparse target signals but also non-sparse background noise. This enables the noisy cGMM to model the observed signals accurately even in the presence of non-sparse background noise, which leads to effective speech enhancement. We also propose a joint diagonalization-based algorithm for estimating the model parameters of the noisy cGMM, which is significantly faster than the standard EM algorithm without any performance degradation. Indeed, the joint diagonalization bypasses the need for matrix inversion, matrix multiplication, and determinant computation at each time-frequency point, which are needed in the EM algorithm. 
In an experiment, the noisy cGMM outperformed the cGMM in joint source separation and denoising.},\n  keywords = {Gaussian processes;microphone arrays;mixture models;probability;signal denoising;source separation;speech enhancement;time-frequency analysis;nonsparse background noise;noisy cGMM;time-frequency point;nonsparse noise model;probabilistic model;noisy signals;mixed signals;conventional time-varying complex Gaussian mixture model;sparse target signals;nonsparse signals;joint source separation;microphone array;speech enhancement;joint diagonalization-based algorithm;denoising;Noise measurement;Signal processing algorithms;Covariance matrices;Time-frequency analysis;Noise reduction;Source separation;Microphones},\n  doi = {10.23919/EUSIPCO.2018.8553410},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437891.pdf},\n}\n\n
\n
\n\n\n
\n Here we introduce a noisy cGMM, a probabilistic model for noisy, mixed signals observed by a microphone array for joint source separation and denoising. In a conventional time-varying complex Gaussian mixture model (cGMM), the observed signals are assumed to be composed of sparse target signals only, where the sparseness refers to the property of having significant power at only a few time-frequency points. However, this assumption becomes inaccurate in the presence of non-sparse signals such as background noise, which renders speech enhancement based on the cGMM less effective. In contrast, the proposed noisy cGMM is based on the assumption that the observed signals consist of not only sparse target signals but also non-sparse background noise. This enables the noisy cGMM to model the observed signals accurately even in the presence of non-sparse background noise, which leads to effective speech enhancement. We also propose a joint diagonalization-based algorithm for estimating the model parameters of the noisy cGMM, which is significantly faster than the standard EM algorithm without any performance degradation. Indeed, the joint diagonalization bypasses the need for matrix inversion, matrix multiplication, and determinant computation at each time-frequency point, which are needed in the EM algorithm. In an experiment, the noisy cGMM outperformed the cGMM in joint source separation and denoising.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Faster FISTA.\n \n \n \n \n\n\n \n Liang, J.; and Schönlieb, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1-9, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FasterPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553411,\n  author = {J. Liang and C. Schönlieb},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Faster FISTA},\n  year = {2018},\n  pages = {1-9},\n  abstract = {The “fast iterative shrinkage-thresholding algorithm”, a.k.a. FISTA, is one of the most widely used algorithms in the literature. However, despite its optimal theoretical O(1/k2) convergence rate guarantee, oftentimes in practice its performance is not as desired owing to the (local) oscillatory behaviour. Over the years, various approaches are proposed to overcome this drawback of FISTA, in this paper, we propose a simple yet effective modification to the algorithm which has two advantages: 1) it enables us to prove the convergence of the generated sequence; 2) it shows superior practical performance compared to the original FISTA. Numerical experiments are presented to illustrate the superior performance of the proposed algorithm.},\n  keywords = {iterative methods;optimal theoretical O;widely used algorithms;fast iterative shrinkage-thresholding algorithm;faster FISTA;original FISTA;superior practical performance;simple yet effective modification;oscillatory behaviour;desired owing;practice its performance;2;1/k;Convergence;Europe;Signal processing;Inverse problems;Radio frequency;Linear programming;Signal processing algorithms;FISTA;Forward-Backward splitting;Inertial schemes;Convergence rates;Acceleration},\n  doi = {10.23919/EUSIPCO.2018.8553411},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433958.pdf},\n}\n\n
\n
\n\n\n
\n The “fast iterative shrinkage-thresholding algorithm”, a.k.a. FISTA, is one of the most widely used algorithms in the literature. However, despite its optimal theoretical O(1/k²) convergence rate guarantee, its practical performance is often not as desired, owing to its (local) oscillatory behaviour. Over the years, various approaches have been proposed to overcome this drawback of FISTA. In this paper, we propose a simple yet effective modification to the algorithm which has two advantages: 1) it enables us to prove the convergence of the generated sequence; 2) it shows superior practical performance compared to the original FISTA. Numerical experiments are presented to illustrate the superior performance of the proposed algorithm.\n
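For context, the baseline FISTA iteration that the paper modifies can be sketched as follows for the LASSO problem (a minimal illustration of the standard algorithm, not the paper's modified variant; all names are illustrative):

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of tau*||.||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def fista(A, b, lam, n_iter=300):
    """Baseline FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)               # gradient of the smooth term at y
        x_new = soft_threshold(y - grad / L, lam / L)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # inertial extrapolation step
        x, t = x_new, t_new
    return x
```

The inertial extrapolation step is the source of the oscillatory behaviour the abstract refers to, and is what the paper's modification targets.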
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Decision Level Fusion: An Event Driven Approach.\n \n \n \n \n\n\n \n Roheda, S.; Krim, H.; Luo, Z.; and Wu, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2598-2602, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DecisionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553412,\n  author = {S. Roheda and H. Krim and Z. Luo and T. Wu},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Decision Level Fusion: An Event Driven Approach},\n  year = {2018},\n  pages = {2598-2602},\n  abstract = {This paper presents a technique that combines the occurrence of certain events, as observed by different sensors, in order to detect and classify objects. This technique explores the extent of dependence between features being observed by the sensors, and generates more informed probability distributions over the events. Provided some additional information about the features of the object, this fusion technique can outperform other existing decision level fusion approaches that may not take into account the relationship between different features.},\n  keywords = {probability;sensor fusion;decision level fusion approaches;event driven approach;probability distributions;sensors;Sensor Fusion;Decision Level Fusion;Event based Classification;Coupling},\n  doi = {10.23919/EUSIPCO.2018.8553412},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437553.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a technique that combines the occurrence of certain events, as observed by different sensors, in order to detect and classify objects. This technique explores the extent of dependence between features being observed by the sensors, and generates more informed probability distributions over the events. Provided some additional information about the features of the object, this fusion technique can outperform other existing decision level fusion approaches that may not take into account the relationship between different features.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improving Portfolios Global Performance with Robust Covariance Matrix Estimation: Application to the Maximum Variety Portfolio.\n \n \n \n \n\n\n \n Jay, E.; Terreaux, E.; Ovarlez, J.; and Pascal, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1107-1111, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553414,\n  author = {E. Jay and E. Terreaux and J. Ovarlez and F. Pascal},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Improving Portfolios Global Performance with Robust Covariance Matrix Estimation: Application to the Maximum Variety Portfolio},\n  year = {2018},\n  pages = {1107-1111},\n  abstract = {This paper presents how the most recent improvements made on covariance matrix estimation and model order selection can be applied to the portfolio optimisation problem. The particular case of the Maximum Variety Portfolio is treated but the same improvements apply also in the other optimisation problems such as the Minimum Variance Portfolio. We assume that the most important information (or the latent factors) are embedded in correlated Elliptical Symmetric noise extending classical Gaussian assumptions. We propose here to focus on a recent method of model order selection allowing to efficiently estimate the subspace of main factors describing the market. This non-standard model order selection problem is solved through Random Matrix Theory and robust covariance matrix estimation. 
The proposed procedure will be explained through synthetic data and be applied and compared with standard techniques on real market data showing promising improvements.},\n  keywords = {covariance matrices;estimation theory;investment;optimisation;nonstandard model order selection problem;robust covariance matrix estimation;maximum variety portfolio;portfolio optimisation problem;random matrix theory;minimum variance portfolio;portfolios global performance;correlated elliptical symmetric noise;classical Gaussian assumptions;Covariance matrices;Portfolios;Estimation;Resource management;Eigenvalues and eigenfunctions;Europe;Optimization;Robust Covariance Matrix Estimation;Model Order Selection;Random Matrix Theory;Portfolio Optimisation;Financial Time Series;Multi-Factor Model;Elliptical Symmetric Noise;Maximum Variety Portfolio},\n  doi = {10.23919/EUSIPCO.2018.8553414},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437855.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents how the most recent improvements in covariance matrix estimation and model order selection can be applied to the portfolio optimisation problem. The particular case of the Maximum Variety Portfolio is treated, but the same improvements also apply to other optimisation problems such as the Minimum Variance Portfolio. We assume that the most important information (the latent factors) is embedded in correlated Elliptical Symmetric noise, extending classical Gaussian assumptions. We focus on a recent method of model order selection that allows efficient estimation of the subspace of main factors describing the market. This non-standard model order selection problem is solved through Random Matrix Theory and robust covariance matrix estimation. The proposed procedure is explained on synthetic data, then applied to real market data and compared with standard techniques, showing promising improvements.\n
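For reference, the "variety" (diversification) ratio that the Maximum Variety Portfolio maximises can be computed as below (a generic sketch under the standard definition of the diversification ratio; the paper's robust covariance estimation and model order selection steps are not reproduced, and `variety_ratio` is an illustrative name):

```python
import numpy as np

def variety_ratio(w, Sigma):
    """Diversification ratio: weighted average asset volatility over portfolio volatility."""
    vols = np.sqrt(np.diag(Sigma))         # individual asset volatilities
    port_vol = np.sqrt(w @ Sigma @ w)      # portfolio volatility under covariance Sigma
    return float(w @ vols / port_vol)
```

With uncorrelated assets the ratio exceeds 1, and it collapses to 1 when all assets are perfectly correlated, which is why the quality of the covariance estimate Sigma directly drives the quality of the optimised portfolio.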
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Identification of Multichannel AR Models with Additive Noise: a Frisch Scheme Approach.\n \n \n \n \n\n\n \n Diversi, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1252-1256, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"IdentificationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553415,\n  author = {R. Diversi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Identification of Multichannel AR Models with Additive Noise: a Frisch Scheme Approach},\n  year = {2018},\n  pages = {1252-1256},\n  abstract = {A new approach for estimating multichannel AR (M-AR) models from noisy observations is proposed. It relies on the so-called Frisch scheme, whose rationale consists in finding the solution of the identification problem within a locus of solutions compatible with the second order statistics of the noisy data. Once that the locus of solutions has been defined, it is necessary to introduce a suitable selection criterion in order to identify a single solution. The criterion proposed in the paper is based on the comparison of the theoretical statistical properties of the residual of the noisy M-AR model with those computed from the data. The results obtained by means of Monte Carlo simulations show that the proposed algorithm outperforms some existing methods.},\n  keywords = {identification;least squares approximations;Monte Carlo methods;multichannel AR models;additive noise;Frisch scheme approach;noisy observations;identification problem;order statistics;noisy data;single solution;theoretical statistical properties;noisy M-AR model;selection criterion;Mathematical model;Noise measurement;Additive noise;Covariance matrices;Biological system modeling;Computational modeling;Matrix decomposition},\n  doi = {10.23919/EUSIPCO.2018.8553415},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437219.pdf},\n}\n\n
\n
\n\n\n
\n A new approach for estimating multichannel AR (M-AR) models from noisy observations is proposed. It relies on the so-called Frisch scheme, whose rationale consists in finding the solution of the identification problem within a locus of solutions compatible with the second order statistics of the noisy data. Once the locus of solutions has been defined, it is necessary to introduce a suitable selection criterion in order to identify a single solution. The criterion proposed in the paper is based on comparing the theoretical statistical properties of the residual of the noisy M-AR model with those computed from the data. The results obtained by means of Monte Carlo simulations show that the proposed algorithm outperforms some existing methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Centerline articulatory models of the velum and epiglottis for articulatory synthesis of speech.\n \n \n \n \n\n\n \n Laprie, Y.; Elie, B.; Tsukanova, A.; and Vuissoz, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2110-2114, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CenterlinePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553416,\n  author = {Y. Laprie and B. Elie and A. Tsukanova and P. Vuissoz},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Centerline articulatory models of the velum and epiglottis for articulatory synthesis of speech},\n  year = {2018},\n  pages = {2110-2114},\n  abstract = {This work concerns the construction of articulatory models for synthesis of speech, and more specifically the velum and epiglottis. The direct application of principal component analysis to the contours of these articulators extracted from MRI images results in unrealistic factors due to delineation errors. The approach described in this paper relies on the application of PCA to the centerline of the articulator and a simple reconstruction algorithm to obtain the global articulator contour. The complete articulatory model was constructed from static Magnetic Resonance (MR) images because their quality is much better than that of dynamic MR images. We thus assessed the extent to which the model constructed from static images is capable of approaching the vocal tract shape in MR images recorded at 55 Hz for continuous speech. 
The analysis of reconstruction errors shows that it is necessary to add dynamic images to the database of static images, in particular to approach the tongue shape for the /I/sound.},\n  keywords = {biomedical MRI;image reconstruction;medical image processing;principal component analysis;speech;speech synthesis;articulatory synthesis;centerline articulatory models;dynamic images;reconstruction errors;continuous speech;static images;dynamic MR images;static Magnetic Resonance images;complete articulatory model;global articulator contour;simple reconstruction algorithm;principal component analysis;epiglottis;velum;frequency 55.0 Hz;Principal component analysis;Tongue;Shape;Magnetic resonance imaging;Larynx;Extremities;Image edge detection;Speech;Articulatory models;MRI;Deformable objects},\n  doi = {10.23919/EUSIPCO.2018.8553416},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438298.pdf},\n}\n\n
\n
\n\n\n
\n This work concerns the construction of articulatory models for the synthesis of speech, and more specifically of the velum and epiglottis. The direct application of principal component analysis to the contours of these articulators extracted from MRI images results in unrealistic factors due to delineation errors. The approach described in this paper relies on the application of PCA to the centerline of the articulator and a simple reconstruction algorithm to obtain the global articulator contour. The complete articulatory model was constructed from static Magnetic Resonance (MR) images because their quality is much better than that of dynamic MR images. We thus assessed the extent to which the model constructed from static images is capable of approaching the vocal tract shape in MR images recorded at 55 Hz for continuous speech. The analysis of reconstruction errors shows that it is necessary to add dynamic images to the database of static images, in particular to approach the tongue shape for the /I/ sound.\n
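The underlying PCA step on articulator point coordinates can be sketched as a generic linear shape model (assuming each training shape is flattened into a row of coordinates; this is not the paper's centerline-specific construction, and the function names are illustrative):

```python
import numpy as np

def fit_shape_model(X, k):
    """PCA shape model: mean shape plus k principal deformation components.

    X: (n_shapes, n_coords) matrix, one flattened contour per row."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                       # mean shape and top-k components

def synthesize(mu, components, weights):
    """Reconstruct a shape from its factor weights."""
    return mu + weights @ components
```

Applying this to raw delineated contours is what produces the unrealistic factors mentioned above; the paper applies it to the centerline instead and rebuilds the full contour afterwards.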
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Joint Low Mutual and Average Coherence Dictionary Learning.\n \n \n \n \n\n\n \n Parsa, J.; Sadeghi, M.; Babaie-Zadeh, M.; and Jutten, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1725-1729, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553417,\n  author = {J. Parsa and M. Sadeghi and M. Babaie-Zadeh and C. Jutten},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Joint Low Mutual and Average Coherence Dictionary Learning},\n  year = {2018},\n  pages = {1725-1729},\n  abstract = {Dictionary learning (DL) has found many applications in sparse approximation problems. Two important properties of a dictionary are maximum and average coherence (cross- correlation) between its atoms. Many algorithms have been presented to take into account the coherence between atoms during dictionary learning. Some of them mainly reduce the maximum (mutual) coherence whereas some other algorithms decrease the average coherence. In this paper, we propose a method to jointly reduce the maximum and average correlations between different atoms. This is done by making a balance between reducing the maximum and average co- herences. Experimental results demonstrate that the proposed approach reduce the mutual and average coherence of the dictionary better than existing algorithms.},\n  keywords = {approximation theory;signal processing;sparse approximation problems;maximum coherence;average correlations;mutual coherence;joint low mutual-average coherence dictionary learning;Coherence;Signal processing algorithms;Dictionaries;Machine learning;Approximation algorithms;Matching pursuit algorithms;Signal to noise ratio;Compressed sensing;sparse coding;mutual coherence;average coherence;dictionary learning},\n  doi = {10.23919/EUSIPCO.2018.8553417},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437120.pdf},\n}\n\n
\n
\n\n\n
\n Dictionary learning (DL) has found many applications in sparse approximation problems. Two important properties of a dictionary are the maximum and average coherence (cross-correlation) between its atoms. Many algorithms have been presented to take the coherence between atoms into account during dictionary learning. Some of them mainly reduce the maximum (mutual) coherence, whereas other algorithms decrease the average coherence. In this paper, we propose a method to jointly reduce the maximum and average correlations between different atoms. This is done by striking a balance between reducing the maximum and average coherences. Experimental results demonstrate that the proposed approach reduces the mutual and average coherence of the dictionary better than existing algorithms.\n
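The two quantities being traded off can be computed directly from the dictionary's Gram matrix (a minimal sketch of the standard definitions; `coherences` is an illustrative name):

```python
import numpy as np

def coherences(D):
    """Mutual (max) and average absolute inner product between unit-norm atoms."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)   # normalise each atom (column)
    G = np.abs(Dn.T @ Dn)                               # absolute Gram matrix
    off = G[~np.eye(G.shape[0], dtype=bool)]            # off-diagonal entries only
    return float(off.max()), float(off.mean())
```

An orthonormal dictionary has both coherences equal to zero; overcomplete dictionaries cannot, which is why learning methods minimise one or the other (or, as here, both jointly).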
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the Automatic Validation of Speech Alignment.\n \n \n \n \n\n\n \n Athanasopoulos, G.; and Macq, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2105-2109, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553418,\n  author = {G. Athanasopoulos and B. Macq},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {On the Automatic Validation of Speech Alignment},\n  year = {2018},\n  pages = {2105-2109},\n  abstract = {The alignment of two utterances is the basis of many speech processing applications. The acoustic user interface of such applications should be capable of detecting insufficient alignment results and identifying the responsible input utterances. In this paper, we discuss the automatic validation of speech alignment and propose two new validation algorithms. The first method relies on locating and matching the syllable nuclei of the aligned utterances. The second method performs syllable-level comparison of the speech signal envelopes in accordance to the alignment time-warping path. Experimental results show that the proposed algorithms perform consistently well and can be effectively applied for the validation of different speech alignment methods.},\n  keywords = {speech recognition;speech alignment methods;responsible input utterances;insufficient alignment results;acoustic user interface;speech processing applications;automatic validation;alignment time-warping path;speech signal envelopes;syllable-level comparison;aligned utterances;Hidden Markov models;Speech processing;Acoustics;Phonetics;Training;Europe;Timing;speech alignment;HMM-based forced alignment;dynamic time warping;alignment assessment},\n  doi = {10.23919/EUSIPCO.2018.8553418},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438068.pdf},\n}\n\n
\n
\n\n\n
\n The alignment of two utterances is the basis of many speech processing applications. The acoustic user interface of such applications should be capable of detecting insufficient alignment results and identifying the responsible input utterances. In this paper, we discuss the automatic validation of speech alignment and propose two new validation algorithms. The first method relies on locating and matching the syllable nuclei of the aligned utterances. The second method performs syllable-level comparison of the speech signal envelopes in accordance with the alignment time-warping path. Experimental results show that the proposed algorithms perform consistently well and can be effectively applied for the validation of different speech alignment methods.\n
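The time-warping path that the second method compares envelopes along can be obtained with standard dynamic time warping, sketched here in a generic 1-D form (not the paper's validation algorithm itself):

```python
def dtw_path(x, y):
    """Dynamic time warping of two 1-D sequences; returns total cost and warping path."""
    n, m = len(x), len(y)
    INF = float("inf")
    # D[i][j]: minimal cumulative cost aligning x[:i] with y[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])
            D[i][j] = d + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    # Backtrack from the end to recover the optimal path of index pairs.
    path, i, j = [], n, m
    while (i, j) != (0, 0):
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    return D[n][m], path[::-1]
```

For identical sequences the cost is zero and the path is the diagonal; validating an alignment then amounts to checking properties of this path (and of the features along it) rather than recomputing the alignment.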
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Anomalous Sound Event Detection Based on WaveNet.\n \n \n \n \n\n\n \n Hayashi, T.; Komatsu, T.; Kondo, R.; Toda, T.; and Takeda, K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2494-2498, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnomalousPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553423,\n  author = {T. Hayashi and T. Komatsu and R. Kondo and T. Toda and K. Takeda},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Anomalous Sound Event Detection Based on WaveNet},\n  year = {2018},\n  pages = {2494-2498},\n  abstract = {This paper proposes a new method of anomalous sound event detection for use in public spaces. The proposed method utilizes WaveNet, a generative model based on a convolutional neural network, to model in the time domain the various acoustic patterns which occur in public spaces. When the model detects unknown acoustic patterns, they are identified as anomalous sound events. WaveNet has been used to precisely model a waveform signal and to directly generate it using random sampling in generation tasks, such as speech synthesis. On the other hand, our proposed method uses WaveNet as a predictor rather than a generator to detect waveform segments causing large prediction errors as unknown acoustic patterns. Because WaveNet is capable of modeling detailed temporal structures, such as phase information, of the waveform signals, the proposed method is expected to detect anomalous sound events more accurately than conventional methods based on reconstruction errors of acoustic features. To evaluate the performance of the proposed method, we conduct an experimental evaluation using a real-world dataset recorded in a subway station. We compare the proposed method with the conventional feature-based methods such as an auto-encoder and a long short-term memory network. 
Experimental results demonstrate that the proposed method outperforms the conventional methods and that the prediction errors of WaveNet can be effectively used as a good metric for unsupervised anomalous detection.},\n  keywords = {acoustic signal processing;feature extraction;learning (artificial intelligence);neural nets;pattern clustering;signal sampling;speech synthesis;public spaces;WaveNet;generative model;convolutional neural network;anomalous sound events;waveform signal;generation tasks;prediction errors;acoustic features;unsupervised anomalous detection;anomalous sound event detection;unknown acoustic patterns;random sampling;Acoustics;Entropy;Feature extraction;Convolution;Event detection;Time-domain analysis;Task analysis;anomaly detection;anomalous sound event detection;WaveNet;neural network},\n  doi = {10.23919/EUSIPCO.2018.8553423},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437578.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes a new method of anomalous sound event detection for use in public spaces. The proposed method utilizes WaveNet, a generative model based on a convolutional neural network, to model, in the time domain, the various acoustic patterns that occur in public spaces. When the model detects unknown acoustic patterns, they are identified as anomalous sound events. WaveNet has been used to precisely model a waveform signal and to directly generate it using random sampling in generation tasks, such as speech synthesis. On the other hand, our proposed method uses WaveNet as a predictor rather than a generator, detecting waveform segments that cause large prediction errors as unknown acoustic patterns. Because WaveNet is capable of modeling detailed temporal structures, such as phase information, of the waveform signals, the proposed method is expected to detect anomalous sound events more accurately than conventional methods based on reconstruction errors of acoustic features. To evaluate the performance of the proposed method, we conduct an experimental evaluation using a real-world dataset recorded in a subway station. We compare the proposed method with conventional feature-based methods such as an auto-encoder and a long short-term memory network. Experimental results demonstrate that the proposed method outperforms the conventional methods and that the prediction errors of WaveNet can be effectively used as a good metric for unsupervised anomaly detection.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Rendering Solution to Display Light Field in Virtual Reality.\n \n \n \n \n\n\n \n Upenik, E.; Viola, I.; and Ebrahimi, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 246-250, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553424,\n  author = {E. Upenik and I. Viola and T. Ebrahimi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Rendering Solution to Display Light Field in Virtual Reality},\n  year = {2018},\n  pages = {246-250},\n  abstract = {There is a need for affordable and easily deployable rendering and display solutions to take full advantage of light field imaging. In particular, the research community needs ways to better assess the impact of various light field imaging technologies based on different criteria. In this paper, we propose a solution to render light field images on head mounted virtual reality displays using off-the-shelf components. The proposed framework is based on WebGL and supports rendering of narrow baseline light field images with interactions that result in changes of perspective. The system can be used in subjective quality assessments of light field via crowd-sourcing. It also allows for tracking and recording of viewing interactions.},\n  keywords = {helmet mounted displays;rendering (computer graphics);three-dimensional displays;virtual reality;narrow baseline light field images;virtual reality displays;light field imaging;display solutions;easily deployable rendering;affordable rendering;display light field;rendering solution;Bit rate;Rendering (computer graphics);Software;Image coding;Signal processing;Quality assessment;Resists},\n  doi = {10.23919/EUSIPCO.2018.8553424},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437659.pdf},\n}\n\n
\n
\n\n\n
\n There is a need for affordable and easily deployable rendering and display solutions to take full advantage of light field imaging. In particular, the research community needs ways to better assess the impact of various light field imaging technologies based on different criteria. In this paper, we propose a solution to render light field images on head mounted virtual reality displays using off-the-shelf components. The proposed framework is based on WebGL and supports rendering of narrow baseline light field images with interactions that result in changes of perspective. The system can be used in subjective quality assessments of light field via crowd-sourcing. It also allows for tracking and recording of viewing interactions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Inferring User Gender from User Generated Visual Content on a Deep Semantic Space.\n \n \n \n \n\n\n \n Semedo, D.; Magalhães, J.; and Martins, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1800-1804, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"InferringPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553425,\n  author = {D. Semedo and J. Magalhães and F. Martins},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Inferring User Gender from User Generated Visual Content on a Deep Semantic Space},\n  year = {2018},\n  pages = {1800-1804},\n  abstract = {In this paper we address the task of gender classification on picture sharing social media networks such as Instagram and Flickr. We aim to infer the gender of an user given only a small set of the images shared in its profile. We make the assumption that user's images contain a collection of visual elements that implicitly encode discriminative patterns that allow inferring its gender, in a language independent way. This information can then be used in personalisation and recommendation. Our main hypothesis is that semantic visual features are more adequate for discriminating high-level classes. The gender detection task is formalised as: given an user's profile, represented as a bag of images, we want to infer the gender of the user. Social media profiles can be noisy and contain confounding factors, therefore we classify bags of user-profile`s images to provide a more robust prediction. Experiments using a dataset from the picture sharing social network Instagram show that the use of multiple images is key to improve detection performance. Moreover, we verify that deep semantic features are more suited for gender detection than low-level image representations. 
The methods proposed can infer the gender with precision scores higher than 0.825, and the best performing method achieving 0.911 precision.},\n  keywords = {behavioural sciences computing;feature extraction;image classification;image representation;social networking (online);low-level image representations;deep semantic features;user-profile;social media profiles;gender detection task;semantic visual features;encode discriminative patterns;visual elements;social media networks;gender classification;deep semantic space;user generated visual content;inferring user gender;social network;Instagram;Feature extraction;Semantics;Task analysis;Visualization;Social network services;Support vector machines;Europe;Gender detection;image classification;feature spaces;social media},\n  doi = {10.23919/EUSIPCO.2018.8553425},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439431.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we address the task of gender classification on picture-sharing social media networks such as Instagram and Flickr. We aim to infer the gender of a user given only a small set of the images shared in their profile. We make the assumption that a user's images contain a collection of visual elements that implicitly encode discriminative patterns that allow their gender to be inferred in a language-independent way. This information can then be used in personalisation and recommendation. Our main hypothesis is that semantic visual features are more adequate for discriminating high-level classes. The gender detection task is formalised as follows: given a user's profile, represented as a bag of images, we want to infer the gender of the user. Social media profiles can be noisy and contain confounding factors, so we classify bags of user-profile images to provide a more robust prediction. Experiments using a dataset from the picture-sharing social network Instagram show that the use of multiple images is key to improving detection performance. Moreover, we verify that deep semantic features are better suited for gender detection than low-level image representations. The proposed methods can infer the gender with precision scores higher than 0.825, with the best-performing method achieving 0.911 precision.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Incoherent Projection Matrix Design for Compressed Sensing Using Alternating Optimization.\n \n \n \n \n\n\n \n Meenakshi; and Srirangarajan, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1770-1774, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"IncoherentPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553426,\n  author = {{Meenakshi} and S. Srirangarajan},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Incoherent Projection Matrix Design for Compressed Sensing Using Alternating Optimization},\n  year = {2018},\n  pages = {1770-1774},\n  abstract = {In this paper we address the design of projection matrix for compressed sensing. In most compressed sensing applications, random projection matrices have been used but it has been shown that optimizing these projections can greatly improve the sparse signal reconstruction performance. An incoherent projection matrix can greatly reduce the recovery error for sparse signal reconstruction. With this motivation, we propose an algorithm for the construction of an incoherent projection matrix with respect to the designed equiangular tight frame (ETF) for reducing pairwise mutual coherence. The designed frame consists of a set of column vectors in a finite dimensional Hilbert space with the desired norm and reduced pairwise mutual coherence. The proposed method is based on updating ETF with inertial force and constructing incoherent frame and projection matrix using alternating minimization. 
We compare the performance of the proposed algorithm with state-of-the-art projection matrix design algorithms via numerical experiments and the results show that the proposed algorithm outperforms the other algorithms.},\n  keywords = {compressed sensing;Hilbert spaces;matrix algebra;signal reconstruction;equiangular tight frame;projection matrix design algorithms;inertial force;pairwise mutual coherence;sparse signal reconstruction performance;random projection matrices;compressed sensing applications;alternating optimization;Coherence;Signal processing algorithms;Sparse matrices;Force;Sensors;Matching pursuit algorithms;Europe;Compressed sensing;projection matrix;mutual coherence;equiangular tight frame},\n  doi = {10.23919/EUSIPCO.2018.8553426},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437821.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we address the design of the projection matrix for compressed sensing. In most compressed sensing applications, random projection matrices have been used, but it has been shown that optimizing these projections can greatly improve the sparse signal reconstruction performance. An incoherent projection matrix can greatly reduce the recovery error for sparse signal reconstruction. With this motivation, we propose an algorithm for the construction of an incoherent projection matrix with respect to a designed equiangular tight frame (ETF), reducing the pairwise mutual coherence. The designed frame consists of a set of column vectors in a finite-dimensional Hilbert space with the desired norm and reduced pairwise mutual coherence. The proposed method is based on updating the ETF with an inertial force and constructing the incoherent frame and projection matrix using alternating minimization. We compare the performance of the proposed algorithm with state-of-the-art projection matrix design algorithms via numerical experiments, and the results show that the proposed algorithm outperforms the other algorithms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Complete Model Selection in Multiset Canonical Correlation Analysis.\n \n \n \n \n\n\n \n Marrinan, T.; Hasija, T.; Lameiro, C.; and Schreier, P. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1082-1086, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CompletePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553427,\n  author = {T. Marrinan and T. Hasija and C. Lameiro and P. J. Schreier},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Complete Model Selection in Multiset Canonical Correlation Analysis},\n  year = {2018},\n  pages = {1082-1086},\n  abstract = {Traditional model-order selection for canonical correlation analysis infers latent correlations between two sets of noisy data. In this scenario it is enough to count the number of correlated signals, and thus the model order is a scalar. When the problem is generalized to a collection of three or more data sets, signals can demonstrate correlation between all sets or some subset, and one number cannot completely describe the correlation structure. We present a method for estimating multiset correlation structure that combines source extraction in the style of joint blind source separation with pairwise model order selection. The result is a general technique that describes the complete correlation structure of the collection.},\n  keywords = {blind source separation;correlation methods;estimation theory;signal denoising;complete model selection;joint blind source separation;pairwise model order selection;complete correlation structure;multiset canonical correlation analysis;model-order selection;multiset correlation structure estimation;Correlation;Coherence;Covariance matrices;Data models;Blind source separation;Europe;canonical correlation;hypothesis testing;joint blind source separation;MCCA;order selection},\n  doi = {10.23919/EUSIPCO.2018.8553427},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436072.pdf},\n}\n\n
\n
\n\n\n
\n Traditional model-order selection for canonical correlation analysis infers latent correlations between two sets of noisy data. In this scenario it is enough to count the number of correlated signals, and thus the model order is a scalar. When the problem is generalized to a collection of three or more data sets, signals can demonstrate correlation between all sets or some subset, and one number cannot completely describe the correlation structure. We present a method for estimating multiset correlation structure that combines source extraction in the style of joint blind source separation with pairwise model order selection. The result is a general technique that describes the complete correlation structure of the collection.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Effect of Random Sampling on Noisy Nonsparse Signals in Time-Frequency Analysis.\n \n \n \n \n\n\n \n Stanković, I.; Brajović, M.; Daković, M.; and Ioana, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 480-483, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EffectPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553428,\n  author = {I. Stanković and M. Brajović and M. Daković and C. Ioana},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Effect of Random Sampling on Noisy Nonsparse Signals in Time-Frequency Analysis},\n  year = {2018},\n  pages = {480-483},\n  abstract = {The paper examines the exact error of randomly sampled reconstructed nonsparse signals having a sparsity constraint. When signal is randomly sampled, it looses the property of sparsity. It is considered that the signal is reconstructed as sparse in the joint time-frequency domain. Under this assumption, the signal can be reconstructed by a reduced set of measurements. It is shown that the error can be calculated from the unavailable samples and assumed sparsity. Unavailable samples degrade the sparsity constraint. The error is examined on nonstationary signals, with the short-time Fourier transform acting as a representative domain of signal sparsity. The presented theory is verified on numerical examples.},\n  keywords = {Fourier transforms;signal processing;time-frequency analysis;random sampling;noisy nonsparse signals;time-frequency analysis;randomly sampled reconstructed nonsparse signals;sparsity constraint;joint time-frequency domain;unavailable samples;assumed sparsity;nonstationary signals;short-time Fourier;signal sparsity;Noise measurement;Time-frequency analysis;Europe;Signal processing;Discrete Fourier transforms;compressive sensing;nonsparse signals;random sampling;time-frequency analysis},\n  doi = {10.23919/EUSIPCO.2018.8553428},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439408.pdf},\n}\n\n
\n
\n\n\n
\n The paper examines the exact error of randomly sampled, reconstructed nonsparse signals under a sparsity constraint. When a signal is randomly sampled, it loses the property of sparsity. It is assumed that the signal is reconstructed as sparse in the joint time-frequency domain. Under this assumption, the signal can be reconstructed from a reduced set of measurements. It is shown that the error can be calculated from the unavailable samples and the assumed sparsity. Unavailable samples degrade the sparsity constraint. The error is examined on nonstationary signals, with the short-time Fourier transform acting as a representative domain of signal sparsity. The presented theory is verified on numerical examples.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Recovery of Linearly Mixed Sparse Sources from Multiple Measurement Vectors Using L1-Minimization.\n \n \n \n \n\n\n \n Hamed Fouladi, S.; and Balasingham, I.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 563-567, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RecoveryPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553429,\n  author = {S. {Hamed Fouladi} and I. Balasingham},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Recovery of Linearly Mixed Sparse Sources from Multiple Measurement Vectors Using L1-Minimization},\n  year = {2018},\n  pages = {563-567},\n  abstract = {Multiple measurement vector (MMV) enables joint sparse recovery which can be applied in wide range of applications. Traditional MMV algorithms assume that the solution has independent columns or correlation among the columns. This assumption is not accurate for applications like signal estimation in photoplethysmography (PPG). In this paper, we consider a structure for the solution matrix decomposed into a sparse matrix with independent columns and a square mixing matrix. Based on this structure, we find the uniqueness condition for l1minimization. Moreover, an algorithm is proposed that provides a new cost function based on the new structure. It is shown that the new structure increases the recovery performance especially in low number of measurements.},\n  keywords = {blind source separation;compressed sensing;least squares approximations;sparse matrices;vectors;signal estimation;solution matrix;sparse matrix;independent columns;square mixing matrix;recovery performance;linearly mixed sparse sources;multiple measurement vector;L1-Minimization;joint sparse recovery;traditional MMV algorithms;photoplethysmography;Sparse matrices;Signal processing algorithms;Covariance matrices;Minimization;Matrix decomposition;Cost function;Biomedical measurement},\n  doi = {10.23919/EUSIPCO.2018.8553429},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435636.pdf},\n}\n\n
\n
\n\n\n
\n The multiple measurement vector (MMV) model enables joint sparse recovery, which can be applied in a wide range of applications. Traditional MMV algorithms assume that the solution has independent columns or correlation among the columns. This assumption is not accurate for applications like signal estimation in photoplethysmography (PPG). In this paper, we consider a structure in which the solution matrix is decomposed into a sparse matrix with independent columns and a square mixing matrix. Based on this structure, we find the uniqueness condition for l1-minimization. Moreover, an algorithm is proposed that provides a new cost function based on the new structure. It is shown that the new structure improves the recovery performance, especially with a low number of measurements.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Face demorphing in the presence of facial appearance variations.\n \n \n \n \n\n\n \n Ferrara, M.; Franco, A.; and Maltoni, D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2365-2369, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FacePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553430,\n  author = {M. Ferrara and A. Franco and D. Maltoni},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Face demorphing in the presence of facial appearance variations},\n  year = {2018},\n  pages = {2365-2369},\n  abstract = {This study focuses on the robustness of face demorphing as a technique to protect face recognition systems against the well-known morphing threat. In particular, we check if in presence of face variations of different type and strength, the demorphing process significantly reduces the Genuine Acceptance Rate leading to an excessive number of false morphing warnings. Experimental results show that, except for extreme conditions that are unlikely in e-gates scenario, demorphing does not markedly affect face recognition accuracy.},\n  keywords = {face recognition;image morphing;false morphing warnings;face recognition accuracy;face demorphing;facial appearance variations;genuine acceptance rate;demorphing process;face variations;morphing threat;face recognition systems;Automated border control;eMRTD;face demorphing;face morphing attack;face recognition;face variations},\n  doi = {10.23919/EUSIPCO.2018.8553430},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437840.pdf},\n}\n\n
\n
\n\n\n
\n This study focuses on the robustness of face demorphing as a technique to protect face recognition systems against the well-known morphing threat. In particular, we check whether, in the presence of face variations of different types and strengths, the demorphing process significantly reduces the Genuine Acceptance Rate, leading to an excessive number of false morphing warnings. Experimental results show that, except for extreme conditions that are unlikely in the e-gates scenario, demorphing does not markedly affect face recognition accuracy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Species Related Gas Tracking in Distribution Grids.\n \n \n \n \n\n\n \n Alexiou, A.; and Schenk, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 321-325, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SpeciesPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553433,\n  author = {A. Alexiou and J. Schenk},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Species Related Gas Tracking in Distribution Grids},\n  year = {2018},\n  pages = {321-325},\n  abstract = {Due to a wider diversification of gas sources, today tracking gas in distribution grids is of great interest for gas grid operators to provide fair invoicing of gas customers. Substitute natural gas (SNG), e.g. derived from raw biogas, injected concurrently into natural gas grids may differ in its calorific value Hs compared to fossil natural gas in the grid. This is manifesting in deviating chemical compositions of injected grid gases. Remarkably, the chemical fractions of SNGs fluctuate significantly over time exhibiting time-dependent signatures. Sampling over relevant features of injected gases, e.g. the chemical species concentrations at standard temperature and pressure, by means of calibrated sensors, provides time-dependent signals which can be taken for gas tracking purposes. To that end, we present an accurate technique to estimate the transit times of gas between nodes, e.g. from an entry to an exit point. As a result, calorific value extrapolation from one gas grid node to a downstream node, with an accuracy sufficient for gas customer invoicing, is feasible. 
In an experimental section we show a normalized root-mean-square deviation (NRMSD) with respect to calorific value estimation.},\n  keywords = {biofuel;calibration;fuel gasification;invoicing;mean square error methods;natural gas technology;distribution grids;natural gas grids;calorific value extrapolation;gas customer invoicing;calorific value estimation;biogas;substitute natural gas;chemical fractions;normalized root-mean-square deviation;Natural gas;Chemicals;Gas detectors;Indexes;Viterbi algorithm;Time series analysis;Viterbi algorithm;dynamic time warping;gas tracking;calorific value tracking;distribution grids},\n  doi = {10.23919/EUSIPCO.2018.8553433},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437545.pdf},\n}\n\n
\n
\n\n\n
\n Due to a wider diversification of gas sources, tracking gas in distribution grids is today of great interest to gas grid operators for providing fair invoicing of gas customers. Substitute natural gas (SNG), e.g. derived from raw biogas and injected concurrently into natural gas grids, may differ in its calorific value Hs from the fossil natural gas in the grid. This manifests in deviating chemical compositions of the injected grid gases. Remarkably, the chemical fractions of SNGs fluctuate significantly over time, exhibiting time-dependent signatures. Sampling relevant features of the injected gases, e.g. the chemical species concentrations at standard temperature and pressure, by means of calibrated sensors provides time-dependent signals which can be used for gas tracking. To that end, we present an accurate technique to estimate the transit times of gas between nodes, e.g. from an entry to an exit point. As a result, calorific value extrapolation from one gas grid node to a downstream node, with an accuracy sufficient for gas customer invoicing, is feasible. In an experimental section we report the normalized root-mean-square deviation (NRMSD) of the calorific value estimation.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Binary Sequences Set with Small ISL for MIMO Radar Systems.\n \n \n \n \n\n\n \n Alaee-Kerahroodi, M.; Modarres-Hashemi, M.; Naghsh, M. M.; Shankar, B.; and Ottersten, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2395-2399, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BinaryPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553434,\n  author = {M. Alaee-Kerahroodi and M. Modarres-Hashemi and M. M. Naghsh and B. Shankar and B. Ottersten},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Binary Sequences Set with Small ISL for MIMO Radar Systems},\n  year = {2018},\n  pages = {2395-2399},\n  abstract = {In this paper, we aim at designing a set of binary sequences with good aperiodic auto- and crosscorrelation properties for Multiple-Input-Multiple-Output (MIMO) radar systems. We show such a set of sequences can be obtained by minimizing the Integrated Side Lobe (ISL) with the binary requirement imposed as a design constraint. By using the block coordinate descent (BCD) framework, we propose an efficient monotonic algorithm based on Fast Fourier Transform (FFT), to minimize the objective function which is non-convex and NP-hard in general. Simulation results illustrate that the ISL of designed binary set of sequences is the neighborhood of the Welch bound, Indicating its superior performance.},\n  keywords = {binary sequences;computational complexity;correlation methods;fast Fourier transforms;MIMO radar;cross-correlation properties;binary set;aperiodic properties;block coordinate descent framework;block coordinate descent framework;NP-hard;Welch bound;Fast Fourier Transform;efficient monotonic algorithm;design constraint;binary requirement;Integrated Side Lobe;Multiple-Input-Multiple-Output radar systems;MIMO radar systems;ISL;binary sequences set;MIMO radar;Signal processing algorithms;Optimization;Signal processing;Correlation;Discrete Fourier transforms;Binary Sequences Set;Block Coordinate Descent (BCD);Integrated Sidelobe Level (ISL);Multiple-Input-Multiple-Output (MIMO);Radar Waveform Design},\n  doi = {10.23919/EUSIPCO.2018.8553434},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437255.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we aim at designing a set of binary sequences with good aperiodic auto- and cross-correlation properties for Multiple-Input-Multiple-Output (MIMO) radar systems. We show that such a set of sequences can be obtained by minimizing the Integrated Side Lobe (ISL) level with the binary requirement imposed as a design constraint. Using the block coordinate descent (BCD) framework, we propose an efficient monotonic algorithm, based on the Fast Fourier Transform (FFT), to minimize the objective function, which is non-convex and NP-hard in general. Simulation results illustrate that the ISL of the designed binary set of sequences is in the neighborhood of the Welch bound, indicating its superior performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multifractal Characterization for Bivariate Data.\n \n \n \n \n\n\n \n Leonarduzzi, R.; Abry, P.; Roux, S.; Wendt, H.; Jaffard, S.; and Seuret, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1347-1351, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"MultifractalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553435,\n  author = {R. Leonarduzzi and P. Abry and S. Roux and H. Wendt and S. Jaffard and S. Seuret},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Multifractal Characterization for Bivariate Data},\n  year = {2018},\n  pages = {1347-1351},\n  abstract = {Multifractal analysis is a reference tool for the analysis of data based on local regularity, which has been proven useful in an increasing number of applications. However, in its current formulation, it remains a fundamentally univariate tool, while being confronted with multivariate data in an increasing number of applications. Recent contributions have explored a first multivariate theoretical grounding for multi fractal analysis and shown that it can be effective in capturing and quantifying transient higher-order dependence beyond correlation. Building on these first fundamental contributions, this work proposes and studies the use of a quadratic model for the joint multi fractal spectrum of bivariate time series. We obtain expressions for the Pearson correlation in terms of the random walk and a multifractal cascade dependence parameters under this model, provide complete expressions for the multifractal parameters and propose a transformation of these parameters into natural coordinates that allows to effectively summarize the information they convey. Finally, we propose estimators for these parameters and assess their statistical performance through numerical simulations. 
The results indicate that the bivariate multi fractal parameter estimates are accurate and effective in quantifying non-linear, higher-order dependencies between time series.},\n  keywords = {data analysis;fractals;parameter estimation;random processes;statistical analysis;time series;quadratic model;joint multifractal spectrum;bivariate time series;Pearson correlation;multifractal cascade dependence parameters;complete expressions;multifractal parameters;bivariate multifractal parameter estimates;higher-order dependencies;multifractal characterization;bivariate data;multifractal analysis;reference tool;local regularity;fundamentally univariate tool;multivariate data;multivariate theoretical grounding;transient higher-order dependence;fundamental contributions;Fractals;Correlation;Signal processing;Tools;Wavelet transforms;Analytical models;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553435},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435463.pdf},\n}\n\n
\n
\n\n\n
\n Multifractal analysis is a reference tool for the analysis of data based on local regularity, which has proven useful in an increasing number of applications. However, in its current formulation, it remains a fundamentally univariate tool, while being confronted with multivariate data in an increasing number of applications. Recent contributions have explored a first multivariate theoretical grounding for multifractal analysis and shown that it can be effective in capturing and quantifying transient higher-order dependence beyond correlation. Building on these first fundamental contributions, this work proposes and studies the use of a quadratic model for the joint multifractal spectrum of bivariate time series. We obtain expressions for the Pearson correlation in terms of the random-walk and multifractal-cascade dependence parameters under this model, provide complete expressions for the multifractal parameters, and propose a transformation of these parameters into natural coordinates that allows the information they convey to be effectively summarized. Finally, we propose estimators for these parameters and assess their statistical performance through numerical simulations. The results indicate that the bivariate multifractal parameter estimates are accurate and effective in quantifying non-linear, higher-order dependencies between time series.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Factorization Approach to Smoothing of Hidden Reciprocal Models.\n \n \n \n \n\n\n \n Carli, F. P.; and Carli, A. C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1122-1126, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553436,\n  author = {F. P. Carli and A. C. Carli},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Factorization Approach to Smoothing of Hidden Reciprocal Models},\n  year = {2018},\n  pages = {1122-1126},\n  abstract = {Acausal signals are ubiquitous in science and engineering. These processes are usually indexed by space, instead of time. Similarly to Markov processes, reciprocal processes (RPs) are defined in terms of conditional independence relations, which imply a rich sparsity structure for this class of models. In particular, the smoothing problem for Gaussian RPs can be traced back to the problem of solving a linear system with a cyclic block tridiagonal matrix as coefficient matrix. In this paper we propose two factorization techniques for the solution of the smoothing problem for Gaussian hidden reciprocal models (HRMs). The first method relies on a clever split of the problem in two subsystems where the matrices to be inverted are positive definite block tridiagonal matrices. We can thus rely on the rich literature for this kind of sparse matrices to devise an iterative procedure for the solution of the problem. The second approach, applies to scalar valued stationary reciprocal processes, in which case the coefficient matrix becomes circulant tridiagonal (and symmetric), and is based on the direct factorization of the coefficient matrix into the product of a circulant lower bidiagonal matrix and a circulant upper bidiagonal matrix. 
The computational complexity of both algorithms scales linearly with the length of the observation interval.},\n  keywords = {computational complexity;Gaussian processes;iterative methods;linear systems;Markov processes;matrix algebra;signal processing;sparse matrices;sparse matrices;stationary reciprocal processes;coefficient matrix;circulant tridiagonal;circulant lower bidiagonal matrix;circulant upper bidiagonal matrix;factorization approach;acausal signals;Markov processes;conditional independence relations;rich sparsity structure;smoothing problem;linear system;cyclic block tridiagonal matrix;factorization techniques;Gaussian hidden reciprocal models;positive definite block tridiagonal matrices;Gaussian RP;Signal processing algorithms;Mathematical model;Smoothing methods;Hidden Markov models;Symmetric matrices;Signal processing;Markov processes;Markov processes;acausal models;reciprocal processes;hidden Markov models;inference and learning;signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553436},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439452.pdf},\n}\n\n
\n
\n\n\n
\n Acausal signals are ubiquitous in science and engineering. These processes are usually indexed by space instead of time. Similarly to Markov processes, reciprocal processes (RPs) are defined in terms of conditional independence relations, which imply a rich sparsity structure for this class of models. In particular, the smoothing problem for Gaussian RPs can be traced back to the problem of solving a linear system with a cyclic block tridiagonal coefficient matrix. In this paper we propose two factorization techniques for the solution of the smoothing problem for Gaussian hidden reciprocal models (HRMs). The first method relies on a clever split of the problem into two subsystems in which the matrices to be inverted are positive definite block tridiagonal matrices; we can thus rely on the rich literature on this kind of sparse matrix to devise an iterative procedure for the solution of the problem. The second approach applies to scalar-valued stationary reciprocal processes, in which case the coefficient matrix becomes circulant tridiagonal (and symmetric), and is based on the direct factorization of the coefficient matrix into the product of a circulant lower bidiagonal matrix and a circulant upper bidiagonal matrix. The computational complexity of both algorithms scales linearly with the length of the observation interval.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Constrained Particle Filter for Improving Kinect Based Measurements.\n \n \n \n \n\n\n \n Tripathy, S. R.; Chakravarty, K.; and Sinha, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 306-310, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ConstrainedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553437,\n  author = {S. R. Tripathy and K. Chakravarty and A. Sinha},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Constrained Particle Filter for Improving Kinect Based Measurements},\n  year = {2018},\n  pages = {306-310},\n  abstract = {Microsoft Kinect has been gaining popularity in home-based rehabilitation solution due to its affordability and ease of use. It is used as a marker less human skeleton tracking device. However, apart from the fact that the skeleton data are contaminated with high frequency noise, the major drawback lies in the inability to retain the antropometric properties, like the body segments' length, which varies with time during the tracking. In this paper, a particle filter based approach has been proposed to track the human skeleton data in the presence of high frequency noise and multi-objective genetic algorithm is employed to reduce the bone length variations. In our approach multiple segments in skeleton are filtered simultaneously and segments' lengths are preserved by considering their interconnection unlike other methods in available literature. 
The proposed algorithm has achieved MAPE of 3.44% in maintaining the body segment length close to the ground truth and outperforms state-of-the-art methods.},\n  keywords = {biomechanics;bone;genetic algorithms;image motion analysis;image sensors;medical image processing;object tracking;particle filtering (numerical methods);patient rehabilitation;constrained particle filter;kinect based measurements;home-based rehabilitation solution;human skeleton tracking device;high frequency noise;antropometric properties;particle filter based approach;human skeleton data;bone length variations;body segment length;Microsoft kinect;multiobjective genetic algorithm;Bones;Joints;Radar tracking;Noise measurement;Signal processing algorithms;Tracking;Kinect;Particle filter;NSGA;Multi objective optimization},\n  doi = {10.23919/EUSIPCO.2018.8553437},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438216.pdf},\n}\n\n
\n
\n\n\n
\n Microsoft Kinect has been gaining popularity in home-based rehabilitation solutions due to its affordability and ease of use. It is used as a markerless human skeleton tracking device. However, apart from the fact that the skeleton data are contaminated with high-frequency noise, its major drawback lies in its inability to retain anthropometric properties, such as the body segments' lengths, which vary with time during tracking. In this paper, a particle filter based approach is proposed to track the human skeleton data in the presence of high-frequency noise, and a multi-objective genetic algorithm is employed to reduce the bone length variations. In our approach, multiple segments in the skeleton are filtered simultaneously, and the segments' lengths are preserved by considering their interconnections, unlike other methods in the available literature. The proposed algorithm achieves a MAPE of 3.44% in maintaining the body segment lengths close to the ground truth and outperforms state-of-the-art methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n DeepMQ: A Deep Learning Approach Based Myelin Quantification in Microscopic Fluorescence Images.\n \n \n \n \n\n\n \n Çimen, S.; Çapar, A.; Ekinci, D. A.; Ayten, U. E.; Kerman, B. E.; and Töreyin, B. U.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 61-65, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DeepMQ:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553438,\n  author = {S. Çimen and A. Çapar and D. A. Ekinci and U. E. Ayten and B. E. Kerman and B. U. Töreyin},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {DeepMQ: A Deep Learning Approach Based Myelin Quantification in Microscopic Fluorescence Images},\n  year = {2018},\n  pages = {61-65},\n  abstract = {Oligodendrocytes wrap around the axons and form the myelin. Myelin facilitates rapid neural signal transmission. Any damage to myelin disrupts neuronal communication leading to neurological diseases such as multiple sclerosis (MS). There is no cure for MS. This is, in part, due to lack of an efficient method for myelin quantification during drug screening. In this study, an image analysis based myelin sheath detection method, DeepMQ, is developed. The method consists of a feature extraction step followed by a deep learning based binary classification module. The images, which were acquired on a confocal microscope contain three channels and multiple z-sections. Each channel represents either oligodendroyctes, neurons, or nuclei. During feature extraction, 26-neighbours of each voxel is mapped onto a 2D feature image. This image is, then, fed to the deep learning classifier, in order to detect myelin. Results indicate that 93.38 % accuracy is achieved in a set of fluorescence microscope images of mouse stem cell-derived oligodendroyctes and neurons. 
To the best of authors' knowledge, this is the first study utilizing image analysis along with machine learning techniques to quantify myelination.},\n  keywords = {biomedical optical imaging;brain;cellular biophysics;diseases;feature extraction;fluorescence;image classification;learning (artificial intelligence);medical image processing;neurophysiology;optical microscopy;neuronal communication;axons;neurological diseases;multiple sclerosis;image analysis based myelin sheath detection method;deep learning based binary classification module;confocal microscopy;mouse stem cell-derived oligodendroyctes;machine learning techniques;myelination;fluorescence microscope images;deep learning classifier;2D feature image;neurons;feature extraction step;MS;rapid neural signal transmission;microscopic fluorescence images;myelin quantification;deep learning approach;DeepMQ;Microscopy;Axons;Image analysis;Support vector machines;myelin;microscopic fluorescence imaging;neural network;deep learning;LeNet},\n  doi = {10.23919/EUSIPCO.2018.8553438},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437705.pdf},\n}\n\n
\n
\n\n\n
\n Oligodendrocytes wrap around axons and form the myelin. Myelin facilitates rapid neural signal transmission. Any damage to myelin disrupts neuronal communication, leading to neurological diseases such as multiple sclerosis (MS). There is no cure for MS. This is, in part, due to the lack of an efficient method for myelin quantification during drug screening. In this study, an image analysis based myelin sheath detection method, DeepMQ, is developed. The method consists of a feature extraction step followed by a deep learning based binary classification module. The images, which were acquired on a confocal microscope, contain three channels and multiple z-sections. Each channel represents oligodendrocytes, neurons, or nuclei. During feature extraction, the 26 neighbours of each voxel are mapped onto a 2D feature image. This image is then fed to the deep learning classifier in order to detect myelin. Results indicate that 93.38% accuracy is achieved on a set of fluorescence microscope images of mouse stem cell-derived oligodendrocytes and neurons. To the best of the authors' knowledge, this is the first study utilizing image analysis along with machine learning techniques to quantify myelination.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Towards Automatic Detection of Animals in Camera-Trap Images.\n \n \n \n \n\n\n \n Loos, A.; Weigel, C.; and Koehler, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1805-1809, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"TowardsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553439,\n  author = {A. Loos and C. Weigel and M. Koehler},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Towards Automatic Detection of Animals in Camera-Trap Images},\n  year = {2018},\n  pages = {1805-1809},\n  abstract = {In recent years the world's biodiversity is declining on an unprecedented scale. Many species are endangered and remaining populations need to be protected. To overcome this agitating issue, biologist started to use remote camera devices for wildlife monitoring and estimation of remaining population sizes. Unfortunately, the huge amount of data makes the necessary manual analysis extremely tedious and highly cost intensive. In this paper we re-train and apply two state-of-the-art deep-learning based object detectors to localize and classify Serengeti animals in camera-trap images. Furthermore, we thoroughly evaluate both algorithms on a self-established dataset and show that the combination of the results of both detectors can enhance overall mean average precision. In contrast to previous work our approach is not only capable of classifying the main species in images but can also detect them and therefore count the number of individuals which is in fact an important information for biologists, ecologists, and wildlife epidemiologists.},\n  keywords = {biology computing;cameras;computer vision;image classification;learning (artificial intelligence);object detection;object recognition;zoology;camera-trap images;remote camera devices;wildlife monitoring;population sizes;Serengeti animals;main species;deep-learning based object detectors;Animals;Detectors;Cameras;Metadata;Sociology;Statistics},\n  doi = {10.23919/EUSIPCO.2018.8553439},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570431189.pdf},\n}\n\n
\n
\n\n\n
\n In recent years, the world's biodiversity has been declining on an unprecedented scale. Many species are endangered, and remaining populations need to be protected. To address this pressing issue, biologists have started to use remote camera devices for wildlife monitoring and estimation of remaining population sizes. Unfortunately, the huge amount of data makes the necessary manual analysis extremely tedious and highly cost-intensive. In this paper, we re-train and apply two state-of-the-art deep-learning based object detectors to localize and classify Serengeti animals in camera-trap images. Furthermore, we thoroughly evaluate both algorithms on a self-established dataset and show that combining the results of both detectors can enhance overall mean average precision. In contrast to previous work, our approach is not only capable of classifying the main species in images but can also detect them and therefore count the number of individuals, which is important information for biologists, ecologists, and wildlife epidemiologists.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian Multi-Class Covariance Matrix Filtering for Adaptive Environment Learning.\n \n \n \n \n\n\n \n Braca, P.; Aubry, A.; Millefiori, L. M.; De Maio, A.; and Marano, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 266-270, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553440,\n  author = {P. Braca and A. Aubry and L. M. Millefiori and A. {De Maio} and S. Marano},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian Multi-Class Covariance Matrix Filtering for Adaptive Environment Learning},\n  year = {2018},\n  pages = {266-270},\n  abstract = {Covariance matrix estimation is a crucial task in adaptive signal processing applied to several surveillance systems, including radar and sonar. In this paper we propose a dynamic environment learning strategy to track both the covariance matrix and its class; the class represents a set of structured covariance matrices. We assume that the posterior distribution of the covariance given the class, is basically a mixture of inverse Wishart, while the class posterior distribution evolves according to a Markov chain. The proposed multi-class inverse Wishart mixture filter is shown to outperform the class-clairvoyant maximum likelihood estimator in terms of covariance estimate accuracy, as well as the Bayesian information criterion rule in terms of classification performance.},\n  keywords = {adaptive signal processing;Bayes methods;covariance analysis;covariance matrices;filtering theory;Gaussian processes;learning (artificial intelligence);Markov processes;maximum likelihood estimation;adaptive signal processing;Markov chain;covariance estimate accuracy;class-clairvoyant maximum likelihood estimator;multiclass inverse Wishart mixture filter;class posterior distribution;structured covariance matrices;dynamic environment;sonar;surveillance systems;adaptive signal;covariance matrix estimation;adaptive environment learning;Bayesian multiclass covariance matrix filtering;Covariance matrices;Signal processing algorithms;Estimation;Markov processes;Interference;Bayes methods;Adaptive signal processing;Model classification;adaptive filter;covariance estimation;adaptive signal processing;Bayesian information criterion;multi-class inverse 
Wishart mixture filter},\n  doi = {10.23919/EUSIPCO.2018.8553440},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437234.pdf},\n}\n\n
\n
\n\n\n
\n Covariance matrix estimation is a crucial task in adaptive signal processing applied to several surveillance systems, including radar and sonar. In this paper, we propose a dynamic environment learning strategy to track both the covariance matrix and its class, where the class represents a set of structured covariance matrices. We assume that the posterior distribution of the covariance given the class is a mixture of inverse Wishart distributions, while the class posterior distribution evolves according to a Markov chain. The proposed multi-class inverse Wishart mixture filter is shown to outperform the class-clairvoyant maximum likelihood estimator in terms of covariance estimation accuracy, as well as the Bayesian information criterion rule in terms of classification performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Transfer Learning for SSVEP-based BCI Using Riemannian Similarities Between Users.\n \n \n \n \n\n\n \n Kalunga, E. K.; Chevallier, S.; and Barthélemy, Q.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1685-1689, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"TransferPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553441,\n  author = {E. K. Kalunga and S. Chevallier and Q. Barthélemy},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Transfer Learning for SSVEP-based BCI Using Riemannian Similarities Between Users},\n  year = {2018},\n  pages = {1685-1689},\n  abstract = {Brain-Computer Interfaces (BCI) face a great challenge: how to harness the wide variability of brain signals from a user to another. The most visible problem is the lack of a sound framework to capture the specificity of a user brain waves. A first attempt to leverage this issue is to design user-specific spatial filters, carefully adjusted with a lengthy calibration phase. A second, more recent, opening is the systematic study of brain signals through their covariance, in an appropriate space from a geometric point of view. Riemannian geometry allows to efficiently characterize the variability of inter-subject EEG, even with noisy or scarce data. This contribution is the first attempt for SSVEP-based BCI to make the most of the available data from a user, relying on Riemannian geometry to estimate the similarity with a multi-user dataset. The proposed method is built in the framework of transfer learning and borrows the notion of composite mean to partition the space. 
This method is evaluated on 12 subjects performing an SSVEP task for the control of an exoskeleton arm and the results show the contribution of Riemannian geometry and of the user-specific composite mean, whereas there is only a few data available for a subject.},\n  keywords = {brain;brain-computer interfaces;electroencephalography;geometry;learning (artificial intelligence);medical signal processing;spatial filters;visual evoked potentials;scarce data;SSVEP-based BCI;Riemannian geometry;multiuser dataset;transfer learning;SSVEP task;user-specific composite mean;Riemannian similarities;Brain-Computer Interfaces;brain signals;sound framework;specificity;user brain waves;user-specific spatial filters;lengthy calibration phase;inter-subject EEG;noisy data;Covariance matrices;Electroencephalography;Calibration;Geometry;Brain modeling;Manifolds;Signal processing;Brain-computer interface;transfer learning;Riemannian geometry;SSVEP},\n  doi = {10.23919/EUSIPCO.2018.8553441},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437273.pdf},\n}\n\n
\n
\n\n\n
\n Brain-Computer Interfaces (BCI) face a great challenge: how to harness the wide variability of brain signals from one user to another. The most visible problem is the lack of a sound framework to capture the specificity of a user's brain waves. A first attempt to address this issue is to design user-specific spatial filters, carefully adjusted with a lengthy calibration phase. A second, more recent, approach is the systematic study of brain signals through their covariance, in a space that is appropriate from a geometric point of view. Riemannian geometry allows the variability of inter-subject EEG to be characterized efficiently, even with noisy or scarce data. This contribution is the first attempt for SSVEP-based BCI to make the most of the available data from a user, relying on Riemannian geometry to estimate the similarity with a multi-user dataset. The proposed method is built in the framework of transfer learning and borrows the notion of the composite mean to partition the space. This method is evaluated on 12 subjects performing an SSVEP task for the control of an exoskeleton arm, and the results show the contribution of Riemannian geometry and of the user-specific composite mean, even when only a small amount of data is available for a subject.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A New Beamformer Design Method for Multi-Group Multicasting by Enforcing Constructive Interference.\n \n \n \n \n\n\n \n Demir, Ö. T.; and Tuncer, T. E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 632-636, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553442,\n  author = {Ö. T. Demir and T. E. Tuncer},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A New Beamformer Design Method for Multi-Group Multicasting by Enforcing Constructive Interference},\n  year = {2018},\n  pages = {632-636},\n  abstract = {In this paper, we propose a new multi-group multicast beamforming design method for phase shift keying (PSK) modulated signals. Quality of service (QoS)-aware optimization is considered where the aim is to minimize transmission power of multiple-antenna base station under the QoS constraints of single-antenna users. In this paper, we show that symbol-level beamforming scheme proposed in the literature is not an effective design method for multi-group multicasting and modify it using rotated constellation approach in order to reduce transmission power. Proposed method enforces the known interference in a constructive manner such that the received symbol at each user is inside the correct decision region for any set of symbols. Hence, designed beamformers can be utilized throughout a transmission-frame rather than symbol-by-symbol basis. An alternating direction method of multipliers (ADMM) algorithm is presented for the proposed design problem and closed-form update equations are derived for the steps of the ADMM algorithm. 
Simulation results show that the proposed method decreases the transmission power significantly compared to the conventional and symbol-level beamforming.},\n  keywords = {antenna arrays;array signal processing;interference (signal);optimisation;phase shift keying;quality of service;symbol-by-symbol basis;multiple-antenna base station;single-antenna users;constructive interference;symbol-level beamforming design method;multigroup multicast beamforming design method;phase shift keying modulated signals;PSK modulated signals;quality of service-aware optimization;QoS-aware optimization;transmission power minimisation;rotated constellation approach;ADMM algorithm;alternating direction method of multiplier algorithm;Interference;Array signal processing;Multicast communication;Phase shift keying;Quality of service;Multicast algorithms;Signal processing algorithms;Multi-group multicast beamforming;ADMM;constructive interference},\n  doi = {10.23919/EUSIPCO.2018.8553442},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437711.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a new multi-group multicast beamforming design method for phase shift keying (PSK) modulated signals. Quality of service (QoS)-aware optimization is considered, where the aim is to minimize the transmission power of a multiple-antenna base station under the QoS constraints of single-antenna users. We show that the symbol-level beamforming scheme proposed in the literature is not an effective design method for multi-group multicasting, and we modify it using a rotated constellation approach in order to reduce transmission power. The proposed method enforces the known interference in a constructive manner such that the received symbol at each user lies inside the correct decision region for any set of symbols. Hence, the designed beamformers can be utilized throughout a transmission frame rather than on a symbol-by-symbol basis. An alternating direction method of multipliers (ADMM) algorithm is presented for the proposed design problem, and closed-form update equations are derived for the steps of the ADMM algorithm. Simulation results show that the proposed method decreases the transmission power significantly compared to conventional and symbol-level beamforming.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian Track-to-Graph Association for Maritime Traffic Monitoring.\n \n \n \n \n\n\n \n Grasso, R.; Millefiori, L. M.; and Braca, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1042-1046, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553443,\n  author = {R. Grasso and L. M. Millefiori and P. Braca},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian Track-to-Graph Association for Maritime Traffic Monitoring},\n  year = {2018},\n  pages = {1042-1046},\n  abstract = {We present a hypothesis test to associate ship track measurements to an edge of a given graph that statistically models common traffic routes in a given area of interest. The association algorithm is based on the hypothesis that ship velocities are modeled by mean-reverting stochastic processes. Prior knowledge about the traffic is provided by the graph in form of probability density functions of the mean-reverting kinematic parameters for each node and edge of the graph, that are exploited in the formalization of the association algorithm. Tests on real Automatic Identification System (AIS) data show a qualitatively good association performance. Future developments of this work include the development of specific quantitative metrics to assess the association performance.},\n  keywords = {Bayes methods;graph theory;marine engineering;ships;statistical analysis;stochastic processes;traffic information systems;bayesian track-to-graph association;maritime traffic monitoring;ship track measurements;ship velocities;mean-reverting stochastic processes;probability density functions;mean-reverting kinematic parameters;statistical models;automatic identification system;Marine vehicles;Kinematics;Probability density function;Artificial intelligence;Trajectory;Europe;Signal processing;Maritime surveillance;knowledge based tracking and prediction;statistical track association;graphs;mean-reverting stochastic processes},\n  doi = {10.23919/EUSIPCO.2018.8553443},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437786.pdf},\n}\n\n
\n
\n\n\n
\n We present a hypothesis test to associate ship track measurements with an edge of a given graph that statistically models common traffic routes in a given area of interest. The association algorithm is based on the hypothesis that ship velocities are modeled by mean-reverting stochastic processes. Prior knowledge about the traffic is provided by the graph in the form of probability density functions of the mean-reverting kinematic parameters for each node and edge of the graph, which are exploited in the formalization of the association algorithm. Tests on real Automatic Identification System (AIS) data show qualitatively good association performance. Future work includes the development of specific quantitative metrics to assess the association performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian Target Enumeration and Labeling Using Radar Data of Human Gait.\n \n \n \n \n\n\n \n Teklehaymanot, F. K.; Seifert, A.; Muma, M.; Amin, M. G.; and Zoubir, A. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1342-1346, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553444,\n  author = {F. K. Teklehaymanot and A. Seifert and M. Muma and M. G. Amin and A. M. Zoubir},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian Target Enumeration and Labeling Using Radar Data of Human Gait},\n  year = {2018},\n  pages = {1342-1346},\n  abstract = {Estimating the number of clusters in an observed data set poses a major challenge in cluster analysis. In the literature, the original Bayesian Information Criterion (BIC) is used as a criterion for cluster enumeration. However, the original BIC is a generic approach that does not take the data structure of the clustering problem into account. Recently, a new BIC for cluster analysis has been derived from first principles by treating the cluster enumeration problem as a maximization of the posterior probability of candidate models given data. Based on the new BIC for cluster analysis, we propose a target enumeration and labeling algorithm. The proposed algorithm is unsupervised in the sense that it requires neither knowledge on the number of clusters nor training data. Experimental results based on real radar data of human gait show that the proposed method is able to correctly estimate the number of observed persons and, at the same time, provide labels to them with high accuracy. 
It is shown that, in terms of cluster enumeration performance, the proposed algorithm outperforms an existing cluster enumeration method.},\n  keywords = {Bayes methods;data structures;gait analysis;pattern clustering;probability;radar data;original Bayesian Information Criterion;original BIC;data structure;target enumeration;labeling algorithm;training data;human gait show;cluster enumeration performance;Radar;Labeling;Clustering algorithms;Signal processing algorithms;Bayes methods;Data models;Radar antennas},\n  doi = {10.23919/EUSIPCO.2018.8553444},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439303.pdf},\n}\n\n
\n
\n\n\n
\n Estimating the number of clusters in an observed data set poses a major challenge in cluster analysis. In the literature, the original Bayesian Information Criterion (BIC) is used as a criterion for cluster enumeration. However, the original BIC is a generic approach that does not take the data structure of the clustering problem into account. Recently, a new BIC for cluster analysis has been derived from first principles by treating the cluster enumeration problem as a maximization of the posterior probability of candidate models given the data. Based on the new BIC for cluster analysis, we propose a target enumeration and labeling algorithm. The proposed algorithm is unsupervised in the sense that it requires neither knowledge of the number of clusters nor training data. Experimental results based on real radar data of human gait show that the proposed method is able to correctly estimate the number of observed persons and, at the same time, label them with high accuracy. In terms of cluster enumeration performance, the proposed algorithm is shown to outperform an existing cluster enumeration method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Information Subspace-Based Fusion for Vehicle Classification.\n \n \n \n \n\n\n \n Ghanem, S.; Panahi, A.; Krim, H.; Kerekes, R. A.; and Mattingly, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1612-1616, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"InformationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553445,\n  author = {S. Ghanem and A. Panahi and H. Krim and R. A. Kerekes and J. Mattingly},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Information Subspace-Based Fusion for Vehicle Classification},\n  year = {2018},\n  pages = {1612-1616},\n  abstract = {Union of Subspaces (UoS) is a new paradigm for signal modeling and processing, which is capable of identifying more complex trends in data sets than simple linear models. Relying on a bi-sparsity pursuit framework and advanced nonsmooth optimization techniques, the Robust Subspace Recovery (RoSuRe) algorithm was introduced in the recent literature as a reliable and numerically efficient algorithm to unfold unions of subspaces. In this study, we apply RoSuRe to prospect the structure of a data type (e.g. sensed data on vehicle through passive audio and magnetic observations). Applying RoSuRe to the observation data set, we obtain a new representation of the time series, respecting an underlying UoS model. We subsequently employ Spectral Clustering on the new representations of the data set. 
The classification performance on the dataset shows a considerable improvement compared to direct application of other unsupervised clustering methods.},\n  keywords = {optimisation;pattern classification;pattern clustering;signal classification;vehicle classification;signal modeling;complex trends;simple linear models;bi-sparsity pursuit framework;nonsmooth optimization techniques;RoSuRe;reliable algorithm;numerically efficient algorithm;sensed data;passive audio;magnetic observations;observation data set;underlying UoS model;classification performance;information subspace-based fusion;robust subspace recovery algorithm;Magnetometers;Magnetoacoustic effects;Sparse matrices;Feature extraction;Signal processing algorithms;Magnetic resonance imaging;Microphones;Sparse learning;Classification;Magnetic sensors;Acoustics},\n  doi = {10.23919/EUSIPCO.2018.8553445},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439108.pdf},\n}\n\n
\n
\n\n\n
\n Union of Subspaces (UoS) is a new paradigm for signal modeling and processing, which is capable of identifying more complex trends in data sets than simple linear models. Relying on a bi-sparsity pursuit framework and advanced nonsmooth optimization techniques, the Robust Subspace Recovery (RoSuRe) algorithm was introduced in the recent literature as a reliable and numerically efficient algorithm to unfold unions of subspaces. In this study, we apply RoSuRe to prospect the structure of a data type (e.g., data sensed on vehicles through passive audio and magnetic observations). Applying RoSuRe to the observation data set, we obtain a new representation of the time series, respecting an underlying UoS model. We subsequently employ Spectral Clustering on the new representations of the data set. The classification performance on the dataset shows a considerable improvement compared to direct application of other unsupervised clustering methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Towards understanding the effects of practice on behavioural biometric recognition performance.\n \n \n \n \n\n\n \n Haasnoot, E.; Barnhoorrr, J. S.; Spreeuwers, L. J.; Veldhuis, R. N. J.; and Verwey, W. B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 558-562, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"TowardsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553446,\n  author = {E. Haasnoot and J. S. Barnhoorrr and L. J. Spreeuwers and R. N. J. Veldhuis and W. B. Verwey},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Towards understanding the effects of practice on behavioural biometric recognition performance},\n  year = {2018},\n  pages = {558-562},\n  abstract = {Behavioural biometrics looks at discriminative features of a person's measurable behaviour, which is known to show high variance over long stretches of time. In psychology, a significant portion of this behavioural variance is explained by an individual improving their skill at performing behaviours, mostly through practice. Understanding what the effects of practice are on biometric recognition performance should allow us to account for much of this variance, as well as make individual behavioural biometric studies easier to compare [15]. We hypothesize that more accumulated practice will lead to both more stable and increased recognition performance. We argue that these are significant effects and show that practice in general is under-investigated. We introduce a novel method of analysis, the Start-to-Train Interval (STI)/Train-to-Test Interval (TTI) contour plot, which allows for systematic investigation of how recognition performance develops under increased practice. We applied this method to three data sets of a Discrete Sequence Production (DSP) task, a task that consists of repeatedly (500+ times) typing in a simple password, and found that more practice both significantly increases recognition performance and makes it more stable. 
These findings call for further investigation into the effects of practice on recognition performance for more standard behavioural biometric paradigms.},\n  keywords = {biometrics (access control);feature extraction;psychology;behavioural biometric recognition performance;discriminative features;behavioural variance;standard behavioural biometric paradigms;Start-to-Train Interval contour plot;persons measurable behaviour;psychology;STI;Train-to-Test Interval contour plot;TTI;discrete sequence production task;DSP;Task analysis;Password;Training;Europe;Signal processing;Shape;Psychology},\n  doi = {10.23919/EUSIPCO.2018.8553446},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438269.pdf},\n}\n\n
\n
\n\n\n
\n Behavioural biometrics looks at discriminative features of a person's measurable behaviour, which is known to show high variance over long stretches of time. In psychology, a significant portion of this behavioural variance is explained by an individual improving their skill at performing behaviours, mostly through practice. Understanding what the effects of practice are on biometric recognition performance should allow us to account for much of this variance, as well as make individual behavioural biometric studies easier to compare [15]. We hypothesize that more accumulated practice will lead to both more stable and increased recognition performance. We argue that these are significant effects and show that practice in general is under-investigated. We introduce a novel method of analysis, the Start-to-Train Interval (STI)/Train-to-Test Interval (TTI) contour plot, which allows for systematic investigation of how recognition performance develops under increased practice. We applied this method to three data sets of a Discrete Sequence Production (DSP) task, a task that consists of repeatedly (500+ times) typing in a simple password, and found that more practice both significantly increases recognition performance and makes it more stable. These findings call for further investigation into the effects of practice on recognition performance for more standard behavioural biometric paradigms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Modeling Time of Arrival Probability Distribution and TDOA Bias in Acoustic Emission Testing.\n \n \n \n \n\n\n \n Junior, C. A. P.; Nascimento, V. H.; and Lopes, C. G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1117-1121, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ModelingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553447,\n  author = {C. A. P. Junior and V. H. Nascimento and C. G. Lopes},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Modeling Time of Arrival Probability Distribution and TDOA Bias in Acoustic Emission Testing},\n  year = {2018},\n  pages = {1117-1121},\n  abstract = {Acoustic emission testing is widely used by industry to detect and localize faults in structures, but estimated source positions often show significant bias in real tests as a consequence of Time Difference of Arrival (TDOA) bias. In this work, a model for TDOA bias is developed considering the time of arrival was estimated using the fixed threshold algorithm, as well as theoretical upper and lower bounds for it. In addition, we derive the time of arrival probability distribution function in terms of the noise distribution and acoustic emission waveform for the fixed threshold algorithm, showing that, contrary to usual practice, it in general cannot be well approximated by a Gaussian distribution.},\n  keywords = {acoustic emission testing;fault location;Gaussian distribution;statistical distributions;structural engineering;time-of-arrival estimation;TDOA bias;acoustic emission testing;noise distribution;Gaussian distribution;time of arrival probability distribution;fault location;structural fault detection;Sensors;Probability distribution;Testing;Approximation algorithms;Attenuation;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553447},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439346.pdf},\n}\n\n
\n
\n\n\n
\n Acoustic emission testing is widely used by industry to detect and localize faults in structures, but estimated source positions often show significant bias in real tests as a consequence of Time Difference of Arrival (TDOA) bias. In this work, a model for TDOA bias is developed, assuming the time of arrival is estimated using the fixed threshold algorithm, together with theoretical upper and lower bounds on this bias. In addition, we derive the time of arrival probability distribution function in terms of the noise distribution and acoustic emission waveform for the fixed threshold algorithm, showing that, contrary to usual practice, in general it cannot be well approximated by a Gaussian distribution.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A New Stream Cipher Based on Nonlinear Dynamic System.\n \n \n \n \n\n\n \n Mannai, O.; Becheikh, R.; and Rhouma, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 316-320, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553449,\n  author = {O. Mannai and R. Becheikh and R. Rhouma},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A New Stream Cipher Based on Nonlinear Dynamic System},\n  year = {2018},\n  pages = {316-320},\n  abstract = {In this paper, we introduce a new synchronous stream cipher. The core of the cipher is the Ikeda system which can be seen as a Nonlinear Feedback Shift Register (NLFSR) of length 7 plus one memory. The cipher takes 256-bit key as input and generates in each iteration an output of 16-bits. A single key is allowed to generate up to 2^64 output bits. A security analysis has been carried out and it has been showed that the output sequence produced by the scheme is pseudorandom in the sense that they cannot be distinguished from truly random sequence and resist to well-known stream cipher attacks.},\n  keywords = {cryptography;feedback;iterative methods;random sequences;shift registers;stream cipher attacks;new stream cipher;synchronous stream cipher;Ikeda system;NLFSR;length 7 plus one memory;security analysis;output sequence;nonlinear dynamic system;nonlinear feedback shift register;random sequence;Ciphers;Generators;Random sequences;Registers;Europe;Stream ciphers;NLFSR;Distinguishing attack;diffusion;confusion},\n  doi = {10.23919/EUSIPCO.2018.8553449},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434624.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we introduce a new synchronous stream cipher. The core of the cipher is the Ikeda system, which can be seen as a Nonlinear Feedback Shift Register (NLFSR) of length 7 plus one memory. The cipher takes a 256-bit key as input and generates a 16-bit output in each iteration. A single key is allowed to generate up to 2^64 output bits. A security analysis has been carried out, showing that the output sequence produced by the scheme is pseudorandom in the sense that it cannot be distinguished from a truly random sequence and resists well-known stream cipher attacks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Greedy Recovery of Sparse Signals with Dynamically Varying Support.\n \n \n \n \n\n\n \n Lim, S. H.; Yoo, J. H.; Kim, S.; and Choi, J. W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 578-582, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"GreedyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553450,\n  author = {S. H. Lim and J. H. Yoo and S. Kim and J. W. Choi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Greedy Recovery of Sparse Signals with Dynamically Varying Support},\n  year = {2018},\n  pages = {578-582},\n  abstract = {In this paper, we propose a low-complexity greedy recovery algorithm which can recover sparse signals with time-varying support. We consider the scenario where the support of the signal (i.e., the indices of nonzero elements) varies smoothly with certain temporal correlation. We model the indices of support as discrete-state Markov random process. Then, we formulate the signal recovery problem as joint estimation of the set of the support indices and the amplitude of nonzero entries based on the multiple measurement vectors. We successively identify the element of the support based on the maximum a posteriori (MAP) criteria and subtract the reconstructed signal component for detection of the next element of the support. Our numerical evaluation shows that the proposed algorithm achieves satisfactory recovery performance at low computational complexity.},\n  keywords = {computational complexity;greedy algorithms;Markov processes;maximum likelihood estimation;numerical analysis;random processes;signal reconstruction;numerical evaluation;MAP criteria;maximum a posteriori criteria;discrete-state Markov random process;greedy sparse signal recovery problem;signal component reconstruction;time-varying support;low-complexity greedy recovery algorithm;low computational complexity;multiple measurement vectors;Signal processing algorithms;Computational complexity;Greedy algorithms;Linear programming;Europe;Signal processing;Estimation},\n  doi = {10.23919/EUSIPCO.2018.8553450},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437592.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a low-complexity greedy recovery algorithm which can recover sparse signals with time-varying support. We consider the scenario where the support of the signal (i.e., the indices of nonzero elements) varies smoothly with certain temporal correlation. We model the indices of support as a discrete-state Markov random process. Then, we formulate the signal recovery problem as joint estimation of the set of the support indices and the amplitude of nonzero entries based on the multiple measurement vectors. We successively identify the elements of the support based on the maximum a posteriori (MAP) criterion and subtract the reconstructed signal component for detection of the next element of the support. Our numerical evaluation shows that the proposed algorithm achieves satisfactory recovery performance at low computational complexity.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Cassis: Characterization with Adaptive Sample-Size Inferential Statistics Applied to Inexact Circuits.\n \n \n\n\n \n Bonnot, J.; Camus, V.; Desnos, K.; and Menard, D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 677-681, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553451,\n  author = {J. Bonnot and V. Camus and K. Desnos and D. Menard},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Cassis: Characterization with Adaptive Sample-Size Inferential Statistics Applied to Inexact Circuits},\n  year = {2018},\n  pages = {677-681},\n  abstract = {To design faster and more energy-efficient systems, numerous inexact arithmetic operators have been proposed, generally obtained by modifying the logic structure of conventional circuits. However, as the quality of service of an application has to be ensured, these operators need to be precisely characterized to be usable in commercial or real-life applications. The characterization of inexact operators is commonly achieved with exhaustive or random bit-accurate gate-level simulations. However, for high word lengths, the time and memory required for such simulations become prohibitive. Besides, when simulating a random sample, no confidence information is given on the precision of the characterization. To overcome these limitations, CASSIS, a new characterization method for inexact operators is proposed. By exploiting statistical properties of the approximation error, the number of simulations needed for precise characterization is drastically reduced. From user-defined confidence requirements, the proposed method computes the minimal number of simulations to obtain the desired accuracy on the characterization. 
For 32-bit adders, the CASSIS method reduces the number of simulations needed up to a few tens of thousands points.},\n  keywords = {adders;approximation theory;logic design;logic gates;statistical analysis;statistical properties;user-defined confidence requirements;CASSIS method;adaptive sample-size inferential statistics;inexact circuits;energy-efficient systems;logic structure;quality of service;exhaustive bit-accurate gate-level simulations;random bit-accurate gate-level simulations;confidence information;32-bit adders;Adders;Mathematical model;Estimation;Error analysis;Standards;Computational modeling;Sociology},\n  doi = {10.23919/EUSIPCO.2018.8553451},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n To design faster and more energy-efficient systems, numerous inexact arithmetic operators have been proposed, generally obtained by modifying the logic structure of conventional circuits. However, as the quality of service of an application has to be ensured, these operators need to be precisely characterized to be usable in commercial or real-life applications. The characterization of inexact operators is commonly achieved with exhaustive or random bit-accurate gate-level simulations. However, for high word lengths, the time and memory required for such simulations become prohibitive. Besides, when simulating a random sample, no confidence information is given on the precision of the characterization. To overcome these limitations, CASSIS, a new characterization method for inexact operators, is proposed. By exploiting statistical properties of the approximation error, the number of simulations needed for precise characterization is drastically reduced. From user-defined confidence requirements, the proposed method computes the minimal number of simulations to obtain the desired accuracy on the characterization. For 32-bit adders, the CASSIS method reduces the number of simulations needed to a few tens of thousands of points.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Estimating Secret Parameters of a Random Number Generator from Time Series by Auto-synchronization.\n \n \n \n \n\n\n \n Ergün, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2569-2572, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EstimatingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553452,\n  author = {S. Ergün},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Estimating Secret Parameters of a Random Number Generator from Time Series by Auto-synchronization},\n  year = {2018},\n  pages = {2569-2572},\n  abstract = {A novel estimate system is proposed to discover the security weaknesses of a chaos based random number generator (RNG). Convergence of the estimate system is proved using auto-synchronization. Secret parameters of the target RNG are recovered where the available information are the structure of the RNG and a scalar time series observed from the target chaotic system. Simulation and numerical results verifying the feasibility of the estimate system are given such that, next bit can be predicted while the same output bit sequence of the RNG can be reGenerated.},\n  keywords = {chaos;random number generation;security of data;time series;secret parameters;random number generator;auto-synchronization;security weaknesses;target RNG;scalar time series;target chaotic system;Synchronization;Cryptography;Generators;Chaotic communication;Time series analysis;Estimation;security weaknesses;random number generator;continuous-time chaos;time series;synchronization of chaotic systems;auto-synchronization},\n  doi = {10.23919/EUSIPCO.2018.8553452},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437870.pdf},\n}\n\n
\n
\n\n\n
\n A novel estimate system is proposed to discover the security weaknesses of a chaos-based random number generator (RNG). Convergence of the estimate system is proved using auto-synchronization. Secret parameters of the target RNG are recovered where the only available information is the structure of the RNG and a scalar time series observed from the target chaotic system. Simulation and numerical results verifying the feasibility of the estimate system are given, showing that the next bit can be predicted while the same output bit sequence of the RNG can be regenerated.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Feature Trajectories Selection for Video Stabilization.\n \n \n \n \n\n\n \n Guilluy, W.; Oudre, L.; and Beghdadi, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 593-597, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FeaturePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553453,\n  author = {W. Guilluy and L. Oudre and A. Beghdadi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Feature Trajectories Selection for Video Stabilization},\n  year = {2018},\n  pages = {593-597},\n  abstract = {In this paper we present a new method to select the most relevant feature trajectories that could be used in video stabilization algorithms. The main objective is to identify the most appropriate trajectories of the video that could be used for the estimation of the camera motion. We use duration and motion criteria with a global rather than local approach for outlier rejection, thus avoiding the need for a known motion model. The performance of the proposed method is evaluated on several real videos and compared to the state-of-the-art using some intuitive subjective and objective criteria.},\n  keywords = {cameras;image motion analysis;video signal processing;feature trajectories selection;video stabilization;relevant feature trajectories;camera motion;duration;motion criteria;intuitive subjective criteria;objective criteria;Trajectory;Cameras;Tracking;Feature extraction;Estimation;Europe;Signal processing;video stabilization;video processing},\n  doi = {10.23919/EUSIPCO.2018.8553453},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436190.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we present a new method to select the most relevant feature trajectories that could be used in video stabilization algorithms. The main objective is to identify the most appropriate trajectories of the video that could be used for the estimation of the camera motion. We use duration and motion criteria with a global rather than local approach for outlier rejection, thus avoiding the need for a known motion model. The performance of the proposed method is evaluated on several real videos and compared to the state-of-the-art using some intuitive subjective and objective criteria.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Unsupervised calibration of RGB-NIR capture pairs utilizing dense multimodal image correspondences.\n \n \n \n \n\n\n \n Gama, F.; Georgiev, M.; and Gotchev, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2145-2149, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"UnsupervisedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553454,\n  author = {F. Gama and M. Georgiev and A. Gotchev},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Unsupervised calibration of RGB-NIR capture pairs utilizing dense multimodal image correspondences},\n  year = {2018},\n  pages = {2145-2149},\n  abstract = {In this paper, we propose an unsupervised calibration framework aimed at calibrating RGB plus Near-InfraRed (NIR) capture setups. We favour dense feature matching for the case of multimodal data and utilize the Scale-Invariant Feature Transform (SIFT) flow, previously developed for matching same-category image objects. We develop an optimization procedure that minimizes the global disparity field between the two multimodal images in order to adapt SIFT flow for our calibration needs. The proposed optimization substantially increases the number of inliers and yields more robust and unambiguous calibration results.},\n  keywords = {calibration;feature extraction;image colour analysis;image matching;optimisation;transforms;SIFT flow;unambiguous calibration results;optimization procedure;same-category image objects;multimodal data;dense feature matching;unsupervised calibration framework;dense multimodal image correspondences;RGB-NIR capture pairs;global disparity field minimization;RGB-plus-near-infrared capture setups;scale-invariant feature transform flow;Cameras;Calibration;Feature extraction;Matched filters;Optimization;Sensors;Genetic algorithms;NIR;calibration;SIFT flow;multimodal stereo;features matching},\n  doi = {10.23919/EUSIPCO.2018.8553454},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439453.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose an unsupervised calibration framework aimed at calibrating RGB plus Near-InfraRed (NIR) capture setups. We favour dense feature matching for the case of multimodal data and utilize the Scale-Invariant Feature Transform (SIFT) flow, previously developed for matching same-category image objects. We develop an optimization procedure that minimizes the global disparity field between the two multimodal images in order to adapt SIFT flow for our calibration needs. The proposed optimization substantially increases the number of inliers and yields more robust and unambiguous calibration results.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Visual Saliency Guided High Dynamic Range Image Compression.\n \n \n \n \n\n\n \n Feng, T.; and Abhayaratne, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 166-170, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"VisualPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553456,\n  author = {T. Feng and C. Abhayaratne},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Visual Saliency Guided High Dynamic Range Image Compression},\n  year = {2018},\n  pages = {166-170},\n  abstract = {Recent years have seen the emergence of the visual saliency-based image and video compression for low dynamic range (LDR) visual content. The high dynamic range (HDR) imaging is yet to follow such an approach for compression as the state-of-the-art visual saliency detection models are mainly concerned with LDR content. Although a few HDR saliency detection models have been proposed in the recent years, they lack the comprehensive validation. Current HDR image compression schemes do not differentiate salient and non-salient regions, which has been proved redundant in terms of the Human Visual System. In this paper, we propose a novel visual saliency guided layered compression scheme for HDR images. The proposed saliency detection model is robust and highly correlates with the ground truth saliency maps obtained from eye tracker. 
The results show a reduction of bit-rates up to 50% while retaining the same high visual quality in terms of HDR-Visual Difference Predictor (HDR-VDP) and the visual saliency-induced index for perceptual image quality assessment (VSI) metrics in the salient regions.},\n  keywords = {computer vision;data compression;image coding;object detection;Visual saliency guided high dynamic range image compression;visual saliency-based image;low dynamic range visual content;high dynamic range imaging;LDR content;HDR saliency detection models;current HDR image compression schemes;Human Visual System;HDR images;saliency detection model;ground truth saliency maps;high visual quality;HDR-VDP;visual saliency-induced index;perceptual image quality assessment metrics;visual saliency;Visual Difference Predictor;visual saliency detection models;Image coding;Visualization;Saliency detection;Dynamic range;Image segmentation;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553456},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439167.pdf},\n}\n\n
\n
\n\n\n
\n Recent years have seen the emergence of visual saliency-based image and video compression for low dynamic range (LDR) visual content. High dynamic range (HDR) imaging is yet to follow such an approach for compression, as the state-of-the-art visual saliency detection models are mainly concerned with LDR content. Although a few HDR saliency detection models have been proposed in recent years, they lack comprehensive validation. Current HDR image compression schemes do not differentiate salient and non-salient regions, even though such undifferentiated coding has been shown to be redundant with respect to the Human Visual System. In this paper, we propose a novel visual saliency guided layered compression scheme for HDR images. The proposed saliency detection model is robust and highly correlates with the ground truth saliency maps obtained from an eye tracker. The results show a reduction of bit-rates of up to 50% while retaining the same high visual quality in the salient regions in terms of the HDR-Visual Difference Predictor (HDR-VDP) and the visual saliency-induced index for perceptual image quality assessment (VSI) metrics.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Model-Based Voice Activity Detection in Wireless Acoustic Sensor Networks.\n \n \n \n \n\n\n \n Zhao, Y.; Nielsen, J. K.; Christensen, M. G.; and Chen, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 425-429, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Model-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553457,\n  author = {Y. Zhao and J. K. Nielsen and M. G. Christensen and J. Chen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Model-Based Voice Activity Detection in Wireless Acoustic Sensor Networks},\n  year = {2018},\n  pages = {425-429},\n  abstract = {One of the major challenges in wireless acoustic sensor networks (WASN) based speech enhancement is robust and accurate voice activity detection (VAD). VAD is widely used in speech enhancement, speech coding, speech recognition, etc. In speech enhancement applications, VAD plays an important role, since noise statistics can be updated during non-speech frames to ensure efficient noise reduction and tolerable speech distortion. Although significant efforts have been made in single channel VAD, few solutions can be found in the multichannel case, especially in WASN. In this paper, we introduce a distributed VAD by using model-based noise power spectral density (PSD) estimation. For each node in the network, the speech PSD and noise PSD are first estimated, then a distributed detection is made by applying the generalized likelihood ratio test (GLRT). The proposed global GLRT based VAD has a quite general form. Indeed, we can judge whether the speech is present or absent by using the current time frame and frequency band observation or by taking into account the neighbouring frames and bands. Finally, the distributed GLRT result is obtained by using a distributed consensus method, such as random gossip, i.e., the whole detection system does not need any fusion center. With the model-based noise estimation method, the proposed distributed VAD performs robustly under non-stationary noise conditions, such as babble noise. 
As shown in experiments, the proposed method outperforms traditional multichannel VAD methods in terms of detection accuracy.},\n  keywords = {acoustic communication (telecommunication);speech coding;speech recognition;wireless sensor networks;neighbouring frames;distributed GLRT result;distributed consensus method;detection system;model-based noise estimation method;distributed VAD performs;nonstationary noise conditions;babble noise;traditional multichannel VAD methods;detection accuracy;model-based voice activity detection;wireless acoustic sensor networks based speech enhancement;WASN;accurate voice activity detection;speech coding;speech recognition;speech enhancement applications;noise statistics;nonspeech frames;tolerable speech distortion;single channel VAD;model-based noise power spectral density estimation;distributed detection;generalized likelihood ratio test;current time frame;frequency band observation;noise reduction;Estimation;Microphones;Speech enhancement;Voice activity detection;Time-frequency analysis;Data models;Wireless acoustic sensor networks;noise PSD estimation;distributed voice activity detection},\n  doi = {10.23919/EUSIPCO.2018.8553457},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437128.pdf},\n}\n\n
\n
\n\n\n
\n One of the major challenges in wireless acoustic sensor networks (WASN) based speech enhancement is robust and accurate voice activity detection (VAD). VAD is widely used in speech enhancement, speech coding, speech recognition, etc. In speech enhancement applications, VAD plays an important role, since noise statistics can be updated during non-speech frames to ensure efficient noise reduction and tolerable speech distortion. Although significant efforts have been made in single channel VAD, few solutions can be found in the multichannel case, especially in WASN. In this paper, we introduce a distributed VAD by using model-based noise power spectral density (PSD) estimation. For each node in the network, the speech PSD and noise PSD are first estimated, then a distributed detection is made by applying the generalized likelihood ratio test (GLRT). The proposed global GLRT based VAD has a quite general form. Indeed, we can judge whether the speech is present or absent by using the current time frame and frequency band observation or by taking into account the neighbouring frames and bands. Finally, the distributed GLRT result is obtained by using a distributed consensus method, such as random gossip, i.e., the whole detection system does not need any fusion center. With the model-based noise estimation method, the proposed distributed VAD performs robustly under non-stationary noise conditions, such as babble noise. As shown in experiments, the proposed method outperforms traditional multichannel VAD methods in terms of detection accuracy.\n
\n\n\n
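The fusion-center-free consensus step mentioned in the abstract above can be illustrated with a minimal random-gossip sketch. This is not the paper's implementation; the per-node values below are hypothetical placeholders for the local GLRT log-likelihood ratios.

```python
import numpy as np

def random_gossip(values, n_iters=2000, seed=None):
    """Average local statistics by pairwise random gossip (no fusion center).

    At each step a random pair of nodes replaces both of their values with
    the pair's mean; all values converge to the network-wide average.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).copy()
    n = len(x)
    for _ in range(n_iters):
        i, j = rng.choice(n, size=2, replace=False)
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

# Each node holds a local log-likelihood ratio (speech vs. noise); gossip
# drives every node toward the global average, which each node can then
# threshold locally for the distributed VAD decision.
local_llr = [2.1, -0.3, 1.7, 0.9]          # hypothetical per-node statistics
consensus = random_gossip(local_llr, seed=0)
speech_present = consensus.mean() > 0.0
```

Pairwise averaging preserves the network sum at every step, so the common limit is exactly the mean of the initial statistics.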
\n\n\n
\n \n\n \n \n \n \n \n \n Simultaneous Estimation of a System Matrix by Compressed Sensing and Finding Optimal Regularization Parameters for the Inversion Problem.\n \n \n \n \n\n\n \n Maass, M.; Koch, P.; Katzberg, F.; and Mertins, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1950-1954, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SimultaneousPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553458,\n  author = {M. Maass and P. Koch and F. Katzberg and A. Mertins},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Simultaneous Estimation of a System Matrix by Compressed Sensing and Finding Optimal Regularization Parameters for the Inversion Problem},\n  year = {2018},\n  pages = {1950-1954},\n  abstract = {This paper deals with the problem of measuring the system matrix of a linear system model with the help of test signals and using the estimated matrix within an inverse problem. In some cases, such as medical imaging, the process of measuring the system matrix can be very time and memory consuming. Fortunately, the underlying physical relationships often have a sparse representation, and in such situations, compressed-sensing techniques may be used to predict the system matrix. However, since there may be systematic errors inside the predicted matrix, its inversion can cause significant noise amplification and large errors on the reconstructed quantities. To combat this, regularization methods are often applied. In this paper, based on the singular value decomposition, the minimum mean square error estimator, and Stein's unbiased risk estimate, we show how optimal regularization parameters can be obtained from a few number of measurements. 
The efficiency of our approach is shown for two different systems.},\n  keywords = {compressed sensing;inverse problems;least mean squares methods;mean square error methods;singular value decomposition;system matrix;inversion problem;linear system model;estimated matrix;inverse problem;predicted matrix;minimum mean square error estimator;optimal regularization parameters;Stein's unbiased risk estimate;Sparse matrices;Estimation;Matrix decomposition;Image reconstruction;Mean square error methods;Voltage measurement;Signal processing;Inverse systems;minimum mean square error;compressed sensing;singular value decomposition;systematic error},\n  doi = {10.23919/EUSIPCO.2018.8553458},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437960.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with the problem of measuring the system matrix of a linear system model with the help of test signals and using the estimated matrix within an inverse problem. In some cases, such as medical imaging, the process of measuring the system matrix can be very time and memory consuming. Fortunately, the underlying physical relationships often have a sparse representation, and in such situations, compressed-sensing techniques may be used to predict the system matrix. However, since there may be systematic errors inside the predicted matrix, its inversion can cause significant noise amplification and large errors in the reconstructed quantities. To combat this, regularization methods are often applied. In this paper, based on the singular value decomposition, the minimum mean square error estimator, and Stein's unbiased risk estimate, we show how optimal regularization parameters can be obtained from a small number of measurements. The efficiency of our approach is shown for two different systems.\n
\n\n\n
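As a rough illustration of the ingredients named in the abstract above, the sketch below combines the SVD filter factors of Tikhonov regularization with a SURE-style risk estimate evaluated on a grid of regularization parameters. The test matrix, noise level, and lambda grid are made up for the example; this is not the authors' estimator.

```python
import numpy as np

def tikhonov_svd(A, y, lam):
    """Tikhonov-regularized solution via SVD filter factors s/(s^2 + lam)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s / (s**2 + lam)                     # filtered inverse singular values
    return Vt.T @ (f * (U.T @ y))

def sure_lambda(A, y, sigma, lams):
    """Pick lambda minimizing a SURE-style risk estimate (Tikhonov case)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    best, best_risk = None, np.inf
    for lam in lams:
        x = Vt.T @ ((s / (s**2 + lam)) * (U.T @ y))
        df = np.sum(s**2 / (s**2 + lam))     # effective degrees of freedom
        risk = np.sum((A @ x - y) ** 2) + 2 * sigma**2 * df
        if risk < best_risk:
            best, best_risk = lam, risk
    return best

# Ill-conditioned toy system: random matrix with decaying column scales.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20)) @ np.diag(np.logspace(0, -3, 20))
x_true = rng.standard_normal(20)
sigma = 0.05
y = A @ x_true + sigma * rng.standard_normal(50)

lam = sure_lambda(A, y, sigma, np.logspace(-8, 0, 30))
x_hat = tikhonov_svd(A, y, lam)
```

Larger lambda shrinks the solution norm monotonically, which is the mechanism that tames the noise amplification from the small singular values.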
\n\n\n
\n \n\n \n \n \n \n \n \n Comparison of Parametric and Non-Parametric Population Modelling of Sport Performances.\n \n \n \n \n\n\n \n Bermon, S.; Metelkina, A.; and Rendas, M. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 301-305, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ComparisonPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553459,\n  author = {S. Bermon and A. Metelkina and M. J. Rendas},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Comparison of Parametric and Non-Parametric Population Modelling of Sport Performances},\n  year = {2018},\n  pages = {301-305},\n  abstract = {This work compares the performance of parametric mixed-effects models to a completely non-parametric (NP) approach for modelling life-long evolution of competition performance for athletes. The difficulty of the problem lies in the strongly unbalanced characteristics of the functional dataset. The prediction performance of the identified models is compared, revealing the advantages and limitations of the two approaches. As far as we know this is the first time NP modelling of athletic performance is attempted, our study confirming its appropriateness whenever sufficiently rich datasets are available.},\n  keywords = {nonparametric statistics;sport;modelling life-long evolution;competition performance;functional dataset;prediction performance;identified models;time NP modelling;athletic performance;nonparametric population modelling;sport performances;parametric mixed-effects models;unbalanced characteristics;parametric population modelling;Sociology;Statistics;Predictive models;Computational modeling;Data models;Europe;Signal processing;Functional data;longitudinal population models;mixed-effects models;Gaussian Processes;Hierarchical Bayesian Gauss-Wishart models;athletic performance},\n  doi = {10.23919/EUSIPCO.2018.8553459},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437958.pdf},\n}\n\n
\n
\n\n\n
\n This work compares the performance of parametric mixed-effects models to a completely non-parametric (NP) approach for modelling the life-long evolution of competition performance for athletes. The difficulty of the problem lies in the strongly unbalanced characteristics of the functional dataset. The prediction performance of the identified models is compared, revealing the advantages and limitations of the two approaches. To the best of our knowledge, this is the first time NP modelling of athletic performance has been attempted; our study confirms its appropriateness whenever sufficiently rich datasets are available.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Computationally Efficient Estimation of Multi-dimensional Damped Modes using Sparse Wideband Dictionaries*.\n \n \n \n\n\n \n Jälmby, M.; Swärd, J.; Elvander, F.; and Jakobsson, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1745-1749, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553460,\n  author = {M. Jälmby and J. Swärd and F. Elvander and A. Jakobsson},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Computationally Efficient Estimation of Multi-dimensional Damped Modes using Sparse Wideband Dictionaries*},\n  year = {2018},\n  pages = {1745-1749},\n  abstract = {Estimating the parameters of non-uniformly sampled multi-dimensional damped modes is computationally cumbersome, especially if the model order of the signal is not assumed to be known a priori. In this work, we examine the possibility of using the recently introduced wideband dictionary framework to formulate a computationally efficient estimator that iteratively refines the estimates of the candidate frequency and damping coefficients for each component. The proposed wideband dictionary allows for the use of a coarse initial grid without increasing the risk of not identifying closely spaced components, resulting in a substantial reduction in computational complexity. The performance of the proposed method is illustrated using both simulated and real spectroscopy data, clearly showing the improved performance as compared to previous techniques.},\n  keywords = {computational complexity;damping;gradient methods;iterative methods;parameter estimation;signal sampling;damping coefficients;computational complexity;computationally efficient estimation;multidimensional damped modes;sparse wideband dictionaries;nonuniformly sampled multidimensional;recently introduced wideband dictionary framework;computationally efficient estimator;candidate frequency;Dictionaries;Frequency estimation;Wideband;Damping;Hypercubes;Estimation;Computational complexity;Sparse signal analysis;dictionary learning;damped sinusoids;wideband dictionaries},\n  doi = {10.23919/EUSIPCO.2018.8553460},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Estimating the parameters of non-uniformly sampled multi-dimensional damped modes is computationally cumbersome, especially if the model order of the signal is not assumed to be known a priori. In this work, we examine the possibility of using the recently introduced wideband dictionary framework to formulate a computationally efficient estimator that iteratively refines the estimates of the candidate frequency and damping coefficients for each component. The proposed wideband dictionary allows for the use of a coarse initial grid without increasing the risk of not identifying closely spaced components, resulting in a substantial reduction in computational complexity. The performance of the proposed method is illustrated using both simulated and real spectroscopy data, clearly showing the improved performance as compared to previous techniques.\n
\n\n\n
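The coarse-to-fine idea behind the wideband-dictionary estimator above can be sketched in one dimension. This is a simplified stand-in, not the paper's joint estimator: frequency is refined by plain grid zooming against undamped atoms, and the damping of the (noiseless, single-mode) signal is read off its log-envelope afterwards.

```python
import numpy as np

def refine_freq(y, t, f_lo, f_hi, k=64, levels=3):
    """Coarse-to-fine frequency search: score a grid of candidate atoms,
    keep the best cell, then zoom the grid into that cell -- far cheaper
    than evaluating one huge fine grid over the whole band."""
    for _ in range(levels):
        grid = np.linspace(f_lo, f_hi, k)
        scores = [np.abs(np.vdot(np.exp(2j * np.pi * f * t), y)) for f in grid]
        f_best = grid[int(np.argmax(scores))]
        step = (f_hi - f_lo) / (k - 1)
        f_lo, f_hi = f_best - step, f_best + step   # zoom into winning cell
    return f_best

t = np.arange(256)
f0, b0 = 0.2310, 0.01
y = np.exp((-b0 + 2j * np.pi * f0) * t)             # single damped mode
f_hat = refine_freq(y, t, 0.0, 0.5)
b_hat = -np.polyfit(t, np.log(np.abs(y)), 1)[0]     # damping from log-envelope
```

Each zoom level multiplies the effective grid resolution by roughly k/2 while keeping the per-level cost fixed, which is the computational saving the wideband-dictionary framework exploits.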
\n\n\n
\n \n\n \n \n \n \n \n \n Deep Neural Networks for Joint Voice Activity Detection and Speaker Localization.\n \n \n \n \n\n\n \n Vecchiotti, P.; Principi, E.; Squartini, S.; and Piazza, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1567-1571, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DeepPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553461,\n  author = {P. Vecchiotti and E. Principi and S. Squartini and F. Piazza},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Deep Neural Networks for Joint Voice Activity Detection and Speaker Localization},\n  year = {2018},\n  pages = {1567-1571},\n  abstract = {Detecting the presence of speakers and suitably localize them in indoor environments undoubtedly represent two important tasks in the speech processing community. Several algorithms have been proposed for Voice Activity Detection (VAD) and Speaker LOCalization (SLOC) so far, while their accomplishment by means of a joint integrated model has not received much attention. In particular, no studies focused on cooperative exploitation of VAD and SLOC information by means of machine learning have been conducted, up to the authors' knowledge. That is why the authors propose in this work a data driven approach for joint speech detection and speaker localization, relying on Convolutional Neural Network (CNN) which simultaneously process LogMel and GCC-PHAT Patterns features. The proposed algorithm is compared with a two-stage model composed by the cascade of a neural network (NN) based VAD and an NN based SLOC, discussed in previous authors' contributions. Computer simulations, accomplished against the DIRHA dataset addressing a multi-room acoustic environment, show that the proposed method allows to achieve a remarkable relative reduction of speech activity detection error equal to 33% compared to the original NN based VAD. 
Moreover, the overall localization accuracy is improved as well, by employing the joint model as speech detector and the standard neural SLOC system in cascade.},\n  keywords = {feedforward neural nets;microphones;speaker recognition;speech enhancement;speech recognition;VAD;speech activity detection error;localization accuracy;joint model;standard neural SLOC system;deep neural networks;joint voice activity detection;speaker localization;indoor environments;joint integrated model;SLOC information;joint speech detection;Convolutional Neural Network;Microphones;Voice activity detection;Neural networks;Computational modeling;Task analysis;Feature extraction},\n  doi = {10.23919/EUSIPCO.2018.8553461},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437130.pdf},\n}\n\n
\n
\n\n\n
\n Detecting the presence of speakers and suitably localizing them in indoor environments are undoubtedly two important tasks in the speech processing community. Several algorithms have been proposed for Voice Activity Detection (VAD) and Speaker LOCalization (SLOC) so far, while their accomplishment by means of a joint integrated model has not received much attention. In particular, no studies focused on cooperative exploitation of VAD and SLOC information by means of machine learning have been conducted, to the authors' knowledge. That is why the authors propose in this work a data driven approach for joint speech detection and speaker localization, relying on a Convolutional Neural Network (CNN) which simultaneously processes LogMel and GCC-PHAT Patterns features. The proposed algorithm is compared with a two-stage model composed of the cascade of a neural network (NN) based VAD and an NN based SLOC, discussed in previous authors' contributions. Computer simulations, accomplished against the DIRHA dataset addressing a multi-room acoustic environment, show that the proposed method achieves a remarkable 33% relative reduction of the speech activity detection error compared to the original NN based VAD. Moreover, the overall localization accuracy is improved as well, by employing the joint model as speech detector and the standard neural SLOC system in cascade.\n
\n\n\n
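One of the two input features mentioned above, the GCC-PHAT pattern, reduces in its simplest form to a phase-transform-weighted cross-correlation used for time-difference-of-arrival estimation between two microphones. A minimal sketch with a synthetic integer sample delay (the sample rate and search window are arbitrary choices for the example):

```python
import numpy as np

def gcc_phat(sig, ref, fs=16000, max_tau=0.001):
    """Delay of `sig` relative to `ref` via GCC-PHAT.

    The cross-spectrum is whitened (phase transform) so the peak of its
    inverse FFT depends only on the delay, not on the source spectrum.
    """
    n = len(sig) + len(ref)
    SIG = np.fft.rfft(sig, n)
    REF = np.fft.rfft(ref, n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12              # PHAT weighting
    cc = np.fft.irfft(cross, n)
    max_shift = int(fs * max_tau)               # restrict to plausible lags
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = int(np.argmax(np.abs(cc))) - max_shift
    return shift / fs

fs = 16000
rng = np.random.default_rng(0)
s = rng.standard_normal(4096)
delay = 5                                        # samples
x = s
y = np.concatenate((np.zeros(delay), s[:-delay]))  # delayed copy of x
tau = gcc_phat(y, x, fs=fs)                      # y lags x by 5 samples
```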
\n\n\n
\n \n\n \n \n \n \n \n \n Free-Walking 3D Pedestrian Large Trajectory Reconstruction from IMU Sensors.\n \n \n \n \n\n\n \n Li, H.; Derrode, S.; Benyoussef, L.; and Pieczynski, W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 657-661, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Free-WalkingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553462,\n  author = {H. Li and S. Derrode and L. Benyoussef and W. Pieczynski},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Free-Walking 3D Pedestrian Large Trajectory Reconstruction from IMU Sensors},\n  year = {2018},\n  pages = {657-661},\n  abstract = {This paper presents a pedestrian navigation algorithm based on a foot-mounted 9 Degree of Freedom (DOF) Inertial Measurement Unit (IMU), which provides tri-axial accelerations, angular rates and magnetics. Most algorithms used worldwide employ Zero Velocity Update (ZUPT) to reduce the tremendous error of integration from acceleration to displacement. The crucial part in ZUPT is to detect stance phase precisely. A cyclic left-to-right style Hidden Markov Model is introduced in this work which is able to appropriately model the periodic nature of signals. Stance detection is then made unsupervised by using a suited learning algorithm. Then orientation estimation is performed independently by a quaternion-based method, a simplified error-state Extended Kalman Filter (EKF) assists trajectory reconstruction in 3D space, neither extra method nor prior knowledge is needed to estimate the height. Experimental results on large free-walking trajectories show that the proposed algorithm can provide more accurate locations, especially in z-axis compared to competitive algorithms, w.r.t. 
to a ground-truth obtained using OpenStreetMap.},\n  keywords = {gait analysis;hidden Markov models;inertial navigation;Kalman filters;nonlinear filters;pedestrians;signal reconstruction;unsupervised learning;cyclic left-to-right style hidden Markov model;simplified error-state extended Kalman filter;foot-mounted 9 degree of freedom inertial measurement unit;zero velocity update;learning algorithm;ZUPT;angular rates;tri-axial accelerations;pedestrian navigation algorithm;IMU sensors;3D pedestrian;competitive algorithms;trajectories;trajectory reconstruction;quaternion-based method;orientation estimation;Sensors;Estimation;Magnetics;Earth;Quaternions;Signal processing algorithms;Acceleration;Pedestrian navigation;Inertial sensor;Hidden Markov model;Kalman filter;Stance detection},\n  doi = {10.23919/EUSIPCO.2018.8553462},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436847.pdf},\n}\n\n
\n
\n\n\n
\n This paper presents a pedestrian navigation algorithm based on a foot-mounted 9 Degree of Freedom (DOF) Inertial Measurement Unit (IMU), which provides tri-axial accelerations, angular rates and magnetics. Most algorithms used worldwide employ Zero Velocity Update (ZUPT) to reduce the tremendous error of integration from acceleration to displacement. The crucial part in ZUPT is to detect the stance phase precisely. A cyclic left-to-right style Hidden Markov Model is introduced in this work, which is able to appropriately model the periodic nature of the signals. Stance detection is then made unsupervised by using a suited learning algorithm. Orientation estimation is performed independently by a quaternion-based method; a simplified error-state Extended Kalman Filter (EKF) assists trajectory reconstruction in 3D space, and neither an extra method nor prior knowledge is needed to estimate the height. Experimental results on large free-walking trajectories show that the proposed algorithm can provide more accurate locations, especially along the z-axis, than competitive algorithms, w.r.t. a ground-truth obtained using OpenStreetMap.\n
\n\n\n
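The ZUPT mechanism the abstract relies on can be sketched in one dimension: integrate acceleration during the swing phase, and clamp velocity to zero during stance. The stance mask and bias values below are hand-made for illustration; in the paper the stance phases come from the unsupervised HMM, not from a given mask.

```python
import numpy as np

def zupt_velocity(acc, stance, dt):
    """Integrate acceleration to velocity, resetting to zero during stance.

    Zero Velocity Updates (ZUPT) kill the drift that plain integration of
    a biased accelerometer accumulates between foot-ground contacts.
    """
    v = np.zeros_like(acc)
    for k in range(1, len(acc)):
        v[k] = 0.0 if stance[k] else v[k - 1] + acc[k] * dt
    return v

# Synthetic 1-D gait: stance, swing (2 m/s^2), stance -- with a constant
# 0.05 m/s^2 sensor bias corrupting every sample.
dt = 0.01
acc = np.concatenate([np.zeros(50), np.full(50, 2.0), np.zeros(50)])
acc_noisy = acc + 0.05
stance = np.concatenate([np.ones(50), np.zeros(50), np.ones(50)]).astype(bool)

v = zupt_velocity(acc_noisy, stance, dt)        # with ZUPT
v_free = np.cumsum(acc_noisy) * dt              # without ZUPT: bias integrates
```

With ZUPT the velocity returns to exactly zero at every stance, while the unaided integral keeps accumulating the bias.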
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Inverse Scattering Algorithm by Incorporating RPM Method for Microwave Non-destructive Imaging.\n \n \n \n \n\n\n \n Takahashi, S.; and Kidera, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1222-1226, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553463,\n  author = {S. Takahashi and S. Kidera},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Inverse Scattering Algorithm by Incorporating RPM Method for Microwave Non-destructive Imaging},\n  year = {2018},\n  pages = {1222-1226},\n  abstract = {Microwave non-destructive testing (NDT) is promising for detection of an air cavity or a metallic corrosion buried into concrete road or tunnel. The distorted born iterative method (DBIM) is one of the most effective approaches to reconstruct dielectric map and recognize a high contrast object. However, in an actual NDT observation model, it must be assumed that only reflection field is available, which makes the ill-posed problem extremely difficult. To solve this problem, the range points migration (RPM) based object area extraction scheme is incorporated into the DBIM, where the area of region of interest (ROI) is considerably downsized to reduce the number of unknowns in the DBIM. 
The finite-difference time-domain (FDTD) based numerical simulation tests demonstrate that our proposed method effectively enhance the accuracy of dielectric map reconstruction for the sparsely distributed object model.},\n  keywords = {corrosion testing;electromagnetic wave scattering;finite difference time-domain analysis;image reconstruction;inverse problems;iterative methods;mechanical engineering computing;microwave imaging;nondestructive testing;DBIM;high contrast object;actual NDT observation model;reflection field;range points;numerical simulation tests;dielectric map reconstruction;sparsely distributed object model;RPM method;microwave nondestructive imaging;microwave nondestructive testing;air cavity;metallic corrosion;concrete road;distorted born iterative method;inverse scattering algorithm;RPM based object area extraction scheme;finite-difference time-domain based numerical simulation test;Dielectrics;Media;Permittivity;Image reconstruction;Conductivity;Inverse problems;Cavity resonators;Inverse scattering analysis;Non-destructive testing (NDT);Distorted born iterative method (DBIM);Range points migration(RPM)},\n  doi = {10.23919/EUSIPCO.2018.8553463},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436606.pdf},\n}\n\n
\n
\n\n\n
\n Microwave non-destructive testing (NDT) is promising for the detection of an air cavity or metallic corrosion buried in a concrete road or tunnel. The distorted Born iterative method (DBIM) is one of the most effective approaches to reconstruct the dielectric map and recognize a high contrast object. However, in an actual NDT observation model, it must be assumed that only the reflection field is available, which makes the ill-posed problem extremely difficult. To solve this problem, the range points migration (RPM) based object area extraction scheme is incorporated into the DBIM, where the area of the region of interest (ROI) is considerably downsized to reduce the number of unknowns in the DBIM. The finite-difference time-domain (FDTD) based numerical simulation tests demonstrate that our proposed method effectively enhances the accuracy of dielectric map reconstruction for the sparsely distributed object model.\n
\n\n\n
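The benefit of downsizing the unknowns to an ROI can be sketched on a generic linearized problem y = Gx; the random operator below is a stand-in, not a scattering kernel. Restricting the columns of G to the ROI mask turns an underdetermined system (120 measurements, 400 pixels) into a well-posed one (120 measurements, 20 unknowns):

```python
import numpy as np

# Hypothetical linearized forward model: y = G @ x, with the contrast x
# known (e.g. via an RPM-style extraction) to live inside a small ROI.
rng = np.random.default_rng(0)
n_pix, n_meas = 400, 120
G = rng.standard_normal((n_meas, n_pix))

roi = np.zeros(n_pix, dtype=bool)
roi[150:170] = True                      # 20 unknowns instead of 400
x_true = np.zeros(n_pix)
x_true[roi] = rng.standard_normal(roi.sum())
y = G @ x_true

# Solve only for the ROI unknowns: overdetermined, full column rank.
x_roi, *_ = np.linalg.lstsq(G[:, roi], y, rcond=None)
x_hat = np.zeros(n_pix)
x_hat[roi] = x_roi
```

With noiseless data and full column rank, the reduced least-squares problem recovers the ROI contrast exactly, whereas the full 400-unknown system would be underdetermined.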
\n\n\n
\n \n\n \n \n \n \n \n \n Intrinsic Light Field Decomposition and Disparity Estimation with Deep Encoder-Decoder Network.\n \n \n \n \n\n\n \n Alperovich, A.; Johannsen, O.; and Goldluecke, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2165-2169, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"IntrinsicPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553464,\n  author = {A. Alperovich and O. Johannsen and B. Goldluecke},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Intrinsic Light Field Decomposition and Disparity Estimation with Deep Encoder-Decoder Network},\n  year = {2018},\n  pages = {2165-2169},\n  abstract = {We present an encoder-decoder deep neural network that solves non-Lambertian intrinsic light field decomposition, where we recover all three intrinsic components: albedo, shading, and specularity. We learn a sparse set of features from 3D epipolar volumes and use them in separate decoder pathways to reconstruct intrinsic light fields. While being trained on synthetic data generated with Blender, our model still generalizes to real world examples captured with a Lytro Illum plenoptic camera. The proposed method outperforms state-of-the-art approaches for single images and achieves competitive accuracy with recent modeling methods for light fields.},\n  keywords = {cameras;computer vision;image reconstruction;image representation;image sensors;learning (artificial intelligence);neural nets;deep encoder-decoder network;encoder-decoder deep neural network;solves nonLambertian intrinsic light field decomposition;intrinsic components;3D epipolar volumes;separate decoder pathways;intrinsic light fields;disparity estimation;Decoding;Estimation;Three-dimensional displays;Convolution;Two dimensional displays;Tensile stress;Cameras},\n  doi = {10.23919/EUSIPCO.2018.8553464},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438941.pdf},\n}\n\n
\n
\n\n\n
\n We present an encoder-decoder deep neural network that solves non-Lambertian intrinsic light field decomposition, where we recover all three intrinsic components: albedo, shading, and specularity. We learn a sparse set of features from 3D epipolar volumes and use them in separate decoder pathways to reconstruct intrinsic light fields. While being trained on synthetic data generated with Blender, our model still generalizes to real world examples captured with a Lytro Illum plenoptic camera. The proposed method outperforms state-of-the-art approaches for single images and achieves competitive accuracy with recent modeling methods for light fields.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improving Graph Convolutional Networks with Non-Parametric Activation Functions.\n \n \n \n \n\n\n \n Scardapane, S.; Van Vaerenbergh, S.; Comminiello, D.; and Uncini, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 872-876, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553465,\n  author = {S. Scardapane and S. {Van Vaerenbergh} and D. Comminiello and A. Uncini},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Improving Graph Convolutional Networks with Non-Parametric Activation Functions},\n  year = {2018},\n  pages = {872-876},\n  abstract = {Graph neural networks (GNNs) are a class of neural networks that allow to efficiently perform inference on data that is associated to a graph structure, such as, e.g., citation networks or knowledge graphs. While several variants of GNNs have been proposed, they only consider simple nonlinear activation functions in their layers, such as rectifiers or squashing functions. In this paper, we investigate the use of graph convolutional networks (GCNs) when combined with more complex activation functions, able to adapt from the training data. More specifically, we extend the recently proposed kernel activation function, a non-parametric model which can be implemented easily, can be regularized with standard lp-norms techniques, and is smooth over its entire domain. Our experimental evaluation shows that the proposed architecture can significantly improve over its baseline, while similar improvements cannot be obtained by simply increasing the depth or size of the original GCN.},\n  keywords = {convolution;feedforward neural nets;graph theory;learning (artificial intelligence);nonparametric statistics;transfer functions;knowledge graphs;GNNs;rectifiers;squashing functions;graph convolutional networks;nonparametric activation functions;graph neural networks;graph structure;citation networks;nonlinear activation functions;kernel activation function;standard lp-norms techniques;GCN;Kernel;Convolution;Standards;Fourier transforms;Artificial neural networks},\n  doi = {10.23919/EUSIPCO.2018.8553465},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436745.pdf},\n}\n\n
\n
\n\n\n
\n Graph neural networks (GNNs) are a class of neural networks that allow efficient inference on data associated with a graph structure, such as, e.g., citation networks or knowledge graphs. While several variants of GNNs have been proposed, they only consider simple nonlinear activation functions in their layers, such as rectifiers or squashing functions. In this paper, we investigate the use of graph convolutional networks (GCNs) when combined with more complex activation functions, able to adapt from the training data. More specifically, we extend the recently proposed kernel activation function, a non-parametric model which can be implemented easily, can be regularized with standard lp-norm techniques, and is smooth over its entire domain. Our experimental evaluation shows that the proposed architecture can significantly improve over its baseline, while similar improvements cannot be obtained by simply increasing the depth or size of the original GCN.\n
\n\n\n
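A kernel activation function of the kind extended in this paper is, in its simplest form, a mixture of Gaussian kernels over a fixed dictionary, where only the mixing weights are trainable. A sketch with hand-picked (untrained) weights chosen to mimic a soft step; the dictionary size and bandwidth are arbitrary example values:

```python
import numpy as np

def kaf(x, mixing, dictionary, gamma):
    """Kernel activation function: elementwise mixture of Gaussian kernels.

    Each scalar activation is sum_i alpha_i * exp(-gamma * (x - d_i)^2)
    over a fixed dictionary d_i; the weights alpha_i are the learnable
    part during training (here they are simply fixed by hand).
    """
    diff = x[..., None] - dictionary          # broadcast over dictionary atoms
    return (mixing * np.exp(-gamma * diff**2)).sum(axis=-1)

# Fixed uniform dictionary, as in non-parametric activation schemes.
dictionary = np.linspace(-2.0, 2.0, 9)
mixing = (dictionary > 0).astype(float)       # hypothetical, untrained weights
x = np.array([-1.5, 0.0, 1.5])
y = kaf(x, mixing, dictionary, gamma=1.0)
```

Because the kernel is smooth and the dictionary is fixed, the activation stays differentiable everywhere and adds only one weight vector per layer to the trainable parameters.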
\n\n\n
\n \n\n \n \n \n \n \n \n High-Order CPD Estimation with Dimensionality Reduction Using a Tensor Train Model.\n \n \n \n \n\n\n \n Zniyed, Y.; Boyer, R.; de Almeida , A. L. F.; and Favier, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2613-2617, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"High-OrderPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553466,\n  author = {Y. Zniyed and R. Boyer and A. L. F. {de Almeida} and G. Favier},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {High-Order CPD Estimation with Dimensionality Reduction Using a Tensor Train Model},\n  year = {2018},\n  pages = {2613-2617},\n  abstract = {The canonical polyadic decomposition (CPD) is one of the most popular tensor-based analysis tools due to its usefulness in numerous fields of application. The Q-order CPD is parametrized by Q matrices also called factors which have to be recovered. The factors estimation is usually carried out by means of the alternating least squares (ALS) algorithm. In the context of multi-modal big data analysis, i.e., large order (Q) and dimensions, the ALS algorithm has two main drawbacks. Firstly, its convergence is generally slow and may fail, in particular for large values of Q, and secondly it is highly time consuming. In this paper, it is proved that a Q-order CPD of rank-R is equivalent to a train of Q 3-order CPD(s) of rank-R. In other words, each tensor train (TT)-core admits a 3-order CPD of rank-R. Based on the structure of the TT-cores, a new dimensionality reduction and factor retrieval scheme is derived. 
The proposed method has a better robustness to noise with a smaller computational cost than the ALS algorithm.},\n  keywords = {Big Data;data analysis;data reduction;least squares approximations;matrix decomposition;tensors;multimodal Big Data analysis;tensor-based analysis tools;Q matrices;Q 3-order CPD;tensor train model;high-order CPD estimation;factor retrieval scheme;dimensionality reduction;tensor train-core;rank-R;ALS algorithm;alternating least squares algorithm;factors estimation;Q-order CPD;canonical polyadic decomposition;Tensile stress;Signal processing algorithms;Complexity theory;Estimation;Signal processing;Dimensionality reduction;Europe;Tensor decompositions;CP decomposition;Tensor train;Big data;Multidimensionnal signal processing;Parameter estimation;Fast algorithms},\n  doi = {10.23919/EUSIPCO.2018.8553466},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570430320.pdf},\n}\n\n
\n
\n\n\n
\n The canonical polyadic decomposition (CPD) is one of the most popular tensor-based analysis tools due to its usefulness in numerous fields of application. The Q-order CPD is parametrized by Q matrices, also called factors, which have to be recovered. Factor estimation is usually carried out by means of the alternating least squares (ALS) algorithm. In the context of multi-modal big data analysis, i.e., large order (Q) and dimensions, the ALS algorithm has two main drawbacks. Firstly, its convergence is generally slow and may fail, in particular for large values of Q; secondly, it is highly time consuming. In this paper, it is proved that a Q-order CPD of rank R is equivalent to a train of Q 3-order CPDs of rank R. In other words, each tensor train (TT) core admits a 3-order CPD of rank R. Based on the structure of the TT-cores, a new dimensionality reduction and factor retrieval scheme is derived. The proposed method is more robust to noise and has a smaller computational cost than the ALS algorithm.\n
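As a toy illustration of the ALS factor estimation the abstract refers to (not the authors' TT-based scheme), a minimal alternating least squares loop for a 3-order rank-R CPD might look as follows; all names and dimensions here are illustrative:

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker (Khatri-Rao) product
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def als_cpd3(T, R, iters=500, seed=0):
    """Plain ALS for a rank-R CPD of a 3-order tensor T ~ sum_r a_r o b_r o c_r."""
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    # Mode unfoldings consistent with row-major reshape
    T1 = T.reshape(I, J * K)
    T2 = np.moveaxis(T, 1, 0).reshape(J, I * K)
    T3 = np.moveaxis(T, 2, 0).reshape(K, I * J)
    for _ in range(iters):
        # Each factor solves a linear least-squares problem with the others fixed
        A = np.linalg.lstsq(khatri_rao(B, C), T1.T, rcond=None)[0].T
        B = np.linalg.lstsq(khatri_rao(A, C), T2.T, rcond=None)[0].T
        C = np.linalg.lstsq(khatri_rao(A, B), T3.T, rcond=None)[0].T
    return A, B, C
```

On an exact low-rank tensor this loop typically converges, but, as the abstract notes, convergence of ALS can be slow or fail for large orders and dimensions, which is what motivates the TT-based alternative.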
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Authentication of Galileo GNSS Signal by Superimposed Signature with Artificial Noise.\n \n \n \n \n\n\n \n Formaggio, F.; Tomasin, S.; Caparra, G.; Ceccato, S.; and Laurenti, N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2573-2577, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AuthenticationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553467,\n  author = {F. Formaggio and S. Tomasin and G. Caparra and S. Ceccato and N. Laurenti},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Authentication of Galileo GNSS Signal by Superimposed Signature with Artificial Noise},\n  year = {2018},\n  pages = {2573-2577},\n  abstract = {Global navigation satellite systems (GNSS) are widely used in many civil applications to get information on position, velocity and timing (PVT). However, current systems (such as global positioning system (GPS) and Galileo) do not include any feature to authenticate the received signal, therefore leaving open the possibility from an attacker to spoof the GNSS signal and induce a wrong PVT computation at the receiver. In this paper we propose a solution based on the superposition of an authentication message (signature) and artificial noise (AN) on top of the existing navigation signal. Both the authentication message and AN are unpredictable and therefore can not be arbitrarily generated by an attacker. After transmission, through an external public authenticated but asynchronous (thus not useful for PVT) channel, both the authentication message and the AN are revealed, allowing the receiver to check if they were present along the previously received navigation signal. We consider the hypothesis testing problem at the legitimate receiver to decide the authenticity of the message, and we analyze its performance under two attacks: a generation attack in which the attacker does not generate the authentication signal and a replay attack in which a legitimate (including authentication message) signal is replayed by the attacker with a suitable delay in order to induce the desired PVT at the victim. 
The receiver operating characteristic (ROC) curve is obtained for the hypothesis testing problem under the two attacks.},\n  keywords = {cryptography;Global Positioning System;Global Navigation Satellite Systems;Global Positioning System;Galileo GNSS signal authentication;PVT computation;position velocity and timing computation;AN;receiver operating characteristic curve;authentication signal;legitimate receiver;hypothesis testing problem;received navigation signal;authentication message;artificial noise;superimposed signature;Authentication;Global navigation satellite system;Receivers;Testing;Satellite broadcasting;Global Navigation Satellite Systems (GNSS);Anti-Spoofing;Artificial Noise},\n  doi = {10.23919/EUSIPCO.2018.8553467},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439318.pdf},\n}\n\n
\n
\n\n\n
\n Global navigation satellite systems (GNSS) are widely used in many civil applications to obtain information on position, velocity and timing (PVT). However, current systems (such as the global positioning system (GPS) and Galileo) do not include any feature to authenticate the received signal, leaving open the possibility for an attacker to spoof the GNSS signal and induce a wrong PVT computation at the receiver. In this paper we propose a solution based on the superposition of an authentication message (signature) and artificial noise (AN) on top of the existing navigation signal. Both the authentication message and the AN are unpredictable and therefore cannot be arbitrarily generated by an attacker. After transmission, both the authentication message and the AN are revealed through an external public authenticated but asynchronous (thus not useful for PVT) channel, allowing the receiver to check whether they were present in the previously received navigation signal. We consider the hypothesis testing problem at the legitimate receiver to decide the authenticity of the message, and we analyze its performance under two attacks: a generation attack, in which the attacker does not generate the authentication signal, and a replay attack, in which a legitimate signal (including the authentication message) is replayed by the attacker with a suitable delay in order to induce the desired PVT at the victim. The receiver operating characteristic (ROC) curve is obtained for the hypothesis testing problem under the two attacks.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Resolution Enhancement and Interference Suppression for Planetary Radar Sounders.\n \n \n \n \n\n\n \n Raguso, M. C.; Piazzo, L.; Mastrogiuseppe, M.; Seu, R.; and Orosei, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1212-1216, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ResolutionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553468,\n  author = {M. C. Raguso and L. Piazzo and M. Mastrogiuseppe and R. Seu and R. Orosei},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Resolution Enhancement and Interference Suppression for Planetary Radar Sounders},\n  year = {2018},\n  pages = {1212-1216},\n  abstract = {Orbital radar sounders are an effective tool to investigate the interior of planetary bodies. Typically, the sounding signal lies in the High Frequency (HF) or Very High Frequency (VHF) band, allowing a good ground penetration but a limited range resolution. Moreover, Electromagnetic Interference (EMI) may affect the system, increasing the noise in the radar products. In this paper, we propose methods to enhance the resolution and to suppress the EMI, exploiting a linear prediction based approach. Using simulated data, we investigate the methods' performance and the parameter settings. Finally, we apply the methods to data of the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS).},\n  keywords = {aerospace instrumentation;electromagnetic interference;ground penetrating radar;interference suppression;Mars;planetary atmospheres;planetary remote sensing;planetary surfaces;remote sensing by radar;space vehicles;resolution enhancement;orbital radar sounders;planetary bodies;sounding signal;good ground penetration;EMI;radar products;linear prediction based approach;interference suppression;planetary radar sounders;electromagnetic interference;MARSIS;Mars Advanced Radar for Subsurface and Ionosphere Sounding;Electromagnetic interference;Signal resolution;Bandwidth;Planetary orbits;Spaceborne radar},\n  doi = {10.23919/EUSIPCO.2018.8553468},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438074.pdf},\n}\n\n
\n
\n\n\n
\n Orbital radar sounders are an effective tool to investigate the interior of planetary bodies. Typically, the sounding signal lies in the High Frequency (HF) or Very High Frequency (VHF) band, allowing a good ground penetration but a limited range resolution. Moreover, Electromagnetic Interference (EMI) may affect the system, increasing the noise in the radar products. In this paper, we propose methods to enhance the resolution and to suppress the EMI, exploiting a linear prediction based approach. Using simulated data, we investigate the methods' performance and the parameter settings. Finally, we apply the methods to data of the Mars Advanced Radar for Subsurface and Ionosphere Sounding (MARSIS).\n
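The linear-prediction idea behind this kind of bandwidth extrapolation can be illustrated on a toy signal: fit autoregressive coefficients to the measured samples by least squares and predict beyond them. This is only a generic sketch, not the paper's actual MARSIS processing chain:

```python
import numpy as np

def lp_extrapolate(x, order, n_extra):
    """Fit x[n] ~ sum_k a[k] * x[n-1-k] by least squares, then extrapolate."""
    x = np.asarray(x, dtype=float)
    # Each row holds the `order` samples preceding the target sample
    M = np.array([x[i:i + order][::-1] for i in range(len(x) - order)])
    a, *_ = np.linalg.lstsq(M, x[order:], rcond=None)
    out = list(x)
    for _ in range(n_extra):
        out.append(np.dot(a, out[-order:][::-1]))
    return np.array(out)
```

A noiseless sinusoid satisfies an exact order-2 recurrence, so an order-2 predictor extrapolates it without error; on real sounder data the prediction order and noise handling matter, which is part of what the paper investigates.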
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Low-Rate Farrow Structure with Discrete-Lowpass and Polynomial Support for Audio Resampling.\n \n \n \n \n\n\n \n Chinaev, A.; Thüne, P.; and Enzner, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 475-479, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Low-RatePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553469,\n  author = {A. Chinaev and P. Thüne and G. Enzner},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Low-Rate Farrow Structure with Discrete-Lowpass and Polynomial Support for Audio Resampling},\n  year = {2018},\n  pages = {475-479},\n  abstract = {Arbitrary sampling rate conversion (ASRC) of audio signals currently receives a lot of new attention due to its potential for aligning autonomous recording clients in ad-hoc acoustic sensor networks. State-of-the-art for digital-to-digital ASRC has been outlined in terms of a two-stage architecture comprising a) synchronous lowpass interpolation by an integer factor and b) subsequent asynchronous polynomial interpolation. While this composite ASRC achieves high resampling accuracy, its mere disadvantage is the intermediate oversampling to high rate. In our paper we thus fuse the high-rate discrete-time lowpass interpolation with a polynomial Farrow filter into a monolithic FIR filter form. We then show that decimation of the output rate effectively yields a polyphase set of Farrow filters with quasi-fixed coefficients. Simulations with broadband multitone signals confirm that the proposed low-rate monolithic ASRC achieves the same performance as the conventional composite resampling in terms of signal-to-interpolation-noise ratio. 
The main practical benefit of quasi-fixed coefficients of the system stands out when resampling by a small factor is desired, i.e., when the input rate almost matches the output rate - a scenario to be encountered in acoustic sensor networks.},\n  keywords = {audio signal processing;FIR filters;interpolation;low-pass filters;polynomials;signal sampling;audio resampling;arbitrary sampling rate conversion;audio signals;ad-hoc acoustic sensor networks;digital-to-digital ASRC;two-stage architecture comprising;synchronous lowpass interpolation;integer factor;composite ASRC;high-rate discrete-time lowpass interpolation;polynomial Farrow filter;monolithic FIR filter form;quasifixed coefficients;broadband multitone signals;low-rate monolithic ASRC;signal-to-interpolation-noise ratio;low-rate Farrow structure;polynomial support;autonomous recording clients;composite resampling;subsequent asynchronous polynomial interpolation;Interpolation;Delays;Signal processing;Ad hoc networks;Acoustic sensors;Indexes;Switches;Asynchronous sampling rate conversion;sampling and interpolation;synchronization of ad-hoc acoustic sensor networks},\n  doi = {10.23919/EUSIPCO.2018.8553469},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438246.pdf},\n}\n\n
\n
\n\n\n
\n Arbitrary sampling rate conversion (ASRC) of audio signals is currently receiving renewed attention due to its potential for aligning autonomous recording clients in ad-hoc acoustic sensor networks. The state of the art for digital-to-digital ASRC is a two-stage architecture comprising a) synchronous lowpass interpolation by an integer factor and b) subsequent asynchronous polynomial interpolation. While this composite ASRC achieves high resampling accuracy, its main disadvantage is the intermediate oversampling to a high rate. In this paper we thus fuse the high-rate discrete-time lowpass interpolation with a polynomial Farrow filter into a monolithic FIR filter form. We then show that decimation of the output rate effectively yields a polyphase set of Farrow filters with quasi-fixed coefficients. Simulations with broadband multitone signals confirm that the proposed low-rate monolithic ASRC achieves the same performance as conventional composite resampling in terms of signal-to-interpolation-noise ratio. The main practical benefit of the quasi-fixed coefficients stands out when resampling by a small factor is desired, i.e., when the input rate almost matches the output rate - a scenario encountered in acoustic sensor networks.\n
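For orientation on what a "Farrow filter" computes: the output is a polynomial in the fractional delay mu, evaluated by Horner's rule over a few fixed FIR branch outputs. A minimal cubic-Lagrange example is sketched below; this is a textbook Farrow structure, not the low-rate polyphase design proposed in the paper:

```python
def farrow_cubic(x, n, mu):
    """Cubic Lagrange interpolation between x[n] and x[n+1] in Farrow form.

    The four branch outputs c0..c3 come from fixed FIR filters (coefficients
    derived offline from the Lagrange basis); only the Horner evaluation
    depends on the fractional delay mu in [0, 1).
    """
    xm1, x0, x1, x2 = x[n - 1], x[n], x[n + 1], x[n + 2]
    c0 = x0
    c1 = x1 - xm1 / 3 - x0 / 2 - x2 / 6
    c2 = (xm1 + x1) / 2 - x0
    c3 = (x2 - xm1) / 6 + (x0 - x1) / 2
    return ((c3 * mu + c2) * mu + c1) * mu + c0
```

Because the branch filters are fixed and only mu changes per output sample, the structure suits asynchronous resampling where the instantaneous rate ratio drifts.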
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sampling a Noisy Multiple Output Channel to Maximize the Capacity.\n \n \n \n \n\n\n \n Solodky, G.; and Feder, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 445-449, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SamplingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553470,\n  author = {G. Solodky and M. Feder},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sampling a Noisy Multiple Output Channel to Maximize the Capacity},\n  year = {2018},\n  pages = {445-449},\n  abstract = {This paper deals with an extension of Papoulis' generalized sampling expansion (GSE) to a case where noise is added before sampling and the total sampling rate may be higher than the Nyquist rate. We look for the best sampling scheme that maximizes the capacity of the sampled channel between the input signal and the M sampled outputs signals, where the channels are composed of all-pass linear time-invariant (LTI) systems with additive Gaussian white noise. For the case where the total rate is between M-1 and M times the Nyquist rate, the optimal scheme samples M-1 outputs at Nyquist rate and the last output at the remaining rate. When M = 2 the optimal performance can also be attained by an equally sampled scheme under some condition on the LTI systems. Surprisingly, equal sampling is suboptimal in general. Nevertheless, for some total sampling rates where there is an integer relation between the number of channels and the total rate, a uniform sampling achieves the optimal performance. 
Finally, we discuss the relation between maximizing the capacity and minimizing the mean-square error.},\n  keywords = {Gaussian noise;mean square error methods;optimisation;signal reconstruction;signal sampling;white noise;noisy multiple output channel;Papoulis' generalized sampling expansion;total sampling rate;Nyquist rate;outputs signals;all-pass linear time-invariant systems;additive Gaussian white noise;optimal scheme samples M-1 outputs;optimal performance;equally sampled scheme;equal sampling;uniform sampling;Mathematical model;Linear systems;Eigenvalues and eigenfunctions;Signal to noise ratio;Europe;Frequency-domain analysis},\n  doi = {10.23919/EUSIPCO.2018.8553470},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433468.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with an extension of Papoulis' generalized sampling expansion (GSE) to the case where noise is added before sampling and the total sampling rate may be higher than the Nyquist rate. We look for the sampling scheme that maximizes the capacity of the sampled channel between the input signal and the M sampled output signals, where the channels are composed of all-pass linear time-invariant (LTI) systems with additive white Gaussian noise. For the case where the total rate is between M-1 and M times the Nyquist rate, the optimal scheme samples M-1 outputs at the Nyquist rate and the last output at the remaining rate. When M = 2, the optimal performance can also be attained by an equally sampled scheme under some condition on the LTI systems. Surprisingly, equal sampling is suboptimal in general. Nevertheless, for some total sampling rates where there is an integer relation between the number of channels and the total rate, uniform sampling achieves the optimal performance. Finally, we discuss the relation between maximizing the capacity and minimizing the mean-square error.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n 3D Localization of Multiple Simultaneous Speakers with Discrete Wavelet Transform and Proposed 3D Nested Microphone Array.\n \n \n \n \n\n\n \n Dehghan Firoozabadi, A.; Durney, H.; Soto, I.; and Olave, M. S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 356-360, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"3DPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553471,\n  author = {A. {Dehghan Firoozabadi} and H. Durney and I. Soto and M. S. Olave},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {3D Localization of Multiple Simultaneous Speakers with Discrete Wavelet Transform and Proposed 3D Nested Microphone Array},\n  year = {2018},\n  pages = {356-360},\n  abstract = {Multiple sound source localization is one of the important topic in speech processing. GCC function is used as a traditional algorithm for sound source localization. This function estimates DOA for multiple speakers by calculation the cross-correlation between microphone signals but its accuracy decreases in adverse conditions. The aim of proposed method in this paper is localization of multiple simultaneous speakers in undesirable condition. The proposed method is based on novel 3D nested microphone array in combination with obtained information of Discrete Wavelet Transform (DWT) and subband processing. The proposed 3D nested microphone array prepares the condition for 3D localization and eliminates the spatial aliasing between microphone signals. Also, we propose the DWT for extraction the information of speech signal. Since, the spectral information of speech signal concentrates on low frequencies, we propose a structure of filter bank based on DWT to increase the frequency resolution on low frequencies. 
The performed evaluation on real and simulated data shows the superiority of our proposed method in comparison with Fullband and subband processing with uniform filters and uniform microphone array.},\n  keywords = {acoustic radiators;acoustic signal processing;array signal processing;channel bank filters;direction-of-arrival estimation;discrete wavelet transforms;microphone arrays;speech processing;multiple sound source localization;speech processing;GCC function;microphone signals;adverse conditions;multiple simultaneous speakers;Discrete Wavelet Transform;DWT;uniform microphone array;speech signal;3D nested microphone array;subband processing;frequency resolution;fullband processing;uniform filters;cross-correlation;Microphone arrays;Three-dimensional displays;Array signal processing;Discrete wavelet transforms;Simultaneous sound source localization;Wavelet Transform;Generalized Cross-Correlation;Nested microphone array;Subband processing},\n  doi = {10.23919/EUSIPCO.2018.8553471},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433328.pdf},\n}\n\n
\n
\n\n\n
\n Multiple sound source localization is one of the important topics in speech processing. The generalized cross-correlation (GCC) function is a traditional algorithm for sound source localization: it estimates the DOA of multiple speakers by calculating the cross-correlation between microphone signals, but its accuracy decreases in adverse conditions. The aim of the method proposed in this paper is the localization of multiple simultaneous speakers under such undesirable conditions. The proposed method is based on a novel 3D nested microphone array combined with information obtained from the Discrete Wavelet Transform (DWT) and subband processing. The proposed 3D nested microphone array enables 3D localization and eliminates spatial aliasing between microphone signals. We also propose the DWT for extracting information from the speech signal. Since the spectral information of the speech signal is concentrated at low frequencies, we propose a DWT-based filter bank structure that increases the frequency resolution at low frequencies. The evaluation performed on real and simulated data shows the superiority of the proposed method over fullband and subband processing with uniform filters and a uniform microphone array.\n
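The GCC baseline mentioned in the abstract is commonly implemented with PHAT weighting: whiten the cross-spectrum of two microphone signals and pick the lag of the correlation peak as the time-difference of arrival. A minimal sketch (generic GCC-PHAT, not the authors' nested-array/DWT method) under the assumption of a two-microphone setup:

```python
import numpy as np

def gcc_phat(sig, ref, fs):
    """Estimate the delay of `sig` relative to `ref` (seconds) via GCC-PHAT."""
    n = len(sig) + len(ref)                 # zero-pad for linear correlation
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(cc) - max_shift) / fs # lag of the correlation peak
```

Given the delay and the microphone spacing, the DOA follows from simple geometry; the paper's contribution lies in making such estimates robust via the nested array and subband DWT processing.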
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Evaluation of Binaural Noise Reduction Methods in Terms of Intelligibility and Perceived Localization.\n \n \n \n \n\n\n \n Koutrouvelis, A. I.; Hendriks, R. C.; Heusdens, R.; Jensen, J.; and Guo, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2429-2433, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EvaluationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553472,\n  author = {A. I. Koutrouvelis and R. C. Hendriks and R. Heusdens and J. Jensen and M. Guo},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Evaluation of Binaural Noise Reduction Methods in Terms of Intelligibility and Perceived Localization},\n  year = {2018},\n  pages = {2429-2433},\n  abstract = {In this paper, we perceptually evaluate two recently proposed binaural multi-microphone speech enhancement methods in terms of intelligibility improvement and binaural-cue preservation. We compare these two methods with the well-known binaural minimum variance distortionless response (BMVDR) method. More specifically, we measure the 50% speech reception threshold, and the localization error of all dominant point sources in three different acoustic scenes. The listening tests are divided into a parameter selection phase and a testing phase. The parameter selection phase is used to select the algorithms' parameters based on one acoustic scene. In the testing phase, the two methods are evaluated in two other acoustic scenes in order to examine their robustness. Both methods achieve significantly better intelligiblity compared to the unprocessed scene, and slightly worse intelligibility than the BMVDR method. 
However, unlike the BMVDR method which severely distorts the binaural cues of all interferers, the new methods achieve localization errors which are not significantly different compared to those of the unprocessed scene.},\n  keywords = {acoustic signal processing;handicapped aids;hearing aids;interference (signal);microphones;signal denoising;speech enhancement;speech intelligibility;speech reception threshold;binaural cues;BMVDR method;acoustic scene;parameter selection phase;dominant point sources;localization error;binaural minimum variance distortionless response method;multimicrophone speech enhancement methods;binaural noise reduction methods;Testing;Microphones;Time-frequency analysis;Noise reduction;Signal to noise ratio;Speech enhancement;Binaural beamforming;binaural cues;intelligibility;localization},\n  doi = {10.23919/EUSIPCO.2018.8553472},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570432812.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we perceptually evaluate two recently proposed binaural multi-microphone speech enhancement methods in terms of intelligibility improvement and binaural-cue preservation. We compare these two methods with the well-known binaural minimum variance distortionless response (BMVDR) method. More specifically, we measure the 50% speech reception threshold, and the localization error of all dominant point sources in three different acoustic scenes. The listening tests are divided into a parameter selection phase and a testing phase. The parameter selection phase is used to select the algorithms' parameters based on one acoustic scene. In the testing phase, the two methods are evaluated in two other acoustic scenes in order to examine their robustness. Both methods achieve significantly better intelligibility compared to the unprocessed scene, and slightly worse intelligibility than the BMVDR method. However, unlike the BMVDR method, which severely distorts the binaural cues of all interferers, the new methods achieve localization errors which are not significantly different from those of the unprocessed scene.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Online Recovery of Time- varying Signals Defined over Dynamic Graphs.\n \n \n \n\n\n \n Di Lorenzo, P.; and Ceci, E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 131-135, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553473,\n  author = {P. {Di Lorenzo} and E. Ceci},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Online Recovery of Time- varying Signals Defined over Dynamic Graphs},\n  year = {2018},\n  pages = {131-135},\n  abstract = {The goal of this work is to devise least mean square (LMS) strategies for online recovery of time-varying signals defined over dynamic graphs, which are observed over a (randomly) time-varying subset of vertices. We also derive a mean-square analysis illustrating the effect of graph variations and sampling on the reconstruction performance. Finally, an optimization strategy is developed in order to design the sampling probability at each node in the graph, with the aim of finding the best tradeoff between steady-state performance, graph sampling rate, and learning rate of the proposed method. Numerical simulations carried out over both synthetic and real data illustrate the good performance of the proposed learning strategies.},\n  keywords = {graph theory;learning (artificial intelligence);least mean squares methods;network theory (graphs);optimisation;probability;sampling methods;signal reconstruction;online recovery;dynamic graphs;time-varying signals;mean-square analysis;graph variations;reconstruction performance;optimization strategy;sampling probability;steady-state performance;graph sampling rate;learning strategies;least mean square strategies;Perturbation methods;Heuristic algorithms;Laplace equations;Signal processing;Indexes;Eigenvalues and eigenfunctions;Matrix decomposition;Graph signal processing;online learning;sampling on graphs;time-varying graphs},\n  doi = {10.23919/EUSIPCO.2018.8553473},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n The goal of this work is to devise least mean square (LMS) strategies for online recovery of time-varying signals defined over dynamic graphs, which are observed over a (randomly) time-varying subset of vertices. We also derive a mean-square analysis illustrating the effect of graph variations and sampling on the reconstruction performance. Finally, an optimization strategy is developed in order to design the sampling probability at each node in the graph, with the aim of finding the best tradeoff between steady-state performance, graph sampling rate, and learning rate of the proposed method. Numerical simulations carried out over both synthetic and real data illustrate the good performance of the proposed learning strategies.\n
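A minimal, noiseless, fixed-graph sketch of an LMS-style recovery of a bandlimited graph signal from a sampled vertex subset is given below (the graph, bandwidth, and sampling set are all illustrative; the paper's method additionally handles observation noise, random sampling, and graph variations):

```python
import numpy as np

rng = np.random.default_rng(0)
N, F = 20, 3                                    # vertices, signal bandwidth
A = (rng.random((N, N)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                  # random symmetric adjacency
L = np.diag(A.sum(1)) - A                       # combinatorial Laplacian
_, U = np.linalg.eigh(L)                        # eigenvectors, ascending frequency
B = U[:, :F] @ U[:, :F].T                       # projector onto low-frequency subspace
x_true = B @ rng.standard_normal(N)             # bandlimited graph signal
D = np.diag((np.arange(N) < 15).astype(float))  # sample the first 15 vertices
x, mu = np.zeros(N), 1.0
for _ in range(5000):
    y = D @ x_true                              # (noiseless) vertex samples
    x = x + mu * B @ (D @ (y - x))              # project innovation onto the band
rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

When the sampling set and the bandlimited subspace satisfy the usual identifiability condition, the iterate converges to the true signal on all vertices, including the unsampled ones.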
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Online Prediction of Robot to Human Handover Events Using Vibrations.\n \n \n \n \n\n\n \n Singh, H.; Controzzi, M.; Cipriani, C.; Di Caterina, G.; Petropoulakis, L.; and Soraghan, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 687-691, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnlinePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553474,\n  author = {H. Singh and M. Controzzi and C. Cipriani and G. {Di Caterina} and L. Petropoulakis and J. Soraghan},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Online Prediction of Robot to Human Handover Events Using Vibrations},\n  year = {2018},\n  pages = {687-691},\n  abstract = {One of the main issues for a robotic passer is to detect the onset of a handover, in order to avoid the object from being released when the human partner is not ready or if some impact occurs. This paper presents the methodology for a robotic passer, that is potentially able to estimate the interaction forces by the receiver on the object, thus to achieve fluent and safe handovers. The proposed system uses a vibrator that energizes the object and an accelerometer that monitors vibration propagation through the object during the handover. We focused on the machine-learning technique to classify between four states during object handover. A neural network was trained for these four states and tested online. In experimental trials an accuracy of 85.2 % and 93.9% were obtained respectively for four classes and two classes of actions by a neural network classifier.},\n  keywords = {accelerometers;human-robot interaction;learning (artificial intelligence);neural nets;robot dynamics;vibrations;robotic passer;vibration propagation;object handover;human handover events;vibrations;machine-learning;online prediction;robot handover events;neural network;Autonomous;Hando ver;Machine Learning;Neural networks},\n  doi = {10.23919/EUSIPCO.2018.8553474},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436057.pdf},\n}\n\n
\n
\n\n\n
\n One of the main issues for a robotic passer is to detect the onset of a handover, in order to prevent the object from being released when the human partner is not ready or if some impact occurs. This paper presents a methodology for a robotic passer that is potentially able to estimate the interaction forces exerted by the receiver on the object, and thus achieve fluent and safe handovers. The proposed system uses a vibrator that energizes the object and an accelerometer that monitors vibration propagation through the object during the handover. We focus on a machine-learning technique to classify four states during object handover. A neural network was trained on these four states and tested online. In experimental trials, accuracies of 85.2% and 93.9% were obtained with the neural network classifier for four and two classes of actions, respectively.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Prediction Methods for Time Evolving Dyadic Processes.\n \n \n \n \n\n\n \n Ntemi, M.; and Kotropoulos, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2588-2592, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PredictionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553475,\n  author = {M. Ntemi and C. Kotropoulos},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Prediction Methods for Time Evolving Dyadic Processes},\n  year = {2018},\n  pages = {2588-2592},\n  abstract = {Stock prices evolve dynamically through time. Capturing their changes is crucial in order to make accurate predictions. In addition, it is well-known that the probability density function of stock prices exhibits heavy tails and there is a large degree of uncertainty in stock price evolution. Building on the aforementioned facts, a robust collaborative Kalman filter is proposed for stock price prediction within the context of time-evolving dyadic processes, where the prediction error is treated as a heavy-tailed noise whose variance is a properly modeled random variable. Variational approximation is exploited to make posterior inference tractable. The proposed model captures the volatility of stock prices through time, yielding more accurate predictions than the state-of-the-art and enabling the consistent tracking of the extreme values of stock prices.},\n  keywords = {approximation theory;financial data processing;Kalman filters;learning (artificial intelligence);matrix decomposition;pricing;probability;stock markets;stock price evolution;stock price prediction;time-evolving dyadic processes;prediction error;prediction methods;heavy-tailed noise;variance;stock price volatility;robust collaborative Kalman filter;Covariance matrices;Gold;Collaboration;Kalman filters;Random variables;Dynamics;Predictive models},\n  doi = {10.23919/EUSIPCO.2018.8553475},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437095.pdf},\n}\n\n
\n
\n\n\n
\n Stock prices evolve dynamically through time. Capturing their changes is crucial in order to make accurate predictions. In addition, it is well-known that the probability density function of stock prices exhibits heavy tails and there is a large degree of uncertainty in stock price evolution. Building on the aforementioned facts, a robust collaborative Kalman filter is proposed for stock price prediction within the context of time-evolving dyadic processes, where the prediction error is treated as a heavy-tailed noise whose variance is a properly modeled random variable. Variational approximation is exploited to make posterior inference tractable. The proposed model captures the volatility of stock prices through time, yielding more accurate predictions than the state-of-the-art and enabling the consistent tracking of the extreme values of stock prices.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Analysis vs Synthesis-based Regularization for Combined Compressed Sensing and Parallel MRI Reconstruction at 7 Tesla.\n \n \n \n \n\n\n \n Cherkaoui, H.; Gueddari, L. E.; Lazarus, C.; Grigis, A.; Poupon, F.; Vignaud, A.; Farrens, S.; Starck, J. -.; and Ciuciu, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 36-40, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnalysisPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553476,\n  author = {H. Cherkaoui and L. E. Gueddari and C. Lazarus and A. Grigis and F. Poupon and A. Vignaud and S. Farrens and J. -. Starck and P. Ciuciu},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Analysis vs Synthesis-based Regularization for Combined Compressed Sensing and Parallel MRI Reconstruction at 7 Tesla},\n  year = {2018},\n  pages = {36-40},\n  abstract = {Compressed Sensing (CS) has allowed a significant reduction of acquisition times in MRI, especially in the high spatial resolution (e.g., 400 μm) context. Nonlinear CS reconstruction usually relies on analysis (e.g., Total Variation) or synthesis (e.g., wavelet) based priors and ℓ1 regularization to promote sparsity in the transform domain. Here, we compare the performance of several orthogonal wavelet transforms with those of tight frames for MR image reconstruction in the CS setting combined with parallel imaging (multiple receiver coil). We show that overcomplete dictionaries such as the fast curvelet transform provide improved image quality as compared to orthogonal transforms. For doing so, we rely on an analysis-based formulation where the underlying ℓ1 regularized criterion is minimized using a primal dual splitting method (e.g., Condat-Vũ algorithm). Validation is performed on ex-vivo baboon brain T2* MRI data collected at 7 Tesla and retrospectively under-sampled using non-Cartesian schemes (radial and Sparkling). We show that multiscale analysis priors based on tight frames instead of orthogonal transforms achieve better image quality (pSNR, SSIM) in particular at low signal-to-noise ratio.},\n  keywords = {biomedical MRI;compressed sensing;curvelet transforms;image reconstruction;image resolution;image sampling;medical image processing;wavelet transforms;spatial resolution;image quality;fast curvelet transform;ℓ1 regularized criterion;ex-vivo baboon brain T2* MRI data;nonCartesian scheme;signal-to-noise ratio;multiscale analysis priors;MRI data;Condat-Vũ algorithm;primal dual splitting method;analysis-based formulation;overcomplete dictionaries;multiple receiver coil;parallel imaging;CS setting;MR image reconstruction;tight frames;orthogonal wavelet transforms;transform domain;Total Variation;nonlinear CS reconstruction;acquisition times;parallel MRI reconstruction;combined compressed Sensing;synthesis-based regularization;magnetic flux density 7 tesla;Magnetic resonance imaging;Image reconstruction;Image quality;Wavelet transforms;Signal processing algorithms;Signal to noise ratio},\n  doi = {10.23919/EUSIPCO.2018.8553476},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435807.pdf},\n}\n\n
\n
\n\n\n
\n Compressed Sensing (CS) has allowed a significant reduction of acquisition times in MRI, especially in the high spatial resolution (e.g., 400 μm) context. Nonlinear CS reconstruction usually relies on analysis (e.g., Total Variation) or synthesis (e.g., wavelet) based priors and ℓ1 regularization to promote sparsity in the transform domain. Here, we compare the performance of several orthogonal wavelet transforms with those of tight frames for MR image reconstruction in the CS setting combined with parallel imaging (multiple receiver coil). We show that overcomplete dictionaries such as the fast curvelet transform provide improved image quality as compared to orthogonal transforms. For doing so, we rely on an analysis-based formulation where the underlying ℓ1 regularized criterion is minimized using a primal dual splitting method (e.g., Condat-Vũ algorithm). Validation is performed on ex-vivo baboon brain T2* MRI data collected at 7 Tesla and retrospectively under-sampled using non-Cartesian schemes (radial and Sparkling). We show that multiscale analysis priors based on tight frames instead of orthogonal transforms achieve better image quality (pSNR, SSIM) in particular at low signal-to-noise ratio.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Proximal Method for Convolutional Dictionary Learning with Convergence Property.\n \n \n \n \n\n\n \n Peng, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1715-1719, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553477,\n  author = {G. Peng},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Proximal Method for Convolutional Dictionary Learning with Convergence Property},\n  year = {2018},\n  pages = {1715-1719},\n  abstract = {Convolutional sparse coding (CSC) is superior in representing signals, and to obtain the best performance of CSC, the dictionary is usually learned from data. The so-called convolution dictionary learning (CDL) problem is therefore formulated for the purpose. Most of the solvers for CDL alternately update the coefficients and dictionary in an iterative manner, and as a consequence, numerous redundant iterations incur slow speed in achieving convergence. Moreover, their convergence properties can hardly be analyzed even though the ℓ0 sparse inducing function is approximated by the convex ℓ1 norm. In this article, we propose an algorithm which directly deals with the non-convex non-smooth ℓ0 constraint and provides a sound convergence property. The experimental results show that the proposed method achieves the convergence point in less time and with a smaller final function value compared to the existing convolutional dictionary learning algorithms.},\n  keywords = {convex programming;convolution;iterative methods;learning (artificial intelligence);signal representation;smoothing methods;existing convolutional dictionary learning algorithms;convergence point;sound convergence property;nonconvex nonsmooth ℓ;numerous redundant iterations;iterative manner;CDL;convolution dictionary learning problem;CSC;convolutional sparse coding;Dictionaries;Convergence;Convolution;Signal processing algorithms;Convolutional codes;Approximation algorithms;Mathematical model},\n  doi = {10.23919/EUSIPCO.2018.8553477},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437050.pdf},\n}\n\n
\n
\n\n\n
\n Convolutional sparse coding (CSC) is superior in representing signals, and to obtain the best performance of CSC, the dictionary is usually learned from data. The so-called convolution dictionary learning (CDL) problem is therefore formulated for the purpose. Most of the solvers for CDL alternately update the coefficients and dictionary in an iterative manner, and as a consequence, numerous redundant iterations incur slow speed in achieving convergence. Moreover, their convergence properties can hardly be analyzed even though the ℓ0 sparse inducing function is approximated by the convex ℓ1 norm. In this article, we propose an algorithm which directly deals with the non-convex non-smooth ℓ0 constraint and provides a sound convergence property. The experimental results show that the proposed method achieves the convergence point in less time and with a smaller final function value compared to the existing convolutional dictionary learning algorithms.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Evolutionary Resampling for Multi-Target Tracking using Probability Hypothesis Density Filter.\n \n \n \n \n\n\n \n Halimeh, M. M.; Brendel, A.; and Kellermann, W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 642-646, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EvolutionaryPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553478,\n  author = {M. M. Halimeh and A. Brendel and W. Kellermann},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Evolutionary Resampling for Multi-Target Tracking using Probability Hypothesis Density Filter},\n  year = {2018},\n  pages = {642-646},\n  abstract = {A resampling scheme is proposed for use with Sequential Monte Carlo (SMC)-based Probability Hypothesis Density (PHD) filters. It consists of two steps: first, regions of interest are identified; then, an evolutionary resampling is applied to each region. Applying resampling locally corresponds to treating each target individually, while the evolutionary resampling introduces a memory to a group of particles, increasing the robustness of the estimation against noise outliers. The proposed approach is compared to the original SMC-PHD filter for tracking multiple targets in a deterministically moving targets scenario and a noisy motion scenario. In both cases, the proposed approach provides more accurate estimates.},\n  keywords = {evolutionary computation;filtering theory;Monte Carlo methods;probability;target tracking;evolutionary resampling;multitarget tracking;Probability Hypothesis Density filter;resampling scheme;sequential Monte Carlo;SMC-PHD filter;moving target scenario;Radio frequency;Target tracking;Bayes methods;Noise measurement;Signal processing;Density measurement;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553478},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437926.pdf},\n}\n\n
\n
\n\n\n
\n A resampling scheme is proposed for use with Sequential Monte Carlo (SMC)-based Probability Hypothesis Density (PHD) filters. It consists of two steps: first, regions of interest are identified; then, an evolutionary resampling is applied to each region. Applying resampling locally corresponds to treating each target individually, while the evolutionary resampling introduces a memory to a group of particles, increasing the robustness of the estimation against noise outliers. The proposed approach is compared to the original SMC-PHD filter for tracking multiple targets in a deterministically moving targets scenario and a noisy motion scenario. In both cases, the proposed approach provides more accurate estimates.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Anomaly Detection Based on Feature Reconstruction from Subsampled Audio Signals.\n \n \n \n \n\n\n \n Kawaguchi, Y.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2524-2528, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnomalyPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553480,\n  author = {Y. Kawaguchi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Anomaly Detection Based on Feature Reconstruction from Subsampled Audio Signals},\n  year = {2018},\n  pages = {2524-2528},\n  abstract = {We aim to reduce the cost of sound monitoring for maintaining machinery by reducing the sampling rate, i.e., sub-Nyquist sampling. Monitoring based on sub-Nyquist sampling requires two sub-systems: a sub-system on-site for sampling machinery sounds at a low rate and a sub-system off-site for detecting anomalies from the subsampled signal. This paper proposes a feature reconstruction method for enabling anomaly detection from the subsampled signal. The method applies a long short-term memory (LSTM)-based network for reconstructing features. The novelty of the proposed network is that it receives the subsampled time-domain signal as input directly and reconstructs the feature vector of the original signal. Experimental results indicate that our method is suitable for anomaly detection from the subsampled signal.},\n  keywords = {audio signal processing;feature extraction;filtering theory;signal reconstruction;signal sampling;long short-term memory-based network;feature reconstruction method;machinery sounds;sub-Nyquist sampling;sampling rate;sound monitoring;subsampled audio signals;feature vector;subsampled time-domain signal;subsampled signal;anomaly detection;Anomaly detection;Time-domain analysis;Feature extraction;Monitoring;Training;Machinery;Hidden Markov models;machine condition monitoring;sub-Nyquist sampling;neural network;long short-term memory (LSTM)},\n  doi = {10.23919/EUSIPCO.2018.8553480},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439245.pdf},\n}\n\n
\n
\n\n\n
\n We aim to reduce the cost of sound monitoring for maintaining machinery by reducing the sampling rate, i.e., sub-Nyquist sampling. Monitoring based on sub-Nyquist sampling requires two sub-systems: a sub-system on-site for sampling machinery sounds at a low rate and a sub-system off-site for detecting anomalies from the subsampled signal. This paper proposes a feature reconstruction method for enabling anomaly detection from the subsampled signal. The method applies a long short-term memory (LSTM)-based network for reconstructing features. The novelty of the proposed network is that it receives the subsampled time-domain signal as input directly and reconstructs the feature vector of the original signal. Experimental results indicate that our method is suitable for anomaly detection from the subsampled signal.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Plug and Play Bayesian Algorithm for Solving Myope Inverse Problems.\n \n \n \n \n\n\n \n Chaari, L.; Tourneret, J.; and Batatia, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 737-741, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553481,\n  author = {L. Chaari and J. Tourneret and H. Batatia},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Plug and Play Bayesian Algorithm for Solving Myope Inverse Problems},\n  year = {2018},\n  pages = {737-741},\n  abstract = {The emergence of efficient algorithms in variational and Bayesian frameworks brought significant advances to the field of inverse problems. However, such problems remain challenging when the observation operator is not perfectly known. In this paper, we propose a Bayesian Plug-and-Play (PP) algorithm for solving a wide range of inverse problems where the signal/image is sparse in the original domain and the observation operator has to be estimated. The principle consists of plugging in the prior considered for the target observation operator while keeping the same algorithm. The proposed method relies on a generic proximal non-smooth sampling scheme. This genericity makes the proposed algorithm novel in the sense that it can be used to solve a wide range of inverse problems. Our method is illustrated on a deblurring problem with an unknown blur operator, where promising results are obtained.},\n  keywords = {Bayes methods;image restoration;inverse problems;sampling methods;myope inverse problems;deblurring problem;generic proximal non-smooth sampling scheme;variational frameworks;plug and play Bayesian algorithm;Bayesian frameworks;Bayes methods;Signal processing algorithms;Convergence;Deconvolution;Standards;Europe;MCMC;ns-HMC;proximity operator;myope inverse problems},\n  doi = {10.23919/EUSIPCO.2018.8553481},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437124.pdf},\n}\n\n
\n
\n\n\n
\n The emergence of efficient algorithms in variational and Bayesian frameworks brought significant advances to the field of inverse problems. However, such problems remain challenging when the observation operator is not perfectly known. In this paper, we propose a Bayesian Plug-and-Play (PP) algorithm for solving a wide range of inverse problems where the signal/image is sparse in the original domain and the observation operator has to be estimated. The principle consists of plugging in the prior considered for the target observation operator while keeping the same algorithm. The proposed method relies on a generic proximal non-smooth sampling scheme. This genericity makes the proposed algorithm novel in the sense that it can be used to solve a wide range of inverse problems. Our method is illustrated on a deblurring problem with an unknown blur operator, where promising results are obtained.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Light Field Compression of HDCA Images Combining Linear Prediction and JPEG 2000.\n \n \n \n \n\n\n \n Astola, P.; and Tabus, I.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1860-1864, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"LightPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553482,\n  author = {P. Astola and I. Tabus},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Light Field Compression of HDCA Images Combining Linear Prediction and JPEG 2000},\n  year = {2018},\n  pages = {1860-1864},\n  abstract = {Under the JPEG Pleno standardization activities, we have proposed a scheme for lenslet image compression in which the regularities and similarities existing between neighboring angular views were successfully exploited, achieving competitive results in the JPEG Pleno core experiments using lenslet data. This paper proposes improvements on our previous scheme of light field compression, making our approach more suitable for compression of light fields acquired with dense camera arrays, where the disparities between the farthest views can reach several hundred pixels. We review the functional blocks of the compression algorithm, replacing and modifying some of the functionality with more advanced and efficient solutions. Based on our submission to the JPEG Pleno core experiments, we present and discuss our results obtained on the Fraunhofer HDCA dataset. Additionally, we present a new view merging algorithm which substantially increases the PSNR at all bit rates.},\n  keywords = {cameras;data compression;image coding;image denoising;image reconstruction;image resolution;light;light field compression;HDCA images;linear prediction;JPEG 2000;lenslet image compression;JPEG Pleno core experiments;light fields;compression algorithm;PSNR;Encoding;Transform coding;Image coding;Cameras;Quantization (signal)},\n  doi = {10.23919/EUSIPCO.2018.8553482},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437976.pdf},\n}\n\n
\n
\n\n\n
\n Under the JPEG Pleno standardization activities, we have proposed a scheme for lenslet image compression in which the regularities and similarities existing between neighboring angular views were successfully exploited, achieving competitive results in the JPEG Pleno core experiments using lenslet data. This paper proposes improvements on our previous scheme of light field compression, making our approach more suitable for compression of light fields acquired with dense camera arrays, where the disparities between the farthest views can reach several hundred pixels. We review the functional blocks of the compression algorithm, replacing and modifying some of the functionality with more advanced and efficient solutions. Based on our submission to the JPEG Pleno core experiments, we present and discuss our results obtained on the Fraunhofer HDCA dataset. Additionally, we present a new view merging algorithm which substantially increases the PSNR at all bit rates.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Recurrent Neural Networks with Weighting Loss for Early Prediction of Hand Movements.\n \n \n \n \n\n\n \n Koch, P.; Phan, H.; Maass, M.; Katzberg, F.; Mazur, R.; and Mertins, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1152-1156, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RecurrentPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553483,\n  author = {P. Koch and H. Phan and M. Maass and F. Katzberg and R. Mazur and A. Mertins},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Recurrent Neural Networks with Weighting Loss for Early Prediction of Hand Movements},\n  year = {2018},\n  pages = {1152-1156},\n  abstract = {In this work, we propose an approach for early prediction of hand movements using recurrent neural networks (RNNs) and a novel weighting loss. The proposed loss function leverages the outputs of an RNN at different time steps and weights their contributions to the final loss linearly over time steps. Additionally, a sample weighting scheme also constitutes a part of the weighting loss to deal with the scarcity of the samples where a change of hand movements takes place. The experiments conducted with the Ninapro database reveal that our proposed approach not only improves the performance in the early prediction task but also obtains state-of-the-art results in classification of hand movements. These results are especially promising for amputees.},\n  keywords = {electromyography;medical signal processing;prosthetics;recurrent neural nets;signal classification;hand movements;early prediction task;recurrent neural networks;upper limb prostheses;weighting loss;RNN;Ninapro database;electromyography;Training;Databases;Delays;Training data;Signal processing;Reliability;Delay effects;hand movement classification;hand movement prediction;electromyography;early prediction;RNN},\n  doi = {10.23919/EUSIPCO.2018.8553483},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438016.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we propose an approach for early prediction of hand movements using recurrent neural networks (RNNs) and a novel weighting loss. The proposed loss function leverages the outputs of an RNN at different time steps and weights their contributions to the final loss linearly over time steps. Additionally, a sample weighting scheme also constitutes a part of the weighting loss to deal with the scarcity of the samples where a change of hand movements takes place. The experiments conducted with the Ninapro database reveal that our proposed approach not only improves the performance in the early prediction task but also obtains state-of-the-art results in classification of hand movements. These results are especially promising for amputees.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Rate-Distortion Optimized Super-Ray Merging for Light Field Compression.\n \n \n \n \n\n\n \n Su, X.; Rizkallah, M.; Maugey, T.; and Guillemot, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1850-1854, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Rate-DistortionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553485,\n  author = {X. Su and M. Rizkallah and T. Maugey and C. Guillemot},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Rate-Distortion Optimized Super-Ray Merging for Light Field Compression},\n  year = {2018},\n  pages = {1850-1854},\n  abstract = {In this paper, we focus on the problem of compressing dense light fields which represent very large volumes of highly redundant data. In our scheme, view synthesis based on convolutional neural networks (CNN) is used as a first prediction step to exploit inter-view correlation. Super-rays are then constructed to capture the inter-view and spatial redundancy remaining in the prediction residues. To ensure that the super-ray segmentation is highly correlated with the residues to be encoded, the super-rays are computed on synthesized residues (the difference between the four transmitted corner views and their corresponding synthesized views), instead of the synthesized views. Neighboring super-rays are merged into a larger super-ray according to a rate-distortion cost. A 4D shape adaptive discrete cosine transform (SA-DCT) is applied per super-ray on the prediction residues in both the spatial and angular dimensions. A traditional coding scheme consisting of quantization and entropy coding is then used for encoding the transformed coefficients. Experimental results show that the proposed coding scheme outperforms HEVC-based schemes at low bitrate.},\n  keywords = {convolution;data compression;discrete cosine transforms;feedforward neural nets;rate distortion theory;video coding;synthesized residues;transmitted corner views;rate-distortion cost;super-ray merging;light field compression;dense light fields;view synthesis;inter-view correlation;super-ray segmentation;synthesized views;convolutional neural networks;CNN;HEVC-based schemes;SA-DCT;4D shape adaptive discrete cosine transform;Encoding;Image segmentation;Merging;Rate-distortion;Discrete cosine transforms;Quantization (signal);Cameras;Super-Ray (SR) Merging;Rate-Distortion Minimization;Light Field (LF) Compression;Shape-Adaptive DCT (SA-DCT)},\n  doi = {10.23919/EUSIPCO.2018.8553485},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437137.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we focus on the problem of compressing dense light fields which represent very large volumes of highly redundant data. In our scheme, view synthesis based on convolutional neural networks (CNN) is used as a first prediction step to exploit inter-view correlation. Super-rays are then constructed to capture the inter-view and spatial redundancy remaining in the prediction residues. To ensure that the super-ray segmentation is highly correlated with the residues to be encoded, the super-rays are computed on synthesized residues (the difference between the four transmitted corner views and their corresponding synthesized views), instead of the synthesized views. Neighboring super-rays are merged into a larger super-ray according to a rate-distortion cost. A 4D shape adaptive discrete cosine transform (SA-DCT) is applied per super-ray on the prediction residues in both the spatial and angular dimensions. A traditional coding scheme consisting of quantization and entropy coding is then used for encoding the transformed coefficients. Experimental results show that the proposed coding scheme outperforms HEVC-based schemes at low bitrate.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Graph Similarity based on Graph Fourier Distances.\n \n \n \n \n\n\n \n Lagunas, E.; Marques, A. G.; Chatzinotas, S.; and Ottersten, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 877-881, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"GraphPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553486,\n  author = {E. Lagunas and A. G. Marques and S. Chatzinotas and B. Ottersten},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Graph Similarity based on Graph Fourier Distances},\n  year = {2018},\n  pages = {877-881},\n  abstract = {Graph theory is a branch of mathematics which is gaining momentum in the signal processing community due to its ability to efficiently represent data defined on irregular domains. Quantifying the similarity between two different graphs is a crucial operation in many applications involving graphs, such as pattern recognition or social network analysis. This paper focuses on the graph similarity problem from the emerging graph Fourier domain, leveraging the spectral decomposition of the Laplacian matrices. In particular, we focus on the intuition that similar graphs should provide similar frequency representations for a particular graph signal. Similarly, we argue that the frequency responses of a particular graph filter applied to two similar graphs should also be similar. Numerical simulations support the aforementioned hypothesis and show that the proposed graph distances provide a new tool for comparing graphs in the frequency domain.},\n  keywords = {Fourier transforms;graph theory;matrix algebra;pattern recognition;emerging graph Fourier domain;similar frequency representation;graph distances;graph fourier distances;graph theory;graph similarity problem;graph signal;graph filter;graphs;Laplace equations;Signal processing;Matrix decomposition;Eigenvalues and eigenfunctions;Sparse matrices;Frequency-domain analysis},\n  doi = {10.23919/EUSIPCO.2018.8553486},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437373.pdf},\n}\n\n
\n
\n\n\n
\n Graph theory is a branch of mathematics which is gaining momentum in the signal processing community due to its ability to efficiently represent data defined on irregular domains. Quantifying the similarity between two different graphs is a crucial operation in many applications involving graphs, such as pattern recognition or social network analysis. This paper focuses on the graph similarity problem from the emerging graph Fourier domain, leveraging the spectral decomposition of the Laplacian matrices. In particular, we focus on the intuition that similar graphs should provide similar frequency representations for a particular graph signal. Similarly, we argue that the frequency responses of a particular graph filter applied to two similar graphs should also be similar. Numerical simulations support the aforementioned hypothesis and show that the proposed graph distances provide a new tool for comparing graphs in the frequency domain.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robustifying Sequential Multiple Hypothesis Tests in Distributed Sensor Networks.\n \n \n \n \n\n\n \n Leonard, M. R.; Stiefel, M.; Fauß, M.; and Zoubir, A. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1622-1626, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RobustifyingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553488,\n  author = {M. R. Leonard and M. Stiefel and M. Fauß and A. M. Zoubir},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Robustifying Sequential Multiple Hypothesis Tests in Distributed Sensor Networks},\n  year = {2018},\n  pages = {1622-1626},\n  abstract = {We show how to robustify the Consensus + Innovations Matrix Sequential Probability Ratio Test against distributional uncertainties using robust estimators. Furthermore, we propose four distributed sequential tests for multiple hypotheses based on the median, the Hodges-Lehmann estimator, the M-estimator, and the sample myriad. Simulations verify the competitive performance of the proposed approach in comparison to an alternative method based on least favorable densities.},\n  keywords = {distributed sensors;estimation theory;signal processing;statistical distributions;statistical testing;robust estimators;distributed sequential tests;Hodges-Lehmann estimator;M-estimator;distributed sensor networks;distributional uncertainties;matrix sequential probability ratio test;sequential multiple hypothesis test robustification;median;Manganese;Technological innovation;Testing;Contamination;Signal processing;Europe;Uncertainty;sequential detection;multiple hypothesis testing;distributed detection;robustness;distributional uncertainties},\n  doi = {10.23919/EUSIPCO.2018.8553488},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439055.pdf},\n}\n\n
\n
\n\n\n
\n We show how to robustify the Consensus + Innovations Matrix Sequential Probability Ratio Test against distributional uncertainties using robust estimators. Furthermore, we propose four distributed sequential tests for multiple hypotheses based on the median, the Hodges-Lehmann estimator, the M-estimator, and the sample myriad. Simulations verify the competitive performance of the proposed approach in comparison to an alternative method based on least favorable densities.\n
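Two of the four robust location estimators named in the abstract have short concrete definitions. The following is a minimal numpy sketch (not the authors' code) of the Hodges-Lehmann estimator, alongside the median, illustrating why either can replace a sample mean under contamination:

```python
import numpy as np

def hodges_lehmann(x):
    """Hodges-Lehmann estimator: median of all pairwise (Walsh) averages."""
    x = np.asarray(x, dtype=float)
    i, j = np.triu_indices(len(x))           # all index pairs with i <= j
    return float(np.median((x[i] + x[j]) / 2.0))

clean = np.array([1.0, 2.0, 3.0])
dirty = np.array([1.0, 2.0, 3.0, 1000.0])    # one grossly contaminated reading
# np.mean(dirty) is pulled to ~251.5, while np.median(dirty) and
# hodges_lehmann(dirty) stay near the bulk of the data.
```

How these estimators enter the consensus + innovations test itself is specific to the paper and not reproduced here.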
\n\n\n
\n\n\n
Robust Expectation Propagation in Factor Graphs Involving Both Continuous and Binary Variables. Cox, M.; and De Vries, B. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2583-2587, Sep. 2018.
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553490,
  author = {M. Cox and B. {De Vries}},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Robust Expectation Propagation in Factor Graphs Involving Both Continuous and Binary Variables},
  year = {2018},
  pages = {2583-2587},
  abstract = {Factor graphs provide a convenient framework for automatically generating (approximate) Bayesian inference algorithms based on message passing. Examples include the sum-product algorithm (belief propagation), expectation maximization (EM), expectation propagation (EP) and variational message passing (VMP). While these message passing algorithms can be generated automatically, they depend on a library of precomputed message update rules. As a result, the applicability of the factor graph approach depends on the availability of such rules for all involved nodes. This paper describes the probit factor node for linking continuous and binary random variables in a factor graph. We derive (approximate) sum-product message update rules for this node through constrained moment matching, which leads to a robust version of the EP algorithm in which all messages are guaranteed to be proper. This enables automatic Bayesian inference in probabilistic models that involve both continuous and discrete latent variables, without the need for model-specific derivations. The usefulness of the node as a factor graph building block is demonstrated by applying it to perform Bayesian inference in a linear classification model with corrupted class labels.},
  keywords = {Bayes methods;belief networks;expectation-maximisation algorithm;graph theory;inference mechanisms;message passing;automatic Bayesian inference;continuous variables;discrete latent variables;factor graph building block;robust expectation propagation;Bayesian inference algorithms;belief propagation;variational message passing;message passing algorithms;precomputed message update rules;factor graph approach;probit factor node;binary random variables;sum-product message update rules;EP algorithm;Signal processing algorithms;Inference algorithms;Message passing;Approximation algorithms;Hidden Markov models;Probabilistic logic;Probability density function},
  doi = {10.23919/EUSIPCO.2018.8553490},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435402.pdf},
}
\n
\n\n\n
\n Factor graphs provide a convenient framework for automatically generating (approximate) Bayesian inference algorithms based on message passing. Examples include the sum-product algorithm (belief propagation), expectation maximization (EM), expectation propagation (EP) and variational message passing (VMP). While these message passing algorithms can be generated automatically, they depend on a library of precomputed message update rules. As a result, the applicability of the factor graph approach depends on the availability of such rules for all involved nodes. This paper describes the probit factor node for linking continuous and binary random variables in a factor graph. We derive (approximate) sum-product message update rules for this node through constrained moment matching, which leads to a robust version of the EP algorithm in which all messages are guaranteed to be proper. This enables automatic Bayesian inference in probabilistic models that involve both continuous and discrete latent variables, without the need for model-specific derivations. The usefulness of the node as a factor graph building block is demonstrated by applying it to perform Bayesian inference in a linear classification model with corrupted class labels.\n
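For a probit site with a Gaussian cavity, moment matching has a well-known closed form (the standard EP probit update familiar from Gaussian-process classification). The sketch below states those textbook moments as a reference point; the paper's contribution, the constrained matching that guarantees proper messages, is not reproduced here:

```python
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def probit_moment_match(m, v, y):
    """Mean and variance of p(x) ∝ Φ(y·x) · N(x; m, v) by moment matching.

    (m, v): cavity mean/variance; y ∈ {-1, +1}: binary observation.
    Assumes Φ(z) is not vanishingly small.
    """
    s = math.sqrt(1.0 + v)
    z = y * m / s
    r = normal_pdf(z) / normal_cdf(z)
    m_new = m + y * v * r / s
    v_new = v - v * v * r * (z + r) / (1.0 + v)
    return m_new, v_new
```

With a standard-normal cavity and y = +1, the matched mean is 1/√π, a quick sanity check on the formulas.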
\n\n\n
\n\n\n
Bilinear Residual Neural Network for the Identification and Forecasting of Geophysical Dynamics. Fablet, R.; Ouala, S.; and Herzet, C. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1477-1481, Sep. 2018.
\n\n\n\n \n \n \"BilinearPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553492,
  author = {R. Fablet and S. Ouala and C. Herzet},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Bilinear Residual Neural Network for the Identification and Forecasting of Geophysical Dynamics},
  year = {2018},
  pages = {1477-1481},
  abstract = {Due to the increasing availability of large-scale observations and simulation datasets, data-driven representations arise as efficient and relevant computation representations of geophysical systems for a wide range of applications, where model-driven models based on ordinary differential equations remain the state-of-the-art approaches. In this work, we investigate neural networks (NN) as physically-sound data-driven representations of such systems. Viewing Runge-Kutta methods as graphical models, we consider a residual NN architecture and introduce bilinear layers to embed non-linearities which are intrinsic features of geophysical systems. From numerical experiments for synthetic and real datasets, we demonstrate the relevance of the proposed NN-based architecture both in terms of forecasting performance and model identification.},
  keywords = {differential equations;geophysics computing;neural nets;Runge-Kutta methods;forecasting performance;model identification;bilinear residual neural network;geophysical dynamics;large-scale observations;model-driven models;ordinary differential equations;physically-sound data-driven representations;graphical models;residual NN architecture;bilinear layers;computation representations;Runge-Kutta methods;Artificial neural networks;Forecasting;Mathematical model;Predictive models;Computational modeling;Computer architecture;Dynamical systems;neural networks;Bilinear layer;Forecasting;ODE;Runge-Kutta methods},
  doi = {10.23919/EUSIPCO.2018.8553492},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437368.pdf},
}
\n
\n\n\n
\n Due to the increasing availability of large-scale observations and simulation datasets, data-driven representations arise as efficient and relevant computation representations of geophysical systems for a wide range of applications, where model-driven models based on ordinary differential equations remain the state-of-the-art approaches. In this work, we investigate neural networks (NN) as physically-sound data-driven representations of such systems. Viewing Runge-Kutta methods as graphical models, we consider a residual NN architecture and introduce bilinear layers to embed non-linearities which are intrinsic features of geophysical systems. From numerical experiments for synthetic and real datasets, we demonstrate the relevance of the proposed NN-based architecture both in terms of forecasting performance and model identification.\n
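The core idea, a residual block whose increment is linear-plus-bilinear dynamics integrated by a Runge-Kutta step, can be sketched in a few lines. This is a hand-rolled numpy illustration, not the paper's trained architecture; the Lorenz-63 coefficients in the example are one classic geophysical toy system whose flow is exactly linear + bilinear:

```python
import numpy as np

def bilinear_dynamics(x, W, B):
    """dx/dt ≈ W x + B (x ⊗ x): a linear term plus a bilinear (pairwise-product) term."""
    return W @ x + B @ np.outer(x, x).ravel()

def rk4_residual_step(x, dt, W, B):
    """One Runge-Kutta-4 step, i.e. a residual block: x plus a weighted increment."""
    f = lambda s: bilinear_dynamics(s, W, B)
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Lorenz-63 (sigma=10, rho=28, beta=8/3) written in this linear + bilinear form:
W = np.array([[-10.0, 10.0, 0.0],
              [ 28.0, -1.0, 0.0],
              [  0.0,  0.0, -8.0 / 3.0]])
B = np.zeros((3, 9))
B[1, 2] = -1.0   # -x*z term in dy/dt  (index 2 = position of x0*x2 in x ⊗ x)
B[2, 1] = 1.0    # +x*y term in dz/dt  (index 1 = position of x0*x1 in x ⊗ x)
```

In the paper the weights of the linear and bilinear layers are learned from data; here they are fixed by hand purely to show the parameterization.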
\n\n\n
\n\n\n
Automatic Flower and Visitor Detection System. Tran, D. T.; Høye, T. T.; Gabbouj, M.; and Iosifidis, A. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 405-409, Sep. 2018.
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553494,
  author = {D. T. Tran and T. T. Høye and M. Gabbouj and A. Iosifidis},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Automatic Flower and Visitor Detection System},
  year = {2018},
  pages = {405-409},
  abstract = {The visit patterns of insects to specific flowers at specific times during the diurnal cycle and across the season play important roles in pollination biology. Thus, the ability to automatically detect flowers and visitors occurring in video sequences greatly reduces the manual human efforts needed to collect such data. Data-dependent approaches, such as supervised machine learning algorithms, have become the core component in several automation systems. In this paper, we describe a flower and visitor detection system using deep Convolutional Neural Networks (CNN). Experiments conducted in image sequences collected during field work in Greenland during June-July 2017 indicate that the system is robust to different shading and illumination conditions, inherent in the images collected in the outdoor environments.},
  keywords = {image sequences;learning (artificial intelligence);neural nets;visitor detection system;season play important roles;pollination biology;video sequences;manual human efforts;data-dependent approaches;automation systems;deep Convolutional Neural Networks;image sequences;Image segmentation;Training;Task analysis;Insects;Image resolution;Signal processing;Lighting},
  doi = {10.23919/EUSIPCO.2018.8553494},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437946.pdf},
}
\n
\n\n\n
\n The visit patterns of insects to specific flowers at specific times during the diurnal cycle and across the season play important roles in pollination biology. Thus, the ability to automatically detect flowers and visitors occurring in video sequences greatly reduces the manual human efforts needed to collect such data. Data-dependent approaches, such as supervised machine learning algorithms, have become the core component in several automation systems. In this paper, we describe a flower and visitor detection system using deep Convolutional Neural Networks (CNN). Experiments conducted in image sequences collected during field work in Greenland during June-July 2017 indicate that the system is robust to different shading and illumination conditions, inherent in the images collected in the outdoor environments.\n
\n\n\n
\n\n\n
Task-Driven Dictionary Learning based on Convolutional Neural Network Features. Tirer, T.; and Giryes, R. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1885-1889, Sep. 2018.
\n\n\n\n \n \n \"Task-DrivenPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553495,
  author = {T. Tirer and R. Giryes},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Task-Driven Dictionary Learning based on Convolutional Neural Network Features},
  year = {2018},
  pages = {1885-1889},
  abstract = {Modeling data as a linear combination of a few elements from a learned dictionary has been used extensively in the recent decade in many fields, such as machine learning and signal processing. The learning of the dictionary is usually performed in an unsupervised manner, which is most suitable for regression tasks. However, for other purposes, e.g. image classification, it is advantageous to learn a dictionary from the data in a supervised way. Such an approach has been referred to as task-driven dictionary learning. In this work, we integrate this approach with deep learning. We modify this strategy such that the dictionary is learned for features obtained by a convolutional neural network (CNN). The parameters of the CNN are learned simultaneously with the task-driven dictionary and with the classifier parameters.},
  keywords = {feedforward neural nets;learning (artificial intelligence);pattern classification;task-driven dictionary learning;convolutional neural network features;learned dictionary;machine learning;regression tasks;deep learning;classifier parameters;Dictionaries;Task analysis;Signal processing algorithms;Training;Machine learning;Signal processing;Transforms},
  doi = {10.23919/EUSIPCO.2018.8553495},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437957.pdf},
}
\n
\n\n\n
\n Modeling data as a linear combination of a few elements from a learned dictionary has been used extensively in the recent decade in many fields, such as machine learning and signal processing. The learning of the dictionary is usually performed in an unsupervised manner, which is most suitable for regression tasks. However, for other purposes, e.g. image classification, it is advantageous to learn a dictionary from the data in a supervised way. Such an approach has been referred to as task-driven dictionary learning. In this work, we integrate this approach with deep learning. We modify this strategy such that the dictionary is learned for features obtained by a convolutional neural network (CNN). The parameters of the CNN are learned simultaneously with the task-driven dictionary and with the classifier parameters.\n
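The task-driven part (backpropagating the task loss through the sparse code into the dictionary and CNN) is beyond a few lines, but the inner sparse-coding step it builds on is standard. A minimal ISTA sketch for coding a feature vector against a fixed dictionary, with toy data in place of CNN features:

```python
import numpy as np

def ista(D, y, lam, iters=200):
    """ISTA: iterative soft-thresholding for min_x 0.5*||y - D x||^2 + lam*||x||_1."""
    lr = 1.0 / np.linalg.norm(D, 2) ** 2      # step size from the largest singular value
    x = np.zeros(D.shape[1])
    for _ in range(iters):
        x = x - lr * (D.T @ (D @ x - y))      # gradient step on the quadratic term
        x = np.sign(x) * np.maximum(np.abs(x) - lam * lr, 0.0)  # soft threshold
    return x
```

With an identity dictionary the solution is just a soft-thresholded copy of the input, which makes a convenient sanity check.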
\n\n\n
\n\n\n
Improved Pairwise Embedding for High-Fidelity Reversible Data Hiding. Dragoi, I. C.; and Coltuc, D. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1412-1416, Sep. 2018.
\n\n\n\n \n \n \"ImprovedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553496,
  author = {I. C. Dragoi and D. Coltuc},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Improved Pairwise Embedding for High-Fidelity Reversible Data Hiding},
  year = {2018},
  pages = {1412-1416},
  abstract = {Pairwise reversible data hiding (RDH) restricts the embedding to 3 combinations of bits per pixel pair (“00”, “01”, “10”), by eliminating the embedding of “1” into both pixels. The gain in quality is significant and the loss in embedding bitrate is compensated by embedding into previously shifted pairs. This restriction requires a special coding procedure to format the encrypted hidden data. This paper proposes a new set of embedding equations for pairwise RDH. The proposed approach inserts either one or two data bits into each pair based on its type, bypassing the need for special coding. The proposed equations can be easily integrated in most pairwise reversible data hiding frameworks. They also provide more room for data embedding than their classic counterparts at the low embedding distortion required for high-fidelity RDH.},
  keywords = {cryptography;data encapsulation;image coding;embedding distortion;high-fidelity reversible data hiding;pixel pair;pairwise embedding;high-fidelity RDH;data embedding;pairwise reversible data hiding frameworks;pairwise RDH;encrypted hidden data;Encoding;Histograms;Cryptography;Distortion;Signal processing algorithms;Complexity theory;Two dimensional displays},
  doi = {10.23919/EUSIPCO.2018.8553496},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437799.pdf},
}
\n
\n\n\n
\n Pairwise reversible data hiding (RDH) restricts the embedding to 3 combinations of bits per pixel pair (“00”, “01”, “10”), by eliminating the embedding of “1” into both pixels. The gain in quality is significant and the loss in embedding bitrate is compensated by embedding into previously shifted pairs. This restriction requires a special coding procedure to format the encrypted hidden data. This paper proposes a new set of embedding equations for pairwise RDH. The proposed approach inserts either one or two data bits into each pair based on its type, bypassing the need for special coding. The proposed equations can be easily integrated in most pairwise reversible data hiding frameworks. They also provide more room for data embedding than their classic counterparts at the low embedding distortion required for high-fidelity RDH.\n
\n\n\n
\n\n\n
Path Orthogonal Matching Pursuit for k-Sparse Image Reconstruction. Emerson, T. H.; Doster, T.; and Olson, C. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1955-1959, Sep. 2018.
\n\n\n\n \n \n \"PathPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553497,
  author = {T. H. Emerson and T. Doster and C. Olson},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Path Orthogonal Matching Pursuit for k-Sparse Image Reconstruction},
  year = {2018},
  pages = {1955-1959},
  abstract = {We introduce a path-augmentation step to the standard orthogonal matching pursuit algorithm. Our augmentation may be applied to any algorithm that relies on the selection and sorting of high-correlation atoms during an analysis or identification phase by generating a “path” between the two highest-correlation atoms. Here we investigate two types of path: a linear combination (Euclidean geodesic) and a construction relying on an optimal transport map (2-Wasserstein geodesic). We test our extension by generating k-sparse reconstructions of faces using an eigen-face dictionary learned from a subset of the data. We show that our method achieves lower reconstruction error for fixed sparsity levels than either orthogonal matching pursuit or generalized orthogonal matching pursuit.},
  keywords = {compressed sensing;image reconstruction;iterative methods;time-frequency analysis;highest-correlation atoms;identification phase;high-correlation atoms;standard orthogonal matching pursuit algorithm;path-augmentation step;k-sparse image reconstruction;path orthogonal matching pursuit;generalized orthogonal matching pursuit;lower reconstruction error;eigen-face dictionary;2-Wasserstein geodesic;optimal transport map;Euclidean geodesic;linear combination;Matching pursuit algorithms;Dictionaries;Signal processing algorithms;Image reconstruction;Signal processing;Europe;Adaptive optics;matching pursuit;basis mismatch;optimal transport;k-sparse representation;signal reconstruction},
  doi = {10.23919/EUSIPCO.2018.8553497},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438295.pdf},
}
\n
\n\n\n
\n We introduce a path-augmentation step to the standard orthogonal matching pursuit algorithm. Our augmentation may be applied to any algorithm that relies on the selection and sorting of high-correlation atoms during an analysis or identification phase by generating a “path” between the two highest-correlation atoms. Here we investigate two types of path: a linear combination (Euclidean geodesic) and a construction relying on an optimal transport map (2-Wasserstein geodesic). We test our extension by generating k-sparse reconstructions of faces using an eigen-face dictionary learned from a subset of the data. We show that our method achieves lower reconstruction error for fixed sparsity levels than either orthogonal matching pursuit or generalized orthogonal matching pursuit.\n
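The baseline the paper extends is plain orthogonal matching pursuit; the path-augmentation step (a Euclidean or 2-Wasserstein geodesic between the two highest-correlation atoms) is the contribution and is not sketched here. A compact numpy OMP for reference:

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily pick k atoms, refit by least squares."""
    residual = y.astype(float).copy()
    support, coeffs = [], np.array([])
    for _ in range(k):
        corr = np.abs(D.T @ residual)              # correlate atoms with the residual
        support.append(int(np.argmax(corr)))       # select the best-matching atom
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs      # re-project, update residual
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x
```

With an orthonormal dictionary, k-sparse signals are recovered exactly, which is the easiest correctness check.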
\n\n\n
\n\n\n
On the time-frequency reassignment of interfering modes in multicomponent FM signals. Bruni, V.; Tartaglione, M.; and Vitulano, D. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 722-726, Sep. 2018.
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553498,
  author = {V. Bruni and M. Tartaglione and D. Vitulano},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {On the time-frequency reassignment of interfering modes in multicomponent FM signals},
  year = {2018},
  pages = {722-726},
  abstract = {The paper presents a first attempt to correct the time-frequency reassignment of a multicomponent signal having non-separable individual components. In particular, the case of a two-component signal has been investigated in depth. It has been proved that the integral (along frequencies) of the spectrogram is still a multicomponent signal with specific instantaneous frequencies. As a result, the spectrogram of this signal allows us to disentangle the frequencies and recover the missing information in the non-separability region. Preliminary results show that the proposed method is able to correctly reassign the information in the interference region by separating the two individual components, with a very moderate computational effort.},
  keywords = {frequency modulation;signal processing;time-frequency analysis;specific instantaneous frequencies;2-components signal;nonseparable individual components;multicomponent signal;multicomponent FM signals;interfering modes;time-frequency reassignment;nonseparability region;spectrogram;Time-frequency analysis;Spectrogram;Interference;Frequency modulation;Frequency estimation;Microsoft Windows;Transforms;Time-frequency reassignment;multicomponent signals;instantaneous frequency;spectrogram},
  doi = {10.23919/EUSIPCO.2018.8553498},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437970.pdf},
}
\n
\n\n\n
The paper presents a first attempt to correct the time-frequency reassignment of a multicomponent signal having non-separable individual components. In particular, the case of a two-component signal has been investigated in depth. It has been proved that the integral (along frequencies) of the spectrogram is still a multicomponent signal with specific instantaneous frequencies. As a result, the spectrogram of this signal allows us to disentangle the frequencies and recover the missing information in the non-separability region. Preliminary results show that the proposed method is able to correctly reassign the information in the interference region by separating the two individual components, with a very moderate computational effort.
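The key object in the abstract is the integral of the spectrogram along frequency, evaluated per time frame. A minimal numpy sketch of both quantities (window length and hop are illustrative choices, not the paper's settings):

```python
import numpy as np

def spectrogram(x, win_len=64, hop=16):
    """Magnitude-squared STFT with a Hann window."""
    win = np.hanning(win_len)
    starts = range(0, len(x) - win_len + 1, hop)
    frames = np.stack([x[s:s + win_len] * win for s in starts])
    return np.abs(np.fft.rfft(frames, axis=1)) ** 2

def frequency_marginal(S):
    """Integral of the spectrogram along frequency, one value per time frame."""
    return S.sum(axis=1)
```

The paper then treats this marginal as a signal in its own right and analyzes its spectrogram; that second step is method-specific and not sketched here.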
\n\n\n
\n\n\n
Joint Identification and Localization of a Speaker in Adverse Conditions Using a Microphone Array. Salvati, D.; Drioli, C.; and Foresti, G. L. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 21-25, Sep. 2018.
\n\n\n\n \n \n \"JointPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553501,
  author = {D. Salvati and C. Drioli and G. L. Foresti},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Joint Identification and Localization of a Speaker in Adverse Conditions Using a Microphone Array},
  year = {2018},
  pages = {21-25},
  abstract = {We discuss a joint identification and localization microphone array system based on diagonal unloading (DU) beamforming, which has been recently introduced for acoustic source localization. First, we propose a DU beamformer version for the signal enhancement problem. Then, we propose an enhanced DU steered response power (SRP), in which the first estimate of the source position is further refined with the information gathered from the speaker recognition module. The enhanced SRP-DU is obtained by weighting the frequency components with respect to the spectral characteristics of the speaker. The approach does not add significant computational load to the array processing. Experiments conducted in noisy and reverberant conditions show that the use of the DU beamformer provides better speaker recognition performance if compared to the conventional one since it reduces deleterious effects due to the spatially white noise and point-source interferences. Simulations also show that the speaker identification can improve the localization accuracy, and it is thus interesting for applications and systems which rely on integrated localization and speaker identification.},
  keywords = {acoustic signal processing;array signal processing;correlation methods;direction-of-arrival estimation;microphone arrays;reverberation;signal classification;speaker recognition;white noise;joint identification;adverse conditions;localization microphone array system;diagonal unloading beamforming;acoustic source localization;DU beamformer version;signal enhancement problem;enhanced DU;response power;source position;speaker recognition module;enhanced SRP-DU;frequency components;spectral characteristics;array processing;noisy conditions;reverberant conditions;speaker recognition performance;point-source interferences;speaker identification;localization accuracy;integrated localization;computational load;Array signal processing;Microphone arrays;Covariance matrices;Mel frequency cepstral coefficient;Direction-of-arrival estimation;Acoustic source localization;speaker identification;beamforming;diagonal unloading;microphone array},
  doi = {10.23919/EUSIPCO.2018.8553501},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438233.pdf},
}
\n
\n\n\n
We discuss a joint identification and localization microphone array system based on diagonal unloading (DU) beamforming, which has been recently introduced for acoustic source localization. First, we propose a DU beamformer version for the signal enhancement problem. Then, we propose an enhanced DU steered response power (SRP), in which the first estimate of the source position is further refined with the information gathered from the speaker recognition module. The enhanced SRP-DU is obtained by weighting the frequency components with respect to the spectral characteristics of the speaker. The approach does not add significant computational load to the array processing. Experiments conducted in noisy and reverberant conditions show that the use of the DU beamformer provides better speaker recognition performance if compared to the conventional one, since it reduces deleterious effects due to the spatially white noise and point-source interferences. Simulations also show that the speaker identification can improve the localization accuracy, and it is thus interesting for applications and systems which rely on integrated localization and speaker identification.
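Steered response power in its simplest form scans candidate inter-microphone delays and keeps the one maximizing the power of the delay-and-sum output. The toy two-microphone sketch below shows only that scan; the paper's diagonal unloading and speaker-spectrum weighting are not reproduced:

```python
import numpy as np

def srp_delay_scan(x1, x2, max_tau):
    """Delay-and-sum steered response power over candidate integer delays."""
    n = len(x1)
    powers = []
    for tau in range(-max_tau, max_tau + 1):
        # Align x2 against x1 under candidate delay tau and sum the channels.
        aligned = x1[max_tau:n - max_tau] + x2[max_tau + tau:n - max_tau + tau]
        powers.append(float(np.sum(aligned ** 2)))
    return np.array(powers)

# Synthetic check: mic 2 receives the source 3 samples later than mic 1.
rng = np.random.default_rng(0)
x1 = rng.standard_normal(400)
x2 = np.concatenate([np.zeros(3), x1[:-3]])
```

The SRP peak sits at the true delay because the two channels add coherently only there.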
\n\n\n
\n\n\n
A Novel Method for Topological Embedding of Time-Series Data. Kennedy, S. M.; Roth, J. D.; and Scrofani, J. W. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2350-2354, Sep. 2018.
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553502,
  author = {S. M. Kennedy and J. D. Roth and J. W. Scrofani},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {A Novel Method for Topological Embedding of Time-Series Data},
  year = {2018},
  pages = {2350-2354},
  abstract = {In this paper, we propose a novel method for embedding one-dimensional, periodic time-series data into higher-dimensional topological spaces to support robust recovery of signal features via topological data analysis under noisy sampling conditions. Our method can be considered an extension of the popular time delay embedding method to a larger class of linear operators. To provide evidence for the viability of this method, we analyze the simple case of sinusoidal data in three steps. First, we discuss some of the drawbacks of the time delay embedding framework in the context of periodic, sinusoidal data. Next, we show analytically that using the Hilbert transform as an alternative embedding function for sinusoidal data overcomes these drawbacks. Finally, we provide empirical evidence of the viability of the Hilbert transform as an embedding function when the parameters of the sinusoidal data vary over time.},
  keywords = {data analysis;Hilbert transforms;time series;topology;topological embedding;one-dimensional time-series data;periodic time-series data;higher-dimensional topological spaces;signal features;topological data analysis;noisy sampling conditions;linear operators;time delay embedding framework;periodic data;alternative embedding function;sinusoidal data overcomes;Hilbert transform;Noise measurement;Delays;Three-dimensional displays;Delay effects;Signal resolution;Frequency measurement},
  doi = {10.23919/EUSIPCO.2018.8553502},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433991.pdf},
}
\n
\n\n\n
\n In this paper, we propose a novel method for embedding one-dimensional, periodic time-series data into higher-dimensional topological spaces to support robust recovery of signal features via topological data analysis under noisy sampling conditions. Our method can be considered an extension of the popular time delay embedding method to a larger class of linear operators. To provide evidence for the viability of this method, we analyze the simple case of sinusoidal data in three steps. First, we discuss some of the drawbacks of the time delay embedding framework in the context of periodic, sinusoidal data. Next, we show analytically that using the Hilbert transform as an alternative embedding function for sinusoidal data overcomes these drawbacks. Finally, we provide empirical evidence of the viability of the Hilbert transform as an embedding function when the parameters of the sinusoidal data vary over time.\n
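The Hilbert-transform embedding pairs each sample with the imaginary part of the analytic signal, so a pure sinusoid maps onto a circle regardless of its frequency (a known weakness of fixed-lag time delay embedding). A minimal numpy sketch, computing the analytic signal directly via the FFT:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT: zero out negative frequencies, double positives."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def hilbert_embedding(x):
    """Embed a 1-D signal into the plane as (x, Hilbert transform of x)."""
    z = analytic_signal(x)
    return np.column_stack([z.real, z.imag])
```

For a full-period cosine the embedding traces the unit circle exactly: the second coordinate is the matching sine.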
\n\n\n
\n\n\n
Computational Diagnosis of Parkinson's Disease from Speech Based on Regularization Methods. Campos-Roca, Y.; Calle-Alonso, F.; Perez, C. J.; and Naranjo, L. In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1127-1131, Sep. 2018.
\n\n\n\n \n \n \"ComputationalPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553505,
  author = {Y. Campos-Roca and F. Calle-Alonso and C. J. Perez and L. Naranjo},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Computational Diagnosis of Parkinson's Disease from Speech Based on Regularization Methods},
  year = {2018},
  pages = {1127-1131},
  abstract = {A computational tool to discriminate healthy people from people with Parkinson's Disease (PD) is proposed based on acoustic features extracted from sustained vowel recordings. Several approaches based on different feature sets and regularization methods (LASSO, Ridge, and Elastic net) are experimentally compared. The effectiveness of these methods has been evaluated on a dataset containing acoustic features of 40 healthy people and 40 patients with PD, who have been recruited at the Regional Association for Parkinson's Disease in Extremadura (Spain). The results show relevant differences when varying the initial feature set but high stability when changing the regularization approach. The three considered methods have achieved very promising classification accuracy rates via 10-fold cross-validation analysis, reaching 88.5%.},
  keywords = {acoustic signal detection;audio recording;diseases;feature extraction;learning (artificial intelligence);medical computing;patient diagnosis;signal classification;speech processing;computational diagnosis;PD;Parkinson's disease;feature set;acoustic features extraction;healthy people;vowel recordings;regional association;Spain;cross-validation analysis;speech based regularization methods;Feature extraction;Perturbation methods;Entropy;Acoustics;Parkinson's disease;Acoustic measurements;Signal to noise ratio;Acoustic features;Elastic net;Least absolute shrinkage and selection operator;Nonlinear speech signal processing;Parkinson's disease;Regularized regression;Ridge},
  doi = {10.23919/EUSIPCO.2018.8553505},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436203.pdf},
}
\n
\n\n\n
\n A computational tool to discriminate healthy people from people with Parkinson's Disease (PD) is proposed based on acoustic features extracted from sustained vowel recordings. Several approaches based on different feature sets and regularization methods (LASSO, Ridge, and Elastic net) are experimentally compared. The effectiveness of these methods has been evaluated on a dataset containing acoustic features of 40 healthy people and 40 patients with PD, who have been recruited at the Regional Association for Parkinson's Disease in Extremadura (Spain). The results show relevant differences when varying the initial feature set but high stability when changing the regularization approach. The three considered methods have achieved very promising classification accuracy rates via 10-fold cross-validation analysis, reaching 88.5 %.\n
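The comparison of LASSO, Ridge, and Elastic net penalties under 10-fold cross-validation can be sketched as follows. This uses synthetic data standing in for the 40+40 acoustic-feature dataset (which is not public here) and scikit-learn's regularized logistic model; it illustrates the protocol, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-in for the 40 healthy + 40 PD acoustic-feature set.
X, y = make_classification(n_samples=80, n_features=30, n_informative=8,
                           random_state=0)

# LASSO (L1), Ridge (L2) and Elastic net penalties on a logistic model,
# scored with the same 10-fold cross-validation the paper reports.
models = {
    "lasso": LogisticRegression(penalty="l1", solver="saga", max_iter=5000),
    "ridge": LogisticRegression(penalty="l2", solver="saga", max_iter=5000),
    "enet": LogisticRegression(penalty="elasticnet", l1_ratio=0.5,
                               solver="saga", max_iter=5000),
}
scores = {name: cross_val_score(make_pipeline(StandardScaler(), clf),
                                X, y, cv=10).mean()
          for name, clf in models.items()}
print(scores)
```

The paper's observation (sensitivity to the feature set, stability across penalties) corresponds here to the three scores staying close to one another while varying strongly with the chosen features.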
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Dynamics and Periodicity Based Multirate Fast Transient-Sound Detection.\n \n \n \n \n\n\n \n Yang, J.; and Hilmes, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2449-2453, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DynamicsPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553506,\n  author = {J. Yang and P. Hilmes},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Dynamics and Periodicity Based Multirate Fast Transient-Sound Detection},\n  year = {2018},\n  pages = {2449-2453},\n  abstract = {This paper proposes an efficient real-time multirate fast transient-sound detection algorithm on the basis of emerging microphone array configuration intended for multimedia signal processing application systems such as digital smart home. The proposed detection algorithm first extracts the dynamics and periodicity features, then trains the model parameters of these features on Amazon machine learning platform. The real-time testing results have shown that the proposed algorithm with the trained model parameters can not only achieve the optimum detection performance in all various noisy conditions but also reject all kinds of interferences including undesired voice and other unrelated transient-sounds. In comparison with the existing algorithms, the proposed detection algorithm significantly improves the false negative and false positive performance. 
In addition, the proposed multirate strategy dramatically reduces the computational complexity and processing latency so that the proposed algorithm can serve as a much more practical solution for the digital smart home related applications.},\n  keywords = {acoustic signal processing;filtering theory;home automation;learning (artificial intelligence);microphone arrays;signal detection;microphone array configuration;multimedia signal;periodicity features;Amazon machine learning platform;optimum detection performance;false negative performance;false positive performance;multirate strategy;digital smart home related applications;realtime multirate fast transient-sound detection algorithm;Signal processing algorithms;Microphones;Heuristic algorithms;Feature extraction;Signal processing;Smart homes;Transient analysis;feature extraction;fast transient-sound detection;sound source localization;digital-positioning system;smart home},\n  doi = {10.23919/EUSIPCO.2018.8553506},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435103.pdf},\n}\n\n
\n
\n\n\n
\n This paper proposes an efficient real-time multirate fast transient-sound detection algorithm based on an emerging microphone array configuration intended for multimedia signal processing systems such as the digital smart home. The proposed algorithm first extracts dynamics and periodicity features, then trains the model parameters of these features on the Amazon machine learning platform. Real-time testing results show that the algorithm with the trained model parameters not only achieves optimum detection performance under various noisy conditions but also rejects interferences, including undesired voice and other unrelated transient sounds. Compared with existing algorithms, the proposed detector significantly improves false-negative and false-positive performance. In addition, the proposed multirate strategy dramatically reduces computational complexity and processing latency, making the algorithm a much more practical solution for digital smart home applications.\n
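The "dynamics" side of such a detector can be sketched with a short-term energy jump: a fast transient produces a large frame-to-frame increase in log energy. This is a much-simplified stand-in for the paper's dynamics/periodicity feature pair and trained model; all signal parameters below are hypothetical.

```python
import numpy as np

fs = 16000
rng = np.random.default_rng(0)
x = 0.01 * rng.standard_normal(fs)         # 1 s of background noise
x[8000:8000 + 160] += np.hanning(160)      # a 10 ms transient burst

# Short-term energy per 10 ms frame and its frame-to-frame log jump;
# a large positive jump flags a fast transient onset.
frame = 160
E = np.array([np.sum(x[i:i + frame] ** 2)
              for i in range(0, len(x) - frame, frame)])
jump = np.diff(np.log(E + 1e-12))
hit = int(np.argmax(jump)) + 1             # frame where energy jumps
print(hit * frame)                          # sample position of detection
```

A periodicity feature (e.g. autocorrelation peakiness) would then be used, as in the paper, to reject voiced speech that also raises the energy.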
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An entropy-based approach for shape description.\n \n \n \n \n\n\n \n Bruni, V.; Cioppa, L. D.; and Vitulano, D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 603-607, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553507,\n  author = {V. Bruni and L. D. Cioppa and D. Vitulano},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An entropy-based approach for shape description},\n  year = {2018},\n  pages = {603-607},\n  abstract = {In this paper an automatic method for the selection of those Fourier descriptors which better correlate a 2D shape contour is presented. To this aim, shape description has been modeled as a non linear approximation problem and a strict relationship between transform entropy and the sorted version of the transformed analysed boundary is derived. As a result, Fourier descriptors are selected in a hierarchical way and the minimum number of coefficients able to give a nearly optimal shape boundary representation is automatically derived. The latter maximizes an entropic interpretation of a complexity-based similarity measure, i.e. the normalized information distance. Preliminary experimental results show that the proposed method is able to provide a compact and computationally effective description of shape boundary which guarantees a nearly optimal matching with the original one.},\n  keywords = {approximation theory;edge detection;entropy;feature extraction;shape recognition;entropy-based approach;shape description;automatic method;Fourier descriptors;2D shape contour;nonlinear approximation problem;transformed analysed boundary;optimal shape boundary representation;complexity-based similarity measure;preliminary experimental results;compact description;computationally effective description;optimal matching;Shape;Entropy;Linear approximation;Complexity theory;Sorting;Image reconstruction;Europe;Shape representation;Fourier descriptors;non linear approximation;differential entropy;normalized information distance (NID)},\n  doi = {10.23919/EUSIPCO.2018.8553507},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438057.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, an automatic method is presented for selecting the Fourier descriptors that best correlate with a 2D shape contour. To this aim, shape description has been modeled as a nonlinear approximation problem, and a strict relationship between transform entropy and the sorted version of the transformed analysed boundary is derived. As a result, Fourier descriptors are selected in a hierarchical way, and the minimum number of coefficients able to give a nearly optimal shape boundary representation is automatically derived. The latter maximizes an entropic interpretation of a complexity-based similarity measure, i.e. the normalized information distance. Preliminary experimental results show that the proposed method provides a compact and computationally effective description of the shape boundary which guarantees a nearly optimal matching with the original one.\n
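The Fourier-descriptor machinery behind this abstract is standard and easy to sketch: a closed contour encoded as complex points is transformed with the DFT, and only the largest-magnitude coefficients are kept. Here the number K is fixed by hand, whereas the paper selects it automatically via a transform-entropy criterion; the ellipse is a hypothetical example shape.

```python
import numpy as np

# Hypothetical closed contour: an ellipse sampled at N points,
# encoded as complex numbers z = x + iy.
N = 256
s = np.linspace(0, 2 * np.pi, N, endpoint=False)
contour = 3 * np.cos(s) + 1j * np.sin(s)

# Fourier descriptors = DFT of the contour; keep the K largest-
# magnitude coefficients and reconstruct from them alone.
F = np.fft.fft(contour)
K = 4
keep = np.argsort(np.abs(F))[-K:]
F_k = np.zeros_like(F)
F_k[keep] = F[keep]
approx = np.fft.ifft(F_k)

err = np.max(np.abs(approx - contour))
print(err)
```

An ellipse occupies only two DFT bins, so K = 4 already reconstructs it essentially exactly; real contours need more coefficients, which is precisely the trade-off the paper's entropy criterion optimizes.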
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n A Fresh Look at the Semiparametric Cramér-Rao Bound.\n \n \n \n\n\n \n Fortunati, S.; Gini, F.; Greco, M.; Zoubir, A. M.; and Rangaswamy, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 261-265, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553508,\n  author = {S. Fortunati and F. Gini and M. Greco and A. M. Zoubir and M. Rangaswamy},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Fresh Look at the Semiparametric Cramér-Rao Bound},\n  year = {2018},\n  pages = {261-265},\n  abstract = {This paper aims at providing a fresh look at semiparametric estimation theory and, in particular, at the Semiparametric Cramér-Rao Bound (SCRB). Semiparametric models are characterized by a finite-dimensional parameter vector of interest and by an infinite-dimensional nuisance function that is often related to an unspecified functional form of the density of the noise underlying the observations. We summarize the main motivations and the intuitive concepts about semi parametric models. Then we provide a new look at the classical estimation theory based on a geometrical Hilbert space-based approach. Finally, the semiparametric version of the Cramer- Rao Bound for the estimation of the finite-dimensional vector of the parameters of interest is provided.},\n  keywords = {estimation theory;geometry;Hilbert spaces;vectors;SCRB;finite-dimensional parameter vector;geometrical Hilbert space-based approach;semiparametric Cramer-Rao bound;infinite-dimensional nuisance function;semiparametric estimation theory;Parametric statistics;Hilbert space;Probability density function;Data models;Estimation theory;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553508},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n This paper aims at providing a fresh look at semiparametric estimation theory and, in particular, at the Semiparametric Cramér-Rao Bound (SCRB). Semiparametric models are characterized by a finite-dimensional parameter vector of interest and by an infinite-dimensional nuisance function that is often related to an unspecified functional form of the density of the noise underlying the observations. We summarize the main motivations and the intuitive concepts behind semiparametric models. Then we provide a new look at classical estimation theory based on a geometrical Hilbert-space approach. Finally, the semiparametric version of the Cramér-Rao Bound for the estimation of the finite-dimensional vector of the parameters of interest is provided.\n
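For orientation, the standard form of the bound the abstract refers to (as it appears in the semiparametric literature; the notation below is generic, not taken from this paper) projects the score for the parameter of interest onto the orthogonal complement of the nuisance tangent space:

```latex
% Efficient score: the score s_\theta projected away from the
% nuisance tangent space \mathcal{T}_g in the Hilbert-space geometry
\bar{s}_{\theta}(x) = s_{\theta}(x) - \Pi\!\left(s_{\theta} \,\middle|\, \mathcal{T}_g\right)
% Semiparametric Fisher information and the resulting bound
\bar{I}(\theta) = \mathbb{E}\!\left\{\bar{s}_{\theta}\,\bar{s}_{\theta}^{T}\right\},
\qquad \mathrm{SCRB}(\theta) = \bar{I}(\theta)^{-1}
```

Because the projection can only shrink the score, the SCRB is never smaller than the classical CRB with the nuisance known.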
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Hybrid Analog-Digital Precoding for Interference Exploitation (Invited Paper).\n \n \n \n\n\n \n Li, A.; Masouros, C.; and Liu, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 812-816, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553509,\n  author = {A. Li and C. Masouros and F. Liu},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Hybrid Analog-Digital Precoding for Interference Exploitation (Invited Paper)},\n  year = {2018},\n  pages = {812-816},\n  abstract = {We study the multi-user massive multiple-input-single-output (MISO) and focus on the downlink systems where the base station (BS) employs hybrid analog-digital precoding with low-cost 1-bit digital-to-analog converters (DACs). In this paper, we propose a hybrid downlink transmission scheme where the analog precoder is formed based on the SVD decomposition. In the digital domain, instead of designing a linear transmit precoding matrix, we directly design the transmit signals by exploiting the concept of constructive interference. The optimization problem is then formulated based on the geometry of the modulation constellations and is shown to be non-convex. We relax the above optimization and show that the relaxed optimization can be transformed into a linear programming that can be efficiently solved. Numerical results validate the superiority of the proposed scheme for the hybrid massive MIMO downlink systems.},\n  keywords = {digital-analogue conversion;linear programming;MIMO communication;MISO communication;precoding;radiofrequency interference;hybrid analog-digital precoding;interference exploitation;multiuser massive multiple-input-single-output;1-bit digital-to-analog converters;hybrid downlink transmission scheme;analog precoder;digital domain;linear transmit precoding matrix;hybrid massive MIMO downlink systems;Precoding;Optimization;Radio frequency;Interference;Downlink;MIMO communication;Power demand;Massive MIMO;1-bit quantization;hybrid precoding;constructive interference;downlink},\n  doi = {10.23919/EUSIPCO.2018.8553509},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n We study the multi-user massive multiple-input single-output (MISO) downlink, focusing on systems where the base station (BS) employs hybrid analog-digital precoding with low-cost 1-bit digital-to-analog converters (DACs). In this paper, we propose a hybrid downlink transmission scheme where the analog precoder is formed based on the singular value decomposition (SVD). In the digital domain, instead of designing a linear transmit precoding matrix, we directly design the transmit signals by exploiting the concept of constructive interference. The optimization problem is then formulated based on the geometry of the modulation constellations and is shown to be non-convex. We relax the above optimization and show that the relaxed problem can be transformed into a linear program that can be efficiently solved. Numerical results validate the superiority of the proposed scheme for hybrid massive MIMO downlink systems.\n
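The analog stage of such a hybrid scheme can be sketched as follows: take the dominant right singular vectors of the channel and keep only their phases, since analog phase shifters are unit-modulus. Array sizes below are hypothetical, and the digital constructive-interference stage (the linear program) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, K = 32, 4        # BS antennas, single-antenna users (hypothetical)
H = (rng.standard_normal((K, Nt))
     + 1j * rng.standard_normal((K, Nt))) / np.sqrt(2)

# Analog precoder from the SVD of the channel: K dominant right singular
# vectors, reduced to their phases (unit-modulus phase-shifter constraint).
_, _, Vh = np.linalg.svd(H)
F_rf = np.exp(1j * np.angle(Vh[:K].conj().T)) / np.sqrt(Nt)

# Effective K x K digital-domain channel seen after the analog stage;
# the transmit symbols would then be designed on H_eff per channel use.
H_eff = H @ F_rf
print(H_eff.shape)
```

The per-symbol digital design replaces the usual fixed precoding matrix, which is what lets the 1-bit DAC constraint be folded into the constellation-geometry optimization.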
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Dual-Channel VTS Feature Compensation with Improved Posterior Estimation.\n \n \n \n \n\n\n \n López-Espejo, I.; Peinado, A. M.; Gomez, A. M.; González, J. A.; and Prieto-Calero, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2065-2069, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Dual-ChannelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553510,\n  author = {I. López-Espejo and A. M. Peinado and A. M. Gomez and J. A. González and S. Prieto-Calero},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Dual-Channel VTS Feature Compensation with Improved Posterior Estimation},\n  year = {2018},\n  pages = {2065-2069},\n  abstract = {The use of dual-microphones is a powerful tool for noise-robust automatic speech recognition (ASR). In particular, it allows the reformulation of classical techniques like vector Taylor series (VTS) feature compensation. In this work, we consider a critical issue of VTS compensation such as posterior computation and propose an alternative way to estimate more accurately these probabilities when VTS is applied to enhance noisy speech captured by dual-microphone mobile devices. Our proposal models the conditional dependence of a noisy secondary channel given a primary one not only to outperform single-channel VTS feature compensation, but also a previous dual-channel VTS approach based on a stacked formulation. This is confirmed by recognition experiments on two different dual-channel extensions of the Aurora-2 corpus. 
Such extensions emulate the use of a dual-microphone smartphone in close- and far-talk conditions, obtaining our proposal relevant improvements in the latter case.},\n  keywords = {microphones;probability;series (mathematics);speech recognition;vector Taylor series;VTS compensation;posterior computation;noisy speech;dual-microphone mobile devices;noisy secondary channel;single-channel VTS;dual-channel VTS approach;dual-channel extensions;dual-microphone smartphone;proposal relevant improvements;dual-channel VTS feature compensation;improved posterior estimation;classical techniques;noise-robust automatic speech recognition;size 2.0 A;Hidden Markov models;Noise measurement;Distortion;Covariance matrices;Acoustics;Microphones;Mobile handsets;VTS feature compensation;Posterior probability;Robust ASR;Dual-channel;Mobile device},\n  doi = {10.23919/EUSIPCO.2018.8553510},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570427585.pdf},\n}\n\n
\n
\n\n\n
\n The use of dual microphones is a powerful tool for noise-robust automatic speech recognition (ASR). In particular, it allows the reformulation of classical techniques like vector Taylor series (VTS) feature compensation. In this work, we consider a critical issue of VTS compensation, namely posterior computation, and propose an alternative way to estimate these probabilities more accurately when VTS is applied to enhance noisy speech captured by dual-microphone mobile devices. Our proposal models the conditional dependence of a noisy secondary channel given a primary one, outperforming not only single-channel VTS feature compensation but also a previous dual-channel VTS approach based on a stacked formulation. This is confirmed by recognition experiments on two different dual-channel extensions of the Aurora-2 corpus. These extensions emulate the use of a dual-microphone smartphone in close- and far-talk conditions, with our proposal obtaining relevant improvements in the latter case.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Blind Detection and Localization of Video Temporal Splicing Exploiting Sensor-Based Footprints.\n \n \n \n \n\n\n \n Mandelli, S.; Bestagini, P.; Tubaro, S.; Cozzolino, D.; and Verdoliva, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1362-1366, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BlindPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553511,\n  author = {S. Mandelli and P. Bestagini and S. Tubaro and D. Cozzolino and L. Verdoliva},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Blind Detection and Localization of Video Temporal Splicing Exploiting Sensor-Based Footprints},\n  year = {2018},\n  pages = {1362-1366},\n  abstract = {In recent years, the possibility of easily editing video sequences led to the diffusion of user generated video compilations obtained by splicing together in time different video shots. In order to perform forensic analysis on this kind of videos, it can be useful to split the whole sequence into the set of originating shots. As video shots are seldom obtained with a single device, a possible way to identify each video shot is to exploit sensor-based traces. State-of-the-art solutions for sensor attribution rely on Photo Response Non Uniformity (PRNU). Despite this approach has proved robust and efficient for images, exploiting PRNU in the video domain is still challenging. In this paper, we tackle the problem of blind video temporal splicing detection leveraging PRNU-based source attribution. Specifically, we consider videos composed by few-second shots coming from various sources that have been temporally combined. The focus is on blind detection and temporal localization of splicing points. The analysis is carried out on a recently released dataset composed by videos acquired with mobile devices. 
The method is validated on both non-stabilized and stabilized videos, thus showing the difficulty of working in the latter scenario.},\n  keywords = {digital forensics;feature extraction;image sensors;image sequences;video cameras;video signal processing;video shots;sensor-based footprints;blind video temporal splicing detection;PRNU-based source attribution;user generated video compilations;video sequences;temporal localization;blind detection;video domain;sensor attribution;sensor-based traces;video shot;originating shots;Splicing;Cameras;Task analysis;Reliability;Correlation;Feature extraction;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553511},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437388.pdf},\n}\n\n
\n
\n\n\n
\n In recent years, the possibility of easily editing video sequences has led to the diffusion of user-generated video compilations obtained by splicing together in time different video shots. In order to perform forensic analysis on this kind of video, it can be useful to split the whole sequence into the set of originating shots. As video shots are seldom obtained with a single device, a possible way to identify each video shot is to exploit sensor-based traces. State-of-the-art solutions for sensor attribution rely on Photo Response Non-Uniformity (PRNU). Although this approach has proved robust and efficient for images, exploiting PRNU in the video domain is still challenging. In this paper, we tackle the problem of blind video temporal splicing detection leveraging PRNU-based source attribution. Specifically, we consider videos composed of few-second shots coming from various sources that have been temporally combined. The focus is on blind detection and temporal localization of splicing points. The analysis is carried out on a recently released dataset composed of videos acquired with mobile devices. The method is validated on both non-stabilized and stabilized videos, thus showing the difficulty of working in the latter scenario.\n
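The PRNU attribution step underlying this abstract can be sketched on synthetic data: estimate a camera fingerprint by averaging noise residuals over several frames, then correlate it against the residual of a test frame. Everything below is synthetic, and the Gaussian filter is a self-contained stand-in for the wavelet denoiser real PRNU pipelines use.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
shape = (64, 64)
prnu_a = 0.05 * rng.standard_normal(shape)   # synthetic sensor fingerprints
prnu_b = 0.05 * rng.standard_normal(shape)

def frame(prnu):
    # One synthetic frame: a smooth scene modulated by the sensor's PRNU.
    scene = gaussian_filter(rng.standard_normal(shape), 4) + 2.0
    return scene * (1.0 + prnu) + 0.01 * rng.standard_normal(shape)

def residual(img):
    # PRNU-style noise residual: image minus a denoised version.
    return img - gaussian_filter(img, 1.0)

def ncc(a, b):
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Estimate camera A's fingerprint from 20 frames, then attribute shots.
fingerprint = np.mean([residual(frame(prnu_a)) for _ in range(20)], axis=0)
rho_same = ncc(fingerprint, residual(frame(prnu_a)))
rho_other = ncc(fingerprint, residual(frame(prnu_b)))
print(rho_same, rho_other)
```

Per-shot correlations of this kind are what make a temporal splice visible: the correlation against a reference fingerprint drops at the frame where the source camera changes (stabilization breaks the pixel alignment this relies on, hence the paper's harder scenario).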
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On Optimal Filtering for Speech Decomposition.\n \n \n \n \n\n\n \n Jaramillo, A. E.; Nielsen, J. K.; and Christensen, M. G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2325-2329, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553512,\n  author = {A. E. Jaramillo and J. K. Nielsen and M. G. Christensen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {On Optimal Filtering for Speech Decomposition},\n  year = {2018},\n  pages = {2325-2329},\n  abstract = {Optimal linear filtering has been used extensively for speech enhancement. In this paper, we take a first step in trying to apply linear filtering to the decomposition of a noisy speech signal into its components. The problem of decomposing speech into its voiced and unvoiced components is considered as an estimation problem. Assuming a harmonic model for the voiced speech, we propose a Wiener filtering scheme which estimates both components separately in the presence of noise. It is shown under which conditions this optimal filtering formulation outperforms two state-of-the-art speech decomposition methods, which is also revealed by objective measures, spectrograms and informal listening tests.},\n  keywords = {decomposition;estimation theory;harmonic analysis;speech enhancement;Wiener filters;informal listening testing;spectrograms;voiced speech harmonic model;noisy speech signal decomposition methods;Wiener filtering scheme;estimation problem;unvoiced components;voiced components;speech enhancement;optimal linear filtering;Harmonic analysis;Noise measurement;Speech processing;Covariance matrices;Spectrogram;Estimation;Europe;Speech decomposition;time-domain filtering;Wiener filter;voiced speech;unvoiced speech},\n  doi = {10.23919/EUSIPCO.2018.8553512},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437959.pdf},\n}\n\n
\n
\n\n\n
\n Optimal linear filtering has been used extensively for speech enhancement. In this paper, we take a first step in trying to apply linear filtering to the decomposition of a noisy speech signal into its components. The problem of decomposing speech into its voiced and unvoiced components is considered as an estimation problem. Assuming a harmonic model for the voiced speech, we propose a Wiener filtering scheme which estimates both components separately in the presence of noise. It is shown under which conditions this optimal filtering formulation outperforms two state-of-the-art speech decomposition methods, which is also revealed by objective measures, spectrograms and informal listening tests.\n
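The voiced/unvoiced split behind this abstract can be sketched with the harmonic signal model it assumes: project the observed frame onto sinusoids at multiples of the pitch f0 and call the projection the voiced part. This least-squares sketch is a simplification of the paper's Wiener formulation (no noise statistics are modeled), and f0 is assumed known; all signal parameters are hypothetical.

```python
import numpy as np

fs = 8000
t = np.arange(400) / fs
f0 = 200.0

# Synthetic "speech" frame: harmonic (voiced) plus white (unvoiced) part.
voiced = sum(np.cos(2 * np.pi * f0 * k * t + k) for k in range(1, 6))
rng = np.random.default_rng(1)
unvoiced = 0.3 * rng.standard_normal(t.size)
x = voiced + unvoiced

# Harmonic-model estimate of the voiced component: least-squares
# projection onto the cos/sin basis at the first 5 harmonics of f0.
Z = np.column_stack([f(2 * np.pi * f0 * k * t)
                     for k in range(1, 6) for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(Z, x, rcond=None)
voiced_hat = Z @ coef
unvoiced_hat = x - voiced_hat

err = np.mean((voiced_hat - voiced) ** 2) / np.mean(voiced ** 2)
print(err)
```

The Wiener scheme in the paper additionally weights this projection by the estimated signal-to-noise statistics, which is what lets it keep working when noise, not just the unvoiced residual, is present.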
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Vector of Locally Aggregated Descriptors Framework for Action Recognition on Motion Capture Data.\n \n \n \n \n\n\n \n Kapsouras, I.; and Nikolaidis, N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1785-1789, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553513,\n  author = {I. Kapsouras and N. Nikolaidis},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Vector of Locally Aggregated Descriptors Framework for Action Recognition on Motion Capture Data},\n  year = {2018},\n  pages = {1785-1789},\n  abstract = {In this paper we introduce an approach for action recognition in motion capture data. The data are represented by the joints positions of the skeleton in each frame (posture vectors) and the differences of these positions over time, in different temporal scales. The Vector of Locally Aggregated Descriptors (VLAD) framework is used to encode the extracted features whereas a Support Vector Machine (SVM) is used for classification. A voting scheme is used in the VLAD framework to achieve soft encoding. The effectiveness and robustness of the proposed approach is shown in experiments performed on three datasets (MSRAction3D, MSRActionPairs and HDM05).},\n  keywords = {feature extraction;image classification;image motion analysis;support vector machines;vectors;VLAD framework;action recognition;motion capture data;posture vectors;support vector machine;vector of locally aggregated descriptors framework;soft encoding;voting scheme;feature extraction;classification;Feature extraction;Skeleton;Support vector machines;Three-dimensional displays;Trajectory;Kernel;Encoding},\n  doi = {10.23919/EUSIPCO.2018.8553513},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437305.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we introduce an approach for action recognition in motion capture data. The data are represented by the joint positions of the skeleton in each frame (posture vectors) and the differences of these positions over time, at different temporal scales. The Vector of Locally Aggregated Descriptors (VLAD) framework is used to encode the extracted features, whereas a Support Vector Machine (SVM) is used for classification. A voting scheme is used in the VLAD framework to achieve soft encoding. The effectiveness and robustness of the proposed approach is shown in experiments performed on three datasets (MSRAction3D, MSRActionPairs and HDM05).\n
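Standard VLAD encoding, which this paper builds on, is compact enough to sketch directly. This shows the usual hard-assignment variant (the paper's voting scheme softens the assignment step); the codebook and descriptors below are random placeholders for k-means centers and per-frame posture features.

```python
import numpy as np

def vlad(descriptors, centers):
    # Hard-assign each local descriptor to its nearest codebook center,
    # sum the residuals per center, then power- and L2-normalize.
    d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    assign = d2.argmin(1)
    K, D = centers.shape
    v = np.zeros((K, D))
    for k in range(K):
        sel = descriptors[assign == k]
        if len(sel):
            v[k] = (sel - centers[k]).sum(0)
    v = np.sign(v) * np.sqrt(np.abs(v))      # power normalization
    flat = v.ravel()
    n = np.linalg.norm(flat)
    return flat / n if n > 0 else flat

rng = np.random.default_rng(0)
centers = rng.standard_normal((8, 16))           # toy codebook (K=8, D=16)
posture_feats = rng.standard_normal((120, 16))   # e.g. per-frame features
code = vlad(posture_feats, centers)
print(code.shape)
```

The fixed-length K*D code is what makes a variable-length motion sequence digestible by an SVM, which is exactly the pipeline the abstract describes.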
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Beamforming-Based Acoustic Source Localization and Enhancement for Multirotor UAVs.\n \n \n \n \n\n\n \n Salvati, D.; Drioli, C.; Ferrin, G.; and Foresti, G. L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 987-991, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Beamforming-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553514,\n  author = {D. Salvati and C. Drioli and G. Ferrin and G. L. Foresti},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Beamforming-Based Acoustic Source Localization and Enhancement for Multirotor UAVs},\n  year = {2018},\n  pages = {987-991},\n  abstract = {The problem of acoustic source localization and signal enhancement through beamforming techniques is especially challenging when the acoustic recording is performed using microphone arrays installed on multirotor unmanned aerial vehicles (UAVs), The principal source of disturbances in this class of devices is given by the high frequency, narrowband noise originated by the electrical engines, and by the broadband aerodynamic noise induced by the propellers. A solution to this problem is investigated, which employs an efficient beamforming-based spectral distance response algorithm for both localization and enhancement of the source. The beamforming relies on a diagonal unloading (DU) transformation. The proposed algorithm was tested on a multirotor micro aerial vehicle (MAV) equipped with a compact uniform linear array (ULA) of four microphones, perpendicular to the rear-front axis of the drone. The array is positioned slightly above the plane of propellers, and centered with respect to the drone body. 
The experimental results conducted in stable hovering conditions are illustrated, and the localization and signal enhancing performances are reported under various noise conditions and source characteristics.},\n  keywords = {acoustic noise;acoustic signal processing;aerodynamics;array signal processing;autonomous aerial vehicles;direction-of-arrival estimation;microphone arrays;acoustic source localization;signal enhancement;beamforming techniques;acoustic recording;microphone arrays;multirotor unmanned aerial vehicles;narrowband noise;electrical engines;broadband aerodynamic noise;efficient beamforming-based spectral distance response algorithm;diagonal unloading transformation;multirotor microaerial vehicle;compact uniform linear array;signal enhancing performances;noise conditions;multirotor UAV;Acoustics;Array signal processing;Propellers;Microphone arrays;Spatial resolution;Acoustic arrays;Acoustic source localization;signal enhancement;beamforming;spectral distance;unmanned aerial vehicles (UAV);micro aerial vehicle (MAV)},\n  doi = {10.23919/EUSIPCO.2018.8553514},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438195.pdf},\n}\n\n
\n
\n\n\n
\n The problem of acoustic source localization and signal enhancement through beamforming techniques is especially challenging when the acoustic recording is performed using microphone arrays installed on multirotor unmanned aerial vehicles (UAVs). The principal source of disturbance in this class of devices is the high-frequency, narrowband noise generated by the electrical engines, together with the broadband aerodynamic noise induced by the propellers. A solution to this problem is investigated, which employs an efficient beamforming-based spectral distance response algorithm for both localization and enhancement of the source. The beamforming relies on a diagonal unloading (DU) transformation. The proposed algorithm was tested on a multirotor micro aerial vehicle (MAV) equipped with a compact uniform linear array (ULA) of four microphones, perpendicular to the rear-front axis of the drone. The array is positioned slightly above the plane of the propellers, and centered with respect to the drone body. Experimental results obtained in stable hovering conditions are illustrated, and the localization and signal-enhancement performances are reported under various noise conditions and source characteristics.\n
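The scan-over-angles localization idea can be sketched with a plain narrowband delay-and-sum response on a 4-microphone ULA like the one described; the paper's diagonal-unloading beamformer replaces this simple response but follows the same steer-and-scan pattern. Array geometry and frequency below are hypothetical.

```python
import numpy as np

c = 343.0                      # speed of sound (m/s)
M, d = 4, 0.05                 # 4-mic ULA, 5 cm spacing (hypothetical)
f = 2000.0                     # narrowband analysis frequency (Hz)
theta_src = np.deg2rad(30)     # true source direction

def steer(theta):
    # Far-field narrowband steering vector for a ULA.
    delays = np.arange(M) * d * np.sin(theta) / c
    return np.exp(-2j * np.pi * f * delays)

# Delay-and-sum response scanned over candidate angles: the peak of
# the spatial response gives the direction-of-arrival estimate.
a_src = steer(theta_src)
angles = np.deg2rad(np.linspace(-90, 90, 361))
response = [np.abs(np.vdot(steer(th), a_src)) / M for th in angles]
est = np.rad2deg(angles[int(np.argmax(response))])
print(est)
```

With d = 5 cm the spacing stays below half a wavelength at 2 kHz, so the scan has a single unambiguous peak; the rotor noise the paper fights would appear as strong narrowband components that the DU transformation suppresses before this scan.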
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Facet-Based Regularization for Scalable Radio-Interferometric Imaging.\n \n \n \n \n\n\n \n Naghibzadeh, S.; Repetti, A.; van der Veen , A.; and Wiaux, Y.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2678-2682, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Facet-BasedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553515,\n  author = {S. Naghibzadeh and A. Repetti and A. {van der Veen} and Y. Wiaux},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Facet-Based Regularization for Scalable Radio-Interferometric Imaging},\n  year = {2018},\n  pages = {2678-2682},\n  abstract = {Current and future radio telescopes deal with large volumes of data and are expected to generate high resolution gigapixel-size images. The imaging problem in radio interferometry is highly ill-posed and the choice of prior model of the sky is of utmost importance to guarantee a reliable reconstruction. Traditionally, one or more regularization terms (e.g. sparsity and positivity) are applied for the complete image. However, radio sky images can often contain individual source facets in a large empty background. More precisely, we propose to divide radio images into source occupancy regions (facets) and apply relevant regularizing assumptions for each facet. Leveraging a stochastic primal dual algorithm, we show the potential merits of applying facet-based regularization on the radio-interferometric images which results in both computation time and memory requirement savings.},\n  keywords = {astronomical image processing;image reconstruction;image resolution;radioastronomical techniques;radiotelescopes;radiowave interferometry;stochastic processes;radio interferometry;reliable reconstruction;regularization terms;sparsity;positivity;radio sky images;individual source facets;source occupancy regions;facet-based regularization;radio-interferometric images;scalable radio-interferometric imaging;current radio telescopes;future radio telescopes;high resolution gigapixel-size images;imaging problem;stochastic primal dual algorithm;computation time saving;memory requirement saving;Imaging;Radio astronomy;Signal processing algorithms;Image reconstruction;Optimization;Pollution measurement;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553515},\n  issn = 
{2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439322.pdf},\n}\n\n
\n
\n\n\n
\n Current and future radio telescopes deal with large volumes of data and are expected to generate high resolution gigapixel-size images. The imaging problem in radio interferometry is highly ill-posed and the choice of prior model of the sky is of utmost importance to guarantee a reliable reconstruction. Traditionally, one or more regularization terms (e.g. sparsity and positivity) are applied to the complete image. However, radio sky images can often contain individual source facets in a large empty background. Motivated by this, we propose to divide radio images into source occupancy regions (facets) and apply relevant regularizing assumptions for each facet. Leveraging a stochastic primal dual algorithm, we show the potential merits of applying facet-based regularization to radio-interferometric images, which results in both computation time and memory requirement savings.\n
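The facet idea, stripped to its essence, can be sketched as applying a sparsity-promoting soft threshold only inside assumed source regions while forcing the empty background to zero. The region masks, image, and threshold levels below are illustrative; the paper embeds such per-facet priors inside a stochastic primal-dual reconstruction algorithm rather than applying them directly to a noisy image.

```python
import numpy as np

# Facet-based regularization sketch: soft-threshold inside source facets,
# force the empty background to zero. All values are illustrative.
rng = np.random.default_rng(5)
img = np.zeros((8, 8))
img[1:3, 1:3] = 5.0                      # bright compact source (facet 1)
img[5:7, 4:7] = 2.0                      # fainter extended source (facet 2)
noisy = img + 0.3 * rng.standard_normal(img.shape)

# Per-facet regions and soft-threshold levels (hypothetical choices).
facets = [((slice(1, 3), slice(1, 3)), 0.2),
          ((slice(5, 7), slice(4, 7)), 0.2)]

out = np.zeros_like(noisy)               # background regularized to zero
for region, lam in facets:
    patch = noisy[region]
    out[region] = np.sign(patch) * np.maximum(np.abs(patch) - lam, 0.0)
print(np.abs(out - img).max())
```

Restricting the sparsity prior to the facets is also what yields the memory savings the abstract mentions: only the occupied regions need to be stored and updated densely.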
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Iterative Equalization Based on Expectation Propagation: A Frequency Domain Approach.\n \n \n \n \n\n\n \n Şahin, S.; Cipriano, A. M.; Poulliat, C.; and Boucheret, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 932-936, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"IterativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553517,\n  author = {S. Şahin and A. M. Cipriano and C. Poulliat and M. Boucheret},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Iterative Equalization Based on Expectation Propagation: A Frequency Domain Approach},\n  year = {2018},\n  pages = {932-936},\n  abstract = {A novel single-carrier frequency domain equalization (SC-FDE) scheme is proposed, by extending recent developments in iterative receivers that use expectation propagation (EP), a message passing formalism for approximate Bayesian inference. Applying EP on the family of white multivariate Gaussian distributions allows deriving double-loop frequency domain receivers with single-tap equalizers. This structure enables low computational complexity implementation via Fast-Fourier transforms (FFT). Self-iterations between the equalizer and the demapper and global turbo iterations with the channel decoder provide numerous combinations for the performance and complexity trade-off. 
Numerical results show that the proposed structure outperforms alternative single-tap FDEs and that its achievable rates reach the channel symmetric information rate.},\n  keywords = {approximation theory;Bayes methods;channel coding;communication complexity;decoding;equalisers;fast Fourier transforms;frequency-domain analysis;Gaussian distribution;inference mechanisms;iterative methods;message passing;telecommunication computing;expectation propagation;fast-Fourier transforms;single-carrier frequency domain equalization scheme;alternative single-tap FDE;SC-FDE scheme;FFT;channel decoder;channel symmetric information rate;low computational complexity implementation;single-tap equalizers;double-loop frequency domain receivers;white multivariate Gaussian distributions;approximate Bayesian inference;message passing formalism;EP;iterative receivers;iterative equalization;global turbo iterations;Receivers;Equalizers;Decoding;Frequency-domain analysis;Message passing;Gaussian distribution;Damping},\n  doi = {10.23919/EUSIPCO.2018.8553517},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437281.pdf},\n}\n\n
\n
\n\n\n
\n A novel single-carrier frequency domain equalization (SC-FDE) scheme is proposed by extending recent developments in iterative receivers that use expectation propagation (EP), a message passing formalism for approximate Bayesian inference. Applying EP to the family of white multivariate Gaussian distributions allows deriving double-loop frequency domain receivers with single-tap equalizers. This structure enables low computational complexity implementation via fast Fourier transforms (FFTs). Self-iterations between the equalizer and the demapper and global turbo iterations with the channel decoder provide numerous combinations for the performance and complexity trade-off. Numerical results show that the proposed structure outperforms alternative single-tap FDEs and that its achievable rates reach the channel symmetric information rate.\n
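The single-tap frequency-domain equalization that such receivers build on can be sketched in a few lines: the FFT diagonalizes a circular channel, so equalization reduces to one complex multiplication per frequency bin. The 3-tap channel, QPSK mapping, and zero-forcing inverse below are illustrative assumptions; the paper's EP-based double-loop receiver adds self-iterations and turbo iterations on top of this structure.

```python
import numpy as np

# Minimal single-tap frequency-domain equalizer (zero-forcing) sketch.
rng = np.random.default_rng(1)
N = 64
h = np.array([1.0, 0.5, 0.2])            # assumed 3-tap channel (minimum phase)

# QPSK symbols and circular convolution with the channel (cyclic-prefix model).
bits = rng.integers(0, 2, (N, 2))
x = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
H = np.fft.fft(h, N)                     # channel frequency response
y = np.fft.ifft(np.fft.fft(x) * H)       # channel output (noiseless)

# One complex multiplication per frequency bin inverts the channel.
x_hat = np.fft.ifft(np.fft.fft(y) / H)
print(np.max(np.abs(x_hat - x)))
```

In the noisy case an MMSE single tap `H.conj() / (|H|^2 + sigma^2)` would replace the plain division; the EP receiver refines these taps iteratively using demapper feedback.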
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Comparison of Interactive Subjective Methodologies for Light Field Quality Evaluation.\n \n \n \n \n\n\n \n Viola, I.; and Ebrahimi, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1865-1869, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ComparisonPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553518,\n  author = {I. Viola and T. Ebrahimi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Comparison of Interactive Subjective Methodologies for Light Field Quality Evaluation},\n  year = {2018},\n  pages = {1865-1869},\n  abstract = {The recent advances in light field acquisition and rendering technologies have attracted a lot of interest from the scientific community. Due to their large amount of data, efficient compression of light field content is of paramount importance for storage and delivery. Quality evaluation plays a major role in assessing the impact of compression on visual perception. In particular, subjective methodologies for light field quality assessment must be carefully designed to ensure reliable results. In this paper, we present and compare two different methodologies to evaluate visual quality of light field contents. Both methodologies allow users to interact with the content and to freely decide which viewpoints to visualize. However, in the second methodology a brief animation of the available viewpoints is presented prior to interaction in order to ensure the same experience for all subjects. The time and patterns of interaction of both methods are compared and analyzed through a rigorous analysis. 
Conclusions provide useful insights for selecting the most appropriate light field evaluation methodology.},\n  keywords = {data compression;image coding;rendering (computer graphics);visual perception;light field quality assessment;visual quality;light field content;interactive subjective methodologies;light field acquisition;rendering technologies;light field evaluation;scientific community;quality evaluation;visual perception compression;Visualization;Animation;Correlation;Signal processing;Bit rate;Europe;Rendering (computer graphics)},\n  doi = {10.23919/EUSIPCO.2018.8553518},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437693.pdf},\n}\n\n
\n
\n\n\n
\n The recent advances in light field acquisition and rendering technologies have attracted considerable interest from the scientific community. Due to the large amount of data involved, efficient compression of light field content is of paramount importance for storage and delivery. Quality evaluation plays a major role in assessing the impact of compression on visual perception. In particular, subjective methodologies for light field quality assessment must be carefully designed to ensure reliable results. In this paper, we present and compare two different methodologies to evaluate visual quality of light field contents. Both methodologies allow users to interact with the content and to freely decide which viewpoints to visualize. However, in the second methodology a brief animation of the available viewpoints is presented prior to interaction in order to ensure the same experience for all subjects. The interaction times and patterns of both methods are rigorously compared and analyzed. Conclusions provide useful insights for selecting the most appropriate light field evaluation methodology.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Improved Protein Residue-Residue Contact Prediction Using Image Denoising Methods.\n \n \n \n \n\n\n \n Villegas-Morcillo, A.; Morales-Cordovilla, J. A.; Gomez, A. M.; and Sanchez, V.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1167-1171, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ImprovedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553519,\n  author = {A. Villegas-Morcillo and J. A. Morales-Cordovilla and A. M. Gomez and V. Sanchez},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Improved Protein Residue-Residue Contact Prediction Using Image Denoising Methods},\n  year = {2018},\n  pages = {1167-1171},\n  abstract = {A protein contact map is a simplified matrix representation of the protein structure, where the spatial proximity of two amino acid residues is reflected. Although the accurate prediction of protein inter-residue contacts from the amino acid sequence is an open problem, considerable progress has been made in recent years. This progress has been driven by the development of contact predictors that identify the coevolutionary events occurring in a protein multiple sequence alignment (MSA). However, it has been shown that these methods introduce Gaussian noise in the estimated contact map, making its reduction necessary. In this paper, we propose the use of two different Gaussian denoising approximations in order to enhance the protein contact estimation. These approaches are based on (i) sparse representations over learned dictionaries, and (ii) deep residual convolutional neural networks. 
The results highlight that the residual learning strategy allows a better reconstruction of the contact map, thus improving contact predictions.},\n  keywords = {bioinformatics;convolution;feedforward neural nets;Gaussian noise;image denoising;learning (artificial intelligence);matrix algebra;molecular biophysics;proteins;sequences;improved protein residue-residue contact prediction;image denoising methods;protein contact map;simplified matrix representation;protein structure;acid residues;protein inter-residue contacts;amino acid sequence;contact predictors;protein multiple sequence alignment;protein contact estimation;deep residual convolutional neural networks;residual learning strategy;Gaussian denoising approximations;Gaussian noise;sparse representations;Proteins;Dictionaries;Amino acids;Noise reduction;Noise measurement;Convolutional neural networks;Image reconstruction;Protein Contact Map;Evolutionary Coupling;Image Denoising;Sparse Representations;Dictionary Learning;Deep Convolutional Neural Networks;Residual Learning},\n  doi = {10.23919/EUSIPCO.2018.8553519},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437084.pdf},\n}\n\n
\n
\n\n\n
\n A protein contact map is a simplified matrix representation of the protein structure that reflects the spatial proximity of pairs of amino acid residues. Although the accurate prediction of protein inter-residue contacts from the amino acid sequence is an open problem, considerable progress has been made in recent years. This progress has been driven by the development of contact predictors that identify the coevolutionary events occurring in a protein multiple sequence alignment (MSA). However, it has been shown that these methods introduce Gaussian noise in the estimated contact map, making noise reduction necessary. In this paper, we propose the use of two different Gaussian denoising approximations in order to enhance the protein contact estimation. These approaches are based on (i) sparse representations over learned dictionaries, and (ii) deep residual convolutional neural networks. The results highlight that the residual learning strategy allows a better reconstruction of the contact map, thus improving contact predictions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-Channel Non-Negative Matrix Factorization for Overlapped Acoustic Event Detection.\n \n \n \n \n\n\n \n Giannoulis, P.; Potamianos, G.; and Maragos, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 857-861, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-ChannelPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553520,\n  author = {P. Giannoulis and G. Potamianos and P. Maragos},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-Channel Non-Negative Matrix Factorization for Overlapped Acoustic Event Detection},\n  year = {2018},\n  pages = {857-861},\n  abstract = {In this paper, we propose two multi-channel extensions of non-negative matrix factorization (NMF) for acoustic event detection. The first method performs decision fusion on the activation matrices produced from independent single-channel sparse- NMF solutions. The second method is a novel extension of single-channel NMF, incorporating in its objective function a multi-channel reconstruction error and a multi-channel class sparsity term on the activation matrices produced. This class sparsity constraint is used to guarantee that the NMF solutions at a given time will contain only a few classes activated across all channels. This indirectly forces the channels to seek solutions on which they agree, thus increasing robustness. We evaluate the proposed methods on a multi-channel database of overlapping acoustic events and various background noises collected inside a smart office space. 
Both proposed methods outperform the single-channel baseline, with the second approach achieving a 15.4 % relative error reduction in terms of F-score.},\n  keywords = {acoustic signal detection;matrix decomposition;independent single-channel sparse-NMF solutions;decision fusion;single-channel baseline;acoustic events;multichannel database;class sparsity constraint;multichannel class sparsity term;multichannel reconstruction error;activation matrices;multichannel extensions;overlapped acoustic event detection;multichannel nonnegative matrix factorization;Dictionaries;Acoustics;Microphones;Databases;Sparse matrices;Europe;Signal processing;Acoustic event detection;multi-channel fusion;non-negative matrix factorization},\n  doi = {10.23919/EUSIPCO.2018.8553520},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439305.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose two multi-channel extensions of non-negative matrix factorization (NMF) for acoustic event detection. The first method performs decision fusion on the activation matrices produced from independent single-channel sparse-NMF solutions. The second method is a novel extension of single-channel NMF, incorporating in its objective function a multi-channel reconstruction error and a multi-channel class sparsity term on the activation matrices produced. This class sparsity constraint is used to guarantee that the NMF solutions at a given time will contain only a few classes activated across all channels. This indirectly forces the channels to seek solutions on which they agree, thus increasing robustness. We evaluate the proposed methods on a multi-channel database of overlapping acoustic events and various background noises collected inside a smart office space. Both proposed methods outperform the single-channel baseline, with the second approach achieving a 15.4% relative error reduction in terms of F-score.\n
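The first method's decision fusion can be sketched by factoring each channel's magnitude spectrogram against a shared fixed dictionary and averaging the resulting activation matrices. The random dictionary, the plain multiplicative update, and the synthetic channels below are all illustrative simplifications; the paper uses sparse-NMF with class-wise dictionaries and, in its second method, an explicit class-sparsity term.

```python
import numpy as np

# Decision-fusion sketch: per-channel NMF activations against a shared
# dictionary, averaged across channels. Sizes and data are synthetic.
rng = np.random.default_rng(2)
F, T, K, C = 32, 40, 4, 3                # freq bins, frames, atoms, channels
W = np.abs(rng.standard_normal((F, K)))  # shared spectral dictionary (fixed)

def activations(V, W, iters=200, eps=1e-9):
    """Multiplicative-update NMF activations H for a fixed dictionary W."""
    H = np.abs(rng.standard_normal((W.shape[1], V.shape[1])))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ (W @ H) + eps)
    return H

# Each channel observes the same events with channel-dependent gain + noise.
H_true = np.abs(rng.standard_normal((K, T)))
channels = [np.maximum((0.5 + c) * W @ H_true
                       + 0.01 * np.abs(rng.standard_normal((F, T))), 1e-9)
            for c in range(C)]

# Fuse by averaging the per-channel activation matrices.
H_fused = np.mean([activations(V, W) for V in channels], axis=0)
corr = np.corrcoef(H_fused.ravel(), H_true.ravel())[0, 1]
print(corr)
```

Averaging the activations is the simplest fusion rule; the fused activations correlate strongly with the true event activations even though each channel sees a different gain.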
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Cnn-Gru Approach to Capture Time-Frequency Pattern Interdependence for Snore Sound Classification.\n \n \n \n \n\n\n \n Wang, J.; Strömfeli, H.; and Schuller, B. W.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 997-1001, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553521,\n  author = {J. Wang and H. Strömfeli and B. W. Schuller},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Cnn-Gru Approach to Capture Time-Frequency Pattern Interdependence for Snore Sound Classification},\n  year = {2018},\n  pages = {997-1001},\n  abstract = {In this work, we propose an architecture named DualCon-vGRU Network to overcome the INTERPEECH 2017 Com-ParE Snoring sub-challenge. In this network, we devise two new models: the Dual Convolutional Layer, which is applied to a spectrogram to extract features; and the Channel Slice Model, which reprocess the extracted features. The first amalgamates an ensemble of information collected from two types of convolutional operations, with differing kernel dimension on the frequency axis and equal dimension on the time axis. Secondly, the dependencies within the convolutional layer channel axes are learnt, by feeding channel slices into a Gated Recurrent Unit (GRU) layer. By taking this approach, convolutional layers can be connected to sequential models without the use of fully connected layers. Compared with other state-of-the-art methods delivered to INTERPEECH 2017 ComParE Snoring sub-challenge, our method ranks 5th on performance of test data. Moreover, we are the only competitor to train a deep learning model solely on the provided training data, except for Baseline. 
The performance of our model exceeds the baseline too much.},\n  keywords = {convolution;feature extraction;learning (artificial intelligence);convolutional operations;kernel dimension;frequency axis;equal dimension;convolutional layer channel axes;channel slices;convolutional layers;INTERPEECH 2017 ComParE Snoring sub-challenge;deep learning model;CNN-GRU approach;time-frequency pattern interdependence;snore sound classification;DualCon-vGRU Network;INTERPEECH 2017 Com-ParE Snoring sub-challenge;spectrogram;channel slice model;dual convolutional layer;gated recurrent unit layer;Feature extraction;Convolution;Spectrogram;Kernel;Tensile stress;Sleep apnea;Support vector machines;DualConvGRU Network;Dual Convolutional Layers;Channel Slice Model},\n  doi = {10.23919/EUSIPCO.2018.8553521},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438692.pdf},\n}\n\n
\n
\n\n\n
\n In this work, we propose an architecture named DualConvGRU Network to address the INTERSPEECH 2017 ComParE Snoring sub-challenge. In this network, we devise two new models: the Dual Convolutional Layer, which is applied to a spectrogram to extract features; and the Channel Slice Model, which reprocesses the extracted features. The first amalgamates an ensemble of information collected from two types of convolutional operations, with differing kernel dimensions on the frequency axis and equal dimensions on the time axis. The second learns the dependencies along the convolutional layer's channel axis by feeding channel slices into a Gated Recurrent Unit (GRU) layer. By taking this approach, convolutional layers can be connected to sequential models without the use of fully connected layers. Compared with other state-of-the-art methods submitted to the INTERSPEECH 2017 ComParE Snoring sub-challenge, our method ranks 5th in test-set performance. Moreover, apart from the baseline, we are the only competitor to train a deep learning model solely on the provided training data. The performance of our model far exceeds the baseline.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust Data Hiding Scheme for Compressively Sensed Signals.\n \n \n \n \n\n\n \n Yamaç, M.; Sankur, B.; and Gabbouj, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1760-1764, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553522,\n  author = {M. Yamaç and B. Sankur and M. Gabbouj},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Robust Data Hiding Scheme for Compressively Sensed Signals},\n  year = {2018},\n  pages = {1760-1764},\n  abstract = {We consider the problem of linear data hiding or watermark embedding directly onto compressively sensed measurements (CSMs). In our encoding and decoding scheme, we seek exact recovery of concealed data and a small reconstruction error for a sparse signal under the additive noise model. We propose an efficient Alternating Direction Method of Multipliers (ADMM) based decoding algorithm and we show through experimental results that the proposed decoding scheme is more robust against additive noise compared to competing algorithms in the literature.},\n  keywords = {compressed sensing;data encapsulation;decoding;encoding;signal reconstruction;watermarking;compressively sensed signals;linear data hiding;decoding scheme;reconstruction error;sparse signal;additive noise model;data hiding scheme;encoding scheme;alternating direction method of multipliers;compressively sensed measurements;watermark embedding;Signal processing algorithms;Decoding;Watermarking;Additive noise;Encryption;Compressed sensing;Compressive Sensing;Data Hiding;Watermarking;Image Encryption;Privacy Preserving},\n  doi = {10.23919/EUSIPCO.2018.8553522},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439292.pdf},\n}\n\n
\n
\n\n\n
\n We consider the problem of linear data hiding or watermark embedding directly onto compressively sensed measurements (CSMs). In our encoding and decoding scheme, we seek exact recovery of concealed data and a small reconstruction error for a sparse signal under the additive noise model. We propose an efficient Alternating Direction Method of Multipliers (ADMM) based decoding algorithm and we show through experimental results that the proposed decoding scheme is more robust against additive noise compared to competing algorithms in the literature.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n DC-offset Estimation of Multiple CW Micro Doppler Radar.\n \n \n \n \n\n\n \n Kim, D. K.; and Kim, Y. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2400-2404, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DC-offsetPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553524,\n  author = {D. K. Kim and Y. J. Kim},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {DC-offset Estimation of Multiple CW Micro Doppler Radar},\n  year = {2018},\n  pages = {2400-2404},\n  abstract = {Dc-offset estimation of quadrature continuous-wave (CW) radar has been studied for years. Studies have shown that the estimation error increases when target movement with respect to the radar is small. This paper presents a method that uses multiple simultaneous CW frequencies for the dc-offset estimation, which makes the dc-offset estimation easy in contrast to the conventional quadrature CW radar. A dc-offset estimation method using the multiple CW frequencies is presented to demonstrate that the multiple CW frequencies provide sufficient information for the dc-offset estimation.},\n  keywords = {CW radar;Doppler radar;frequency estimation;estimation error;target movement;multiple simultaneous CW frequencies;conventional quadrature CW radar;DC-offset estimation;multiple CW microdoppler radar;quadrature continuous-wave radar;Estimation;Doppler radar;Demodulation;Frequency estimation;Receivers;Reflection;Center estimation;Micro Doppler radar;De-offset},\n  doi = {10.23919/EUSIPCO.2018.8553524},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437396.pdf},\n}\n\n
\n
\n\n\n
\n DC-offset estimation of quadrature continuous-wave (CW) radar has been studied for years. Studies have shown that the estimation error increases when target movement with respect to the radar is small. This paper presents a method that uses multiple simultaneous CW frequencies for the dc-offset estimation, which makes the estimation easier than with a conventional quadrature CW radar. A dc-offset estimation method using the multiple CW frequencies is presented to demonstrate that they provide sufficient information for the dc-offset estimation.\n
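For context, the conventional single-frequency dc-offset estimate that the paper contrasts against fits a circle to the IQ trajectory: target motion traces an arc whose centre is the dc offset. The sketch below uses a Kåsa least-squares circle fit on synthetic data (offset, radius, arc span, and noise level are all illustrative). When the arc shrinks because the target barely moves, this fit degrades, which matches the limitation the abstract describes.

```python
import numpy as np

# Circle-fitting sketch of the conventional quadrature CW radar dc-offset
# estimate: the IQ arc's centre is the dc offset. Values are illustrative.
rng = np.random.default_rng(3)
dc = np.array([0.7, -0.3])               # true dc offset (arc centre)
r = 1.0                                  # arc radius (signal amplitude)
phi = np.linspace(0.2, 2.4, 120)         # phase swept by target motion
I = dc[0] + r * np.cos(phi) + 0.01 * rng.standard_normal(phi.size)
Q = dc[1] + r * np.sin(phi) + 0.01 * rng.standard_normal(phi.size)

# Kasa least-squares fit: solve [2I 2Q 1][a b c]^T = I^2 + Q^2,
# giving centre (a, b) and radius sqrt(c + a^2 + b^2).
A = np.column_stack([2 * I, 2 * Q, np.ones_like(I)])
b = I**2 + Q**2
a_hat, b_hat, _ = np.linalg.lstsq(A, b, rcond=None)[0]
print(a_hat, b_hat)
```

Shrinking the `phi` span in this sketch (a nearly stationary target) makes the normal equations ill-conditioned and the centre estimate unstable, which is the scenario where the multi-frequency approach helps.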
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Embedding Intelligent Features for Vibration-Based Machine Condition Monitoring.\n \n \n \n \n\n\n \n Reich, C.; Mansour, A.; and van Laerhoven , K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 371-375, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EmbeddingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553525,\n  author = {C. Reich and A. Mansour and K. {van Laerhoven}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Embedding Intelligent Features for Vibration-Based Machine Condition Monitoring},\n  year = {2018},\n  pages = {371-375},\n  abstract = {Today's demands regarding workpiece quality in cutting machine tool processing require automated monitoring of both machine condition and the cutting process. Currently, best-performing monitoring approaches rely on high-frequency acoustic emission (AE) sensor data and definition of advanced features, which involve complex computations. This approach is challenging for machine monitoring via embedded sensor systems with constrained computational power and energy budget. To cope with constrained energy, we rely on data recording with microelectromechanical system (MEMS) vibration sensors, which rely on lower-frequency sampling. To clarify whether these lower-frequency signals bear information for typical machine monitoring prediction tasks, we evaluate data for the most generic machine monitoring task of tool condition monitoring (TCM). To cope with computational complexity of advanced features, we introduce two intelligent preprocessing algorithms. First, we split non-stationary signals of recurrent structure into similar segments. Then, we identify most discriminative spectral differences in the segmented signals that allow for best separation of classes for the given TCM task. Subsequent feature extraction only in most relevant signal segments and spectral regions enables high expressiveness even for simple features. Extensive evaluation of the outlined approach on multiple data sets of different combinations of cutting machine tools, tool types and workpieces confirms its sensibility. Intelligent preprocessing enables reliable identification of stationary segments and most discriminative frequency bands. 
With subsequent extraction of simple but tailor-made features in these spectral-temporal regions of interest (Rols), TCM typically framed as multi feature classification problem can be converted to a single feature threshold comparison problem with an average F1 score of 97.89%.},\n  keywords = {computerised monitoring;condition monitoring;fault diagnosis;feature extraction;machine tools;mechanical engineering computing;signal processing;vibrations;intelligent features;vibration-based machine condition monitoring;workpiece quality;machine tool processing;cutting process;monitoring approaches;high-frequency acoustic emission sensor data;advanced features;complex computations;embedded sensor systems;constrained computational power;energy budget;constrained energy;microelectromechanical system vibration sensors;lower-frequency sampling;lower-frequency signals;typical machine monitoring prediction tasks;generic machine monitoring task;tool condition monitoring;computational complexity;intelligent preprocessing algorithms;nonstationary signals;similar segments;discriminative spectral differences;segmented signals;subsequent feature extraction;relevant signal segments;simple features;outlined approach;multiple data sets;machine tools;stationary segments;discriminative frequency bands;TCM typically framed;multifeature classification problem;single feature threshold comparison problem;TCM task;Feature extraction;Wheels;Hidden Markov models;Monitoring;Signal processing algorithms;Task analysis;Clustering algorithms;Tool condition monitoring;non-stationary signals;segmentation;mixture model;Internet of things (IoT)},\n  doi = {10.23919/EUSIPCO.2018.8553525},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437158.pdf},\n}\n\n
\n
\n\n\n
\n Today's demands regarding workpiece quality in cutting machine tool processing require automated monitoring of both machine condition and the cutting process. Currently, best-performing monitoring approaches rely on high-frequency acoustic emission (AE) sensor data and definition of advanced features, which involve complex computations. This approach is challenging for machine monitoring via embedded sensor systems with constrained computational power and energy budget. To cope with constrained energy, we rely on data recording with microelectromechanical system (MEMS) vibration sensors, which rely on lower-frequency sampling. To clarify whether these lower-frequency signals bear information for typical machine monitoring prediction tasks, we evaluate data for the most generic machine monitoring task of tool condition monitoring (TCM). To cope with computational complexity of advanced features, we introduce two intelligent preprocessing algorithms. First, we split non-stationary signals of recurrent structure into similar segments. Then, we identify most discriminative spectral differences in the segmented signals that allow for best separation of classes for the given TCM task. Subsequent feature extraction only in most relevant signal segments and spectral regions enables high expressiveness even for simple features. Extensive evaluation of the outlined approach on multiple data sets of different combinations of cutting machine tools, tool types and workpieces confirms its suitability. Intelligent preprocessing enables reliable identification of stationary segments and most discriminative frequency bands. With subsequent extraction of simple but tailor-made features in these spectral-temporal regions of interest (RoIs), TCM, typically framed as a multi-feature classification problem, can be converted into a single-feature threshold comparison problem with an average F1 score of 97.89%.\n
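The final "single-feature threshold comparison" step can be sketched by sweeping a threshold over one scalar feature and keeping the value that maximises F1. The feature values and labels below are synthetic (two Gaussian clusters standing in for worn vs. fresh tools); the paper derives its actual feature from the segmented vibration spectra.

```python
import numpy as np

# Sweep a decision threshold over one scalar feature, pick the F1-optimal one.
# Feature values and labels are synthetic stand-ins for TCM data.
rng = np.random.default_rng(4)
worn = rng.normal(3.0, 0.5, 100)         # feature values, worn tool (positive)
fresh = rng.normal(1.0, 0.5, 100)        # feature values, fresh tool
x = np.concatenate([worn, fresh])
y = np.concatenate([np.ones(100), np.zeros(100)])

def f1(thr):
    """F1 score of the classifier 'worn if feature > thr'."""
    pred = (x > thr).astype(float)
    tp = np.sum((pred == 1) & (y == 1))
    fp = np.sum((pred == 1) & (y == 0))
    fn = np.sum((pred == 0) & (y == 1))
    return 2 * tp / (2 * tp + fp + fn + 1e-12)

thresholds = np.linspace(x.min(), x.max(), 400)
best = thresholds[int(np.argmax([f1(t) for t in thresholds]))]
print(best, f1(best))
```

A single threshold comparison like this is cheap enough for the constrained embedded sensor systems the abstract targets, which is the point of collapsing the multi-feature problem.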
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Refined Generalized Multivariate Multiscale Fuzzy Entropy: A Preliminary Study on Multichannel Physiological Complexity During Postural Changes.\n \n \n \n \n\n\n \n Nardelli, M.; Scilingo, E. P.; and Valenza, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 291-295, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RefinedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553527,\n  author = {M. Nardelli and E. P. Scilingo and G. Valenza},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Refined Generalized Multivariate Multiscale Fuzzy Entropy: A Preliminary Study on Multichannel Physiological Complexity During Postural Changes},\n  year = {2018},\n  pages = {291-295},\n  abstract = {We propose a novel approach to characterize the complexity of multivariate physiological processes over multiple time scales, which hereinafter we call Refined Generalized Multivariate Multiscale Fuzzy Entropy (ReGeM-MFE). In this preliminary study, we evaluate the effectiveness of this methodology in discerning different levels of complexity in Autonomic Nervous System (ANS) dynamics during active stand-up, considering a bivariate process comprising heart rate variability and blood pressure variability series. Results show that, using mean-and variance-based ReGeM-MFE throughout different coarse-graining steps, it is possible to statistically discern the resting and stand-up conditions. 
Compared with the previously proposed Refined Composite Multivariate Multiscale Fuzzy Entropy, we demonstrate that the proposed ReGeM-MFE consistently out-performs this metrics.},\n  keywords = {blood pressure measurement;electrocardiography;entropy;fuzzy set theory;medical signal processing;neurophysiology;bivariate process;active stand-up;postural changes;refined generalized multivariate multiscale fuzzy entropy;heart rate variability;Autonomic Nervous System dynamics;multiple time scales;multivariate physiological processes;multichannel physiological complexity;ReGeM-MFE consistently out-performs this metrics;Refined Composite Multivariate Multiscale Fuzzy Entropy;variance-based ReGeM-MFE;blood pressure variability series;Entropy;Complexity theory;Physiology;Time series analysis;Standards;Biomedical monitoring;Heart rate variability;Complexity;Multivariate Multiscale Entropy;Generalized Multiscale Entropy;Fuzzy Entropy;Autonomic Nervous System;Heart Rate Variability;Blood Pressure},\n  doi = {10.23919/EUSIPCO.2018.8553527},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437021.pdf},\n}\n\n
\n
\n\n\n
\n We propose a novel approach to characterize the complexity of multivariate physiological processes over multiple time scales, which hereinafter we call Refined Generalized Multivariate Multiscale Fuzzy Entropy (ReGeM-MFE). In this preliminary study, we evaluate the effectiveness of this methodology in discerning different levels of complexity in Autonomic Nervous System (ANS) dynamics during active stand-up, considering a bivariate process comprising heart rate variability and blood pressure variability series. Results show that, using mean- and variance-based ReGeM-MFE throughout different coarse-graining steps, it is possible to statistically discern the resting and stand-up conditions. Compared with the previously proposed Refined Composite Multivariate Multiscale Fuzzy Entropy, we demonstrate that the proposed ReGeM-MFE consistently outperforms it.\n
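The mean- and variance-based coarse-graining step that underlies multiscale entropy measures of this kind can be sketched as follows; the toy bivariate series and the function name are illustrative, and the entropy computation itself is omitted.

```python
import numpy as np

def coarse_grain(x, scale, stat="mean"):
    """Coarse-grain a 1-D series at a given scale: the mean of
    non-overlapping windows (classical multiscale entropy) or their
    variance (generalized multiscale entropy)."""
    n = len(x) // scale
    windows = np.asarray(x[: n * scale]).reshape(n, scale)
    if stat == "mean":
        return windows.mean(axis=1)
    if stat == "var":
        return windows.var(axis=1)
    raise ValueError(stat)

# toy bivariate process standing in for HRV and blood-pressure series
rng = np.random.default_rng(0)
hrv = rng.normal(size=300)
bpv = rng.normal(size=300)
# one coarse-graining step at scale 3, applied channel-wise
cg = [coarse_grain(s, 3, stat="mean") for s in (hrv, bpv)]
```

A multiscale entropy profile is then obtained by computing a (fuzzy) entropy on the coarse-grained multichannel series for each scale.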
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Regularizing Autoencoder-Based Matrix Completion Models via Manifold Learning.\n \n \n \n \n\n\n \n Nguyen, D. M.; Tsiligianni, E.; Calderbank, R.; and Deligiannis, N.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1880-1884, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RegularizingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553528,\n  author = {D. M. Nguyen and E. Tsiligianni and R. Calderbank and N. Deligiannis},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Regularizing Autoencoder-Based Matrix Completion Models via Manifold Learning},\n  year = {2018},\n  pages = {1880-1884},\n  abstract = {Autoencoders are popular among neural-network-based matrix completion models due to their ability to retrieve potential latent factors from the partially observed matrices. Nevertheless, when training data is scarce their performance is significantly degraded due to overfitting. In this paper, we mitigate overfitting with a data-dependent regularization technique that relies on the principles of multi-task learning. Specifically, we propose an autoencoder-based matrix completion model that performs prediction of the unknown matrix values as a main task, and manifold learning as an auxiliary task. The latter acts as an inductive bias, leading to solutions that generalize better. The proposed model outperforms the existing autoencoder-based models designed for matrix completion, achieving high reconstruction accuracy in well-known datasets.},\n  keywords = {learning (artificial intelligence);least squares approximations;matrix algebra;neural nets;existing autoencoder-based models;unknown matrix values;multitask learning;data-dependent regularization technique;neural-network-based matrix completion models;autoencoder-based matrix completion model;manifold learning;Task analysis;Manifolds;Neural networks;Sparse matrices;Decoding;Signal processing;Training;matrix completion;deep neural network;auto encoder;multi-task learning;regularization},\n  doi = {10.23919/EUSIPCO.2018.8553528},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437286.pdf},\n}\n\n
\n
\n\n\n
\n Autoencoders are popular among neural-network-based matrix completion models due to their ability to retrieve potential latent factors from partially observed matrices. Nevertheless, when training data is scarce, their performance is significantly degraded due to overfitting. In this paper, we mitigate overfitting with a data-dependent regularization technique that relies on the principles of multi-task learning. Specifically, we propose an autoencoder-based matrix completion model that performs prediction of the unknown matrix values as a main task, and manifold learning as an auxiliary task. The latter acts as an inductive bias, leading to solutions that generalize better. The proposed model outperforms existing autoencoder-based models designed for matrix completion, achieving high reconstruction accuracy on well-known datasets.\n
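One common way to realize a manifold-learning auxiliary task is a graph-smoothness penalty on the row embeddings, added to the main reconstruction loss. The sketch below shows that multi-task objective in isolation; the affinity matrix `W`, the weight `lam` and the function name are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def multitask_loss(X_obs, mask, X_pred, Z, W, lam=0.1):
    """Main task: squared error on the observed entries only.
    Auxiliary task: embeddings Z of rows with high affinity W should
    stay close (graph-smoothness manifold term, acting as an
    inductive bias / regularizer)."""
    rec = np.sum(((X_pred - X_obs) * mask) ** 2) / mask.sum()
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # pairwise dists
    man = (W * d2).sum() / 2.0
    return rec + lam * man
```

During training the two terms pull in different directions: the reconstruction term fits the observed entries, while the manifold term discourages embeddings of similar rows from drifting apart.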
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Adaptive Guided Upsampling For Color Image Demosaicking.\n \n \n \n \n\n\n \n Ueki, Y.; Yamaguchi, T.; and Ikehara, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2240-2244, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AdaptivePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553529,\n  author = {Y. Ueki and T. Yamaguchi and M. Ikehara},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Adaptive Guided Upsampling For Color Image Demosaicking},\n  year = {2018},\n  pages = {2240-2244},\n  abstract = {Color demosaicking is a significant step that enables digital cameras to recover full pixels from a raw image. In this paper, we propose an efficient color image demosaicking model based on guided upsampling. Guided upsampling using residual interpolation has been prominent in comparison to other methods. In the present study, we developed an adaptive guided upsampling method without RI. Our approach consists of adaptive guided upsampling and color-component difference interpolation. We utilize color difference interpolation and guided upsampling for its preprocessing in the G channel interpolation and interpolate the R and B channel with guided upsampling using recovered G channel. The results of the experiments prove that our approach is superior to RI-based methods on both objective and subjective performance for the Kodak and the IMAX datasets.},\n  keywords = {cameras;image colour analysis;image segmentation;interpolation;sampling methods;residual interpolation;adaptive guided upsampling method;color-component difference interpolation;pixels recovery;G channel interpolation;digital cameras;color image demosaicking model;Interpolation;Image color analysis;Color;Laplace equations;Signal processing algorithms;Europe;Signal processing;Demosaicking;guided filter;Bayer color filter array;image processing},\n  doi = {10.23919/EUSIPCO.2018.8553529},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437633.pdf},\n}\n\n
\n
\n\n\n
\n Color demosaicking is a significant step that enables digital cameras to recover full-color pixels from a raw image. In this paper, we propose an efficient color image demosaicking model based on guided upsampling. Guided upsampling using residual interpolation (RI) has been prominent in comparison to other methods. In the present study, we developed an adaptive guided upsampling method without RI. Our approach consists of adaptive guided upsampling and color-component difference interpolation. We utilize color difference interpolation and guided upsampling for preprocessing in the G channel interpolation, and interpolate the R and B channels with guided upsampling using the recovered G channel. The results of the experiments show that our approach is superior to RI-based methods in both objective and subjective performance on the Kodak and IMAX datasets.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Classification by Re-generation: Towards Classification Based on Variational Inference.\n \n \n \n \n\n\n \n Rezaeifar, S.; Taran, O.; and Voloshynovskiy, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2005-2009, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ClassificationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553530,\n  author = {S. Rezaeifar and O. Taran and S. Voloshynovskiy},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Classification by Re-generation: Towards Classification Based on Variational Inference},\n  year = {2018},\n  pages = {2005-2009},\n  abstract = {As Deep Neural Networks (DNNs) are considered the state-of-the-art in many classification tasks, the question of their semantic generalizations has been raised. To address semantic interpretability of learned features, we introduce a novel idea of classification by re-generation based on variational autoencoder (VAE) in which a separate encoder-decoder pair of VAE is trained for each class. Moreover, the proposed architecture overcomes the scalability issue in current DNN networks as there is no need to re-train the whole network with the addition of new classes and it can be done for each class separately. We also introduce a criterion based on Kullback-Leibler divergence to reject doubtful examples. This rejection criterion should improve the trust in the obtained results and can be further exploited to reject adversarial examples.},\n  keywords = {encoding;learning (artificial intelligence);neural nets;pattern classification;deep neural networks;Kullback-Leibler divergence;current DNN networks;scalability issue;encoder-decoder pair;VAE;variational autoencoder;learned features;semantic interpretability;semantic generalizations;classification tasks;variational inference;Probes;Training;Scalability;Image reconstruction;Semantics;Europe;Signal processing;classification;variational auto encoder;regeneration;rejection},\n  doi = {10.23919/EUSIPCO.2018.8553530},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437363.pdf},\n}\n\n
\n
\n\n\n
\n As Deep Neural Networks (DNNs) are considered the state-of-the-art in many classification tasks, the question of their semantic generalization has been raised. To address the semantic interpretability of learned features, we introduce a novel idea of classification by re-generation based on the variational autoencoder (VAE), in which a separate encoder-decoder pair of a VAE is trained for each class. Moreover, the proposed architecture overcomes the scalability issue in current DNNs, as there is no need to re-train the whole network when new classes are added; training can be done for each class separately. We also introduce a criterion based on the Kullback-Leibler divergence to reject doubtful examples. This rejection criterion should improve trust in the obtained results and can be further exploited to reject adversarial examples.\n
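The classify-by-re-generation idea can be illustrated with a deliberately lightweight stand-in: one rank-1 PCA "encoder-decoder" per class instead of a VAE, classification by smallest reconstruction error, and a simple error threshold playing the role of the paper's KL-based rejection criterion. Everything below (data, threshold, names) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
# toy data: each class lives near its own 1-D subspace of R^5
dirs = np.eye(5)[:2]                      # class 0 -> e1, class 1 -> e2

def sample(cls, n):
    a = rng.uniform(1.0, 2.0, n) * rng.choice([-1, 1], n)
    return np.outer(a, dirs[cls]) + 0.05 * rng.normal(size=(n, 5))

train = [sample(c, 200) for c in (0, 1)]
# per-class "decoder": first principal axis fitted on that class only
axes = [np.linalg.svd(Tr, full_matrices=False)[2][0] for Tr in train]

def reconstruct(x, u):
    return (x @ u) * u                    # project-and-re-generate

def classify(x, reject_thresh=0.5):
    errs = [np.sum((x - reconstruct(x, u)) ** 2) for u in axes]
    k = int(np.argmin(errs))
    # rejection: if no class re-generates x well, refuse to decide
    return -1 if errs[k] > reject_thresh else k

test = [(sample(c, 1)[0], c) for c in (0, 1) for _ in range(50)]
acc = np.mean([classify(x) == c for x, c in test])
outlier = 1.5 * np.eye(5)[3]              # far from both class subspaces
```

Adding a new class here only requires fitting one more per-class model, which mirrors the scalability argument in the abstract.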
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Simultaneous Measurement of Spatial Room Impulse Responses from Multiple Sound Sources Using a Continuously Moving Microphone.\n \n \n \n \n\n\n \n Hahn, N.; and Spors, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2180-2184, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SimultaneousPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553532,\n  author = {N. Hahn and S. Spors},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Simultaneous Measurement of Spatial Room Impulse Responses from Multiple Sound Sources Using a Continuously Moving Microphone},\n  year = {2018},\n  pages = {2180-2184},\n  abstract = {Continuous measurement techniques aim at identifying a large number of impulse responses from a signal captured by a moving microphone. In a recently proposed method, each sample of the captured signal is interpreted as a spatio-temporal sample of the sound field, and the individual impulse responses are computed by means of spatial interpolation. In the present study, the approach is extended towards multichannel cases. The superimposed sound field reproduced by multiple sources is recorded with one microphone, and the individual impulse responses are identified. To this end, the sound sources are excited with the same periodic perfect sequence, but a different amount of temporal shift is applied so that the identified impulse responses of different sources do not overlap. An anti-aliasing condition for the microphone speed is derived which is computed based on the spatial bandwidth of the sound field.},\n  keywords = {acoustic field;microphones;signal sampling;sound reproduction;transient response;spatial interpolation;superimposed sound field;multiple sources;microphone speed;spatial bandwidth;spatial room impulse responses;multiple sound sources;continuously moving microphone;continuous measurement techniques;captured signal;spatio-temporal sample;Microphones;Interpolation;Europe;Signal processing;Loudspeakers;Receivers;Bandwidth},\n  doi = {10.23919/EUSIPCO.2018.8553532},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437100.pdf},\n}\n\n
\n
\n\n\n
\n Continuous measurement techniques aim at identifying a large number of impulse responses from a signal captured by a moving microphone. In a recently proposed method, each sample of the captured signal is interpreted as a spatio-temporal sample of the sound field, and the individual impulse responses are computed by means of spatial interpolation. In the present study, the approach is extended towards multichannel cases. The superimposed sound field reproduced by multiple sources is recorded with one microphone, and the individual impulse responses are identified. To this end, the sound sources are excited with the same periodic perfect sequence, but a different amount of temporal shift is applied so that the identified impulse responses of different sources do not overlap. An anti-aliasing condition for the microphone speed, based on the spatial bandwidth of the sound field, is derived.\n
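The shifted-perfect-sequence trick in the abstract above can be verified numerically for a static microphone: because a perfect sequence has an impulse-like periodic autocorrelation, circular cross-correlation with the base sequence recovers each source's impulse response, and the temporal shift places the responses in disjoint halves of the lag axis. The sequence construction and the toy impulse responses below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
# real "perfect sequence": flat magnitude spectrum with random phases
# (Hermitian-symmetric), so its periodic autocorrelation is a unit impulse
phi = rng.uniform(0, 2 * np.pi, N)
phi[0] = 0.0
phi[N // 2] = 0.0
phi[N // 2 + 1:] = -phi[1:N // 2][::-1]
s = np.fft.ifft(np.exp(1j * phi)).real

# two sources with different (illustrative) impulse responses
h1 = np.zeros(N); h1[:4] = [1.0, 0.5, 0.25, 0.125]
h2 = np.zeros(N); h2[:3] = [0.8, -0.4, 0.2]

# source 2 plays the SAME sequence, circularly shifted by half a period
shift = N // 2
y = (np.fft.ifft(np.fft.fft(s) * np.fft.fft(h1)).real
     + np.fft.ifft(np.fft.fft(np.roll(s, shift)) * np.fft.fft(h2)).real)

# identification: circular cross-correlation with the base sequence
# yields h1 at lags [0, shift) and h2 at lags [shift, N)
g = np.fft.ifft(np.fft.fft(y) * np.conj(np.fft.fft(s))).real
```

The non-overlap condition is simply that each impulse response must be shorter than the applied shift, which is why the shift budget limits the number of simultaneous sources.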
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Radio Imaging with Information Field Theory.\n \n \n \n \n\n\n \n Arras, P.; Knollrnüller, J.; Junklewitz, H.; and Enßlin, T. A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2683-2687, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RadioPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553533,\n  author = {P. Arras and J. Knollrnüller and H. Junklewitz and T. A. Enßlin},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Radio Imaging with Information Field Theory},\n  year = {2018},\n  pages = {2683-2687},\n  abstract = {Data from radio interferometers provide a substantial challenge for statisticians. It is incomplete, noise-dominated and originates from a non-trivial measurement process. The signal is not only corrupted by imperfect measurement devices but also from effects like fluctuations in the ionosphere that act as a distortion screen. In this paper we focus on the imaging part of data reduction in radio astronomy and present RESOLVE, a Bayesian imaging algorithm for radio interferometry in its new incarnation. It is formulated in the language of information field theory. Solely by algorithmic advances the inference could be speed up significantly and behaves noticeably more stable now. This is one more step towards a fully user-friendly version of RESOLVE which can be applied routinely by astronomers.},\n  keywords = {Bayes methods;radioastronomical techniques;radioastronomy;radiowave interferometry;radio interferometry;Bayesian imaging algorithm;RESOLVE;radio astronomy;data reduction;distortion screen;imperfect measurement devices;nontrivial measurement process;radio interferometers;information field theory;radio imaging;Extraterrestrial measurements;Interferometers;Antennas;Radio astronomy;Antenna measurements;Europe;Imaging},\n  doi = {10.23919/EUSIPCO.2018.8553533},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437346.pdf},\n}\n\n
\n
\n\n\n
\n Data from radio interferometers pose a substantial challenge for statisticians: they are incomplete, noise-dominated and originate from a non-trivial measurement process. The signal is corrupted not only by imperfect measurement devices but also by effects like fluctuations in the ionosphere that act as a distortion screen. In this paper we focus on the imaging part of data reduction in radio astronomy and present RESOLVE, a Bayesian imaging algorithm for radio interferometry, in its new incarnation. It is formulated in the language of information field theory. Solely through algorithmic advances, the inference could be sped up significantly and now behaves noticeably more stably. This is one more step towards a fully user-friendly version of RESOLVE that can be applied routinely by astronomers.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Velocity Variability in MRI Phase-Contrast.\n \n \n \n \n\n\n \n Firoozabadi, A. D.; Irarrazaval, P.; Uribe, S.; Tejos, C.; and Sing-Long, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 31-35, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"VelocityPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553535,\n  author = {A. D. Firoozabadi and P. Irarrazaval and S. Uribe and C. Tejos and C. Sing-Long},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Velocity Variability in MRI Phase-Contrast},\n  year = {2018},\n  pages = {31-35},\n  abstract = {MRI phase contrast is a well known technique for computing the average velocity associated to each pixel. In this work, we calculate the exact probability distribution function for the velocity given the noise in the signal. This pdf is not necessarily Gaussian, particularly for low Signal-to-Noise ratio. We first find the pdf of the signals phase, assuming Gaussian noise in the real and imaginary channels of the signal. The pdf of the velocity is then the convolution of the phases pdfs. To confirm this, we measure several times the velocity in a flow phantom and compare the empirical histogram with the theoretical pdf. We also acquire the velocity from a volunteers aorta using a standard protocol for 4D Flow and multiple coils. Based on this noise characterization, we also propose an optimal weighing for combining multiple coils which is not based only on the coil sensitivities.},\n  keywords = {biomedical MRI;Gaussian distribution;Gaussian noise;medical image processing;phantoms;Gaussian noise;noise characterization;velocity variability;MRI phase-contrast;average velocity;exact probability distribution function;low Signal-to-Noise ratio;signals phase;flow phantom;empirical histogram;aorta;standard protocol;4D flow;multiple coils;optimal weighing;coil sensitivity;Standards;Probability density function;Coils;Magnetic resonance imaging;Signal to noise ratio;Encoding;Probability distribution;Flow MRI;Phase contrast velocity;Ascending and descending aorta},\n  doi = {10.23919/EUSIPCO.2018.8553535},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434426.pdf},\n}\n\n
\n
\n\n\n
\n MRI phase contrast is a well-known technique for computing the average velocity associated with each pixel. In this work, we calculate the exact probability distribution function (pdf) for the velocity given the noise in the signal. This pdf is not necessarily Gaussian, particularly at low signal-to-noise ratio. We first find the pdf of the signal's phase, assuming Gaussian noise in the real and imaginary channels of the signal. The pdf of the velocity is then the convolution of the phases' pdfs. To confirm this, we measure the velocity in a flow phantom several times and compare the empirical histogram with the theoretical pdf. We also acquire the velocity from a volunteer's aorta using a standard protocol for 4D Flow and multiple coils. Based on this noise characterization, we also propose an optimal weighting for combining multiple coils that is not based only on the coil sensitivities.\n
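The noise model in the abstract above is easy to reproduce by Monte Carlo: the measured phase is the angle of a complex signal plus circular Gaussian noise, and the velocity is proportional to a difference of two such phases (hence its pdf is a convolution of phase pdfs). The SNR values and amplitudes below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def phase_samples(snr, n=200_000):
    """Phase of a unit-amplitude complex signal plus circular Gaussian
    noise with per-channel (real/imaginary) std sigma/sqrt(2)."""
    sigma = 1.0 / snr
    noise = sigma * (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
    return np.angle(1.0 + noise)

# velocity ~ difference of the phases of two flow-encoded acquisitions
v_high_snr = phase_samples(20.0) - phase_samples(20.0)
v_low_snr = phase_samples(1.0) - phase_samples(1.0)
```

At high SNR the velocity histogram is narrow and near-Gaussian; at low SNR the phase wraps, the spread grows dramatically and the distribution is visibly non-Gaussian, which is the regime the paper's exact pdf addresses.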
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Range Estimation from Single-Photon Lidar Data Using a Stochastic Em Approach.\n \n \n \n \n\n\n \n Altmann, Y.; and McLaughlin, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1112-1116, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RangePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553536,\n  author = {Y. Altmann and S. McLaughlin},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Range Estimation from Single-Photon Lidar Data Using a Stochastic Em Approach},\n  year = {2018},\n  pages = {1112-1116},\n  abstract = {This paper addresses the problem of estimating range profiles from single-photon waveforms in the photon-starved regime, with a background illumination both high and unknown a priori such that the influence of nuisance photons cannot be neglected. We reformulate the classical observation model into a new mixture model, adopt a Bayesian approach and assign prior distributions to the unknown model parameters. First, the range profile of interest is marginalised from the Bayesian model to estimate the remaining model parameters (considered as nuisance parameters) using a stochastic EM algorithm. The range profile is then estimated via Monte Carlo simulation, conditioned on the previously estimated nuisance parameters. 
Results of simulations conducted with controlled data demonstrate the possibility to maintain satisfactory range estimation performance in high-background scenarios with less than 10 signal photons per pixel on average.},\n  keywords = {Bayes methods;mixture models;Monte Carlo methods;optical radar;parameter estimation;single-photon lidar data;single-photon waveforms;photon-starved regime;background illumination;nuisance photons;mixture model;unknown model parameters;Bayesian model;stochastic EM algorithm;estimated nuisance parameters;satisfactory range estimation performance;high-background scenarios;Bayesian approach;Monte Carlo simulation;Photonics;Estimation;Laser radar;Bayes methods;Surface emitting lasers;Signal processing algorithms;Computational modeling;Single-photon Lidar;Bayesian estimation;mixture model;Stochastic Expectation-Maximisation},\n  doi = {10.23919/EUSIPCO.2018.8553536},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439171.pdf},\n}\n\n
\n
\n\n\n
\n This paper addresses the problem of estimating range profiles from single-photon waveforms in the photon-starved regime, with a background illumination that is both high and unknown a priori, such that the influence of nuisance photons cannot be neglected. We reformulate the classical observation model into a new mixture model, adopt a Bayesian approach and assign prior distributions to the unknown model parameters. First, the range profile of interest is marginalised from the Bayesian model to estimate the remaining model parameters (considered as nuisance parameters) using a stochastic EM algorithm. The range profile is then estimated via Monte Carlo simulation, conditioned on the previously estimated nuisance parameters. Results of simulations conducted with controlled data demonstrate the possibility of maintaining satisfactory range estimation performance in high-background scenarios with fewer than 10 signal photons per pixel on average.\n
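The mixture idea for one pixel can be sketched with a Gaussian-plus-uniform model fitted by plain (deterministic) EM, standing in for the paper's stochastic EM and Bayesian machinery; the scene parameters, the known-sigma assumption and the initialization are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 100.0                            # time-of-flight histogram window
t_true, sigma = 42.0, 1.5
# one high-background pixel: few signal photons, many background photons
times = np.concatenate([
    rng.normal(t_true, sigma, 15),   # signal (return) photons
    rng.uniform(0.0, T, 40),         # uniform background photons
])

# EM for the mixture p(t) = w * N(mu, sigma^2) + (1 - w) / T,
# with sigma assumed known from the instrument response
hist, edges = np.histogram(times, bins=20, range=(0.0, T))
mu = edges[np.argmax(hist)] + 2.5    # init at the densest bin's centre
w = 0.5
for _ in range(100):
    p_sig = w * np.exp(-0.5 * ((times - mu) / sigma) ** 2) \
        / (sigma * np.sqrt(2 * np.pi))
    p_bkg = (1.0 - w) / T
    r = p_sig / (p_sig + p_bkg)      # E-step: signal responsibilities
    w = r.mean()                     # M-step: mixture weight
    mu = np.sum(r * times) / np.sum(r)   # M-step: range estimate
```

Despite roughly three background photons for every signal photon, the fitted `mu` lands near the true range, which is the kind of robustness the paper quantifies.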
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Framework for Assessing Factors Influencing User Interaction for Touch-based Biometrics.\n \n \n \n \n\n\n \n Ellavarason, E.; Guest, R.; and Deravi, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 553-557, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553537,\n  author = {E. Ellavarason and R. Guest and F. Deravi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Framework for Assessing Factors Influencing User Interaction for Touch-based Biometrics},\n  year = {2018},\n  pages = {553-557},\n  abstract = {Touch-based behavioural biometrics is an emerging technique for passive and transparent user authentication on mobile devices. It utilises dynamics mined from users' touch actions to model behaviour. The interaction of the user with the mobile device using touch is an important aspect to investigate as the interaction errors can influence the stability of sample donation and overall performance of the implemented biometric authentication system. In this paper, we are outlining a data collection framework for touch-based behavioural biometric modalities (signature, swipe and keystroke dynamics) that will enable us to study the influence of environmental conditions and body movement on the touch-interaction. In order to achieve this, we have designed a multi-modal behavioural biometric data capturing application “Touchlogger” that logs touch actions exhibited by the user on the mobile device. The novelty of our framework lies in the collection of users' touch data under various usage scenarios and environmental conditions. We aim to collect touch data in two different environments - indoors and outdoors, along with different usage scenarios - whilst the user is seated at a desk, walking on a treadmill, walking outdoors and seated on a bus. 
The range of collected data may include swiping, signatures using finger and stylus, alphabetic, numeric keystroke data and writing patterns using a stylus.},\n  keywords = {authorisation;biometrics (access control);message authentication;mobile computing;touch sensitive screens;user interfaces;transparent user authentication;mobile device;users;interaction errors;data collection framework;touch-based behavioural biometric modalities;swipe;keystroke dynamics;environmental conditions;touch-interaction;multimodal behavioural biometric data capturing application Touchlogger;touch data;alphabetic keystroke data;numeric keystroke data;writing patterns;touch-based behavioural biometrics;passive user authentication;biometric authentication system;user interaction;Mobile handsets;Legged locomotion;Performance evaluation;Biometrics (access control);Authentication;Fingers;Task analysis;Mobile Biometrics;Touch-dynamics;Behavioural Biometrics;User Interaction;Usability},\n  doi = {10.23919/EUSIPCO.2018.8553537},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438067.pdf},\n}\n\n
\n
\n\n\n
\n Touch-based behavioural biometrics is an emerging technique for passive and transparent user authentication on mobile devices. It utilises dynamics mined from users' touch actions to model behaviour. The interaction of the user with the mobile device using touch is an important aspect to investigate, as interaction errors can influence the stability of sample donation and the overall performance of the implemented biometric authentication system. In this paper, we outline a data collection framework for touch-based behavioural biometric modalities (signature, swipe and keystroke dynamics) that will enable us to study the influence of environmental conditions and body movement on the touch-interaction. In order to achieve this, we have designed a multi-modal behavioural biometric data capturing application “Touchlogger” that logs touch actions exhibited by the user on the mobile device. The novelty of our framework lies in the collection of users' touch data under various usage scenarios and environmental conditions. We aim to collect touch data in two different environments - indoors and outdoors, along with different usage scenarios - whilst the user is seated at a desk, walking on a treadmill, walking outdoors and seated on a bus. The range of collected data may include swiping, signatures using finger and stylus, alphabetic and numeric keystroke data, and writing patterns using a stylus.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A New Blind Beamforming Technique for the Alignment and Enhancement of Seismic Signals.\n \n \n \n \n\n\n \n Pikoulis, E.; and Psarakis, E. Z.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2385-2389, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553538,\n  author = {E. Pikoulis and E. Z. Psarakis},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A New Blind Beamforming Technique for the Alignment and Enhancement of Seismic Signals},\n  year = {2018},\n  pages = {2385-2389},\n  abstract = {Blind beamforming constitutes a unified framework for the solution of two very important problems in seismological applications involving ensembles of similar signals, namely the signal alignment and signal enhancement problems. The former problem translates into the estimation of the time delays that exist between the signals, while the second problem deals with the optimal weighting of the signals, so that the SNR of their weighted average is maximized. A global optimization technique for the solution of the alignment problem with a sample-level accuracy, is proposed in this manuscript. The sample-level alignment problem is formulated as a combinatorial optimization problem and an approximate solution is proposed by using the technique of SDP relaxation. Finally, the signal enhancement problem is formulated as a quadratic maximization problem which in the vast majority of cases has an analytical solution, while in more challenging conditions can be approximately solved via SDP relaxation. 
The superior performance of the technique compared to other similar approaches is demonstrated through a number of experiments involving numerical simulations with several signal and noise models.},\n  keywords = {array signal processing;optimisation;noise models;seismic signals;similar signals;signal alignment;signal enhancement problems;optimal weighting;global optimization technique;sample-level accuracy;sample-level alignment problem;combinatorial optimization problem;approximate solution;SDP relaxation;signal enhancement problem;quadratic maximization problem;analytical solution;blind beamforming technique;Array signal processing;Signal to noise ratio;Delays;Optimization;Europe;Estimation},\n  doi = {10.23919/EUSIPCO.2018.8553538},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436743.pdf},\n}\n\n
\n
\n\n\n
\n Blind beamforming constitutes a unified framework for the solution of two very important problems in seismological applications involving ensembles of similar signals, namely the signal alignment and signal enhancement problems. The former problem translates into the estimation of the time delays that exist between the signals, while the latter deals with the optimal weighting of the signals, so that the SNR of their weighted average is maximized. A global optimization technique for the solution of the alignment problem with sample-level accuracy is proposed in this manuscript. The sample-level alignment problem is formulated as a combinatorial optimization problem and an approximate solution is proposed using the technique of SDP relaxation. Finally, the signal enhancement problem is formulated as a quadratic maximization problem which in the vast majority of cases has an analytical solution, while in more challenging conditions it can be approximately solved via SDP relaxation. The superior performance of the technique compared to other similar approaches is demonstrated through a number of experiments involving numerical simulations with several signal and noise models.\n
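The alignment-and-stacking pipeline the abstract describes can be grounded with a minimal baseline. The sketch below is not the paper's SDP-relaxation method: it estimates each trace's sample-level delay by plain cross-correlation against a reference and forms an unweighted stack; all function names are illustrative.

```python
# Baseline ensemble alignment and stacking (illustrative; the paper's joint
# combinatorial/SDP formulation and SNR-optimal weighting are not reproduced).

def xcorr_delay(x, ref):
    """Integer lag of x (relative to ref) maximizing their cross-correlation."""
    n = len(ref)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        v = sum(ref[i] * x[i + lag] for i in range(n) if 0 <= i + lag < len(x))
        if v > best_val:
            best_val, best_lag = v, lag
    return best_lag

def align_and_stack(signals, ref):
    """Shift each signal onto ref's time base and average (enhancement step)."""
    n = len(ref)
    stacked = [0.0] * n
    for x in signals:
        d = xcorr_delay(x, ref)
        for i in range(n):
            if 0 <= i + d < len(x):
                stacked[i] += x[i + d]
    return [s / len(signals) for s in stacked]
```

The paper replaces the pairwise correlation step with a joint combinatorial formulation solved via SDP relaxation, and the plain average with SNR-maximizing weights.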
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Perceived quality of audio-visual stimuli containing streaming audio degradations.\n \n \n \n \n\n\n \n Martinez, H.; Farias, M. C. Q.; and Hines, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2529-2533, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PerceivedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553541,\n  author = {H. Martinez and M. C. Q. Farias and A. Hines},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Perceived quality of audio-visual stimuli containing streaming audio degradations},\n  year = {2018},\n  pages = {2529-2533},\n  abstract = {Multimedia services play an important role in modern human communication. Understanding the impact of multisensory input (audio and video) on perceived quality is important for optimizing the delivery of these services. This work explores the impact of audio degradations on audio-visual quality. With this goal, we present a new dataset that contains audio-visual sequences with distortions only in the audio component (Im-AV-Exp2). The degradations in this new dataset correspond to commonly encountered streaming degradations, matching those found in the audio-only TCD-VoIP dataset. Using the Immersive Methodology, we perform a subjective experiment with the Im-AV-Exp2 dataset. We analyze the experimental data and compared the quality scores of the Im-AV-Exp2 and TCD-VoIP datasets. Results show that the video component act as a masking factor for certain classes of audio degradations (e.g. 
echo), showing that there is an interaction of video and audio quality that may depend on content.},\n  keywords = {audio signal processing;audio-visual systems;Internet telephony;multimedia communication;video streaming;perceived quality;audio-visual stimuli;audio degradations;multimedia services;audio-visual quality;audio-visual sequences;audio-only TCD-VoIP dataset;Im-AV-Exp2 dataset;quality scores;audio quality;human communication;audio component;streaming degradations;immersive methodology;video component;Degradation;Streaming media;Distortion;Noise measurement;Media;Hardware;Monitoring;QoE;audio-visual quality;VoIP;immersive experimental methodology},\n  doi = {10.23919/EUSIPCO.2018.8553541},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439306.pdf},\n}\n\n
\n
\n\n\n
\n Multimedia services play an important role in modern human communication. Understanding the impact of multisensory input (audio and video) on perceived quality is important for optimizing the delivery of these services. This work explores the impact of audio degradations on audio-visual quality. With this goal, we present a new dataset that contains audio-visual sequences with distortions only in the audio component (Im-AV-Exp2). The degradations in this new dataset correspond to commonly encountered streaming degradations, matching those found in the audio-only TCD-VoIP dataset. Using the Immersive Methodology, we perform a subjective experiment with the Im-AV-Exp2 dataset. We analyze the experimental data and compare the quality scores of the Im-AV-Exp2 and TCD-VoIP datasets. Results show that the video component acts as a masking factor for certain classes of audio degradations (e.g. echo), showing that there is an interaction of video and audio quality that may depend on content.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Mosquito wingbeat analysis and classification using deep learning.\n \n \n \n \n\n\n \n Fanioudakis, E.; Geismar, M.; and Potamitis, I.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2410-2414, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"MosquitoPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553542,\n  author = {E. Fanioudakis and M. Geismar and I. Potamitis},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Mosquito wingbeat analysis and classification using deep learning},\n  year = {2018},\n  pages = {2410-2414},\n  abstract = {We examine the signal and the attributes of mosquitoes' wingbeat. Subsequently we carryon large-scale classification experiments based on optical recordings of mosquitoes' wingbeat of the following species: Aedesaegypti, Aedes albopictus, Anopheles arabiensis, Anopheles gambiae, Culex pipiens, Culex quinquefasciatus. We report 96% classification accuracy on the species level for a database of 279,566 flight recording cases using top-tier deep learning techniques. The database and the associated code are offered open. The longstanding goal is to run prediction models, perform risk assessments, issue warnings and make historical analysis based on wingbeats acquired through suction traps deployed in the field.},\n  keywords = {biology computing;learning (artificial intelligence);pattern classification;zoology;Aedes albopictus;Anopheles arabiensis;Anopheles gambiae;Culex pipiens;Culex quinquefasciatus;mosquito wingbeat analysis;mosquito wingbeat classification;top-tier deep learning;Spectrogram;Insects;Databases;Harmonic analysis;Power harmonic filters;Frequency modulation;wingbeat;smart traps;deep learning;Culex},\n  doi = {10.23919/EUSIPCO.2018.8553542},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439278.pdf},\n}\n\n
\n
\n\n\n
\n We examine the signal and the attributes of mosquitoes' wingbeat. Subsequently, we carry out large-scale classification experiments based on optical recordings of the wingbeat of the following mosquito species: Aedes aegypti, Aedes albopictus, Anopheles arabiensis, Anopheles gambiae, Culex pipiens, and Culex quinquefasciatus. We report 96% classification accuracy at the species level for a database of 279,566 flight recording cases using top-tier deep learning techniques. The database and the associated code are openly available. The long-standing goal is to run prediction models, perform risk assessments, issue warnings and carry out historical analyses based on wingbeats acquired through suction traps deployed in the field.\n
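The "signal and attributes" analysis that precedes the deep classifier rests on the wingbeat fundamental frequency and its harmonics. The sketch below is an illustrative autocorrelation-based fundamental estimator run on a synthetic 480 Hz tone (within the typical mosquito wingbeat range); it is background for the acoustic feature, not the paper's deep-learning method, and all names are illustrative.

```python
# Illustrative wingbeat-frequency estimation from the autocorrelation peak
# (background sketch, not the paper's deep-learning classifier).
import math

def wingbeat_freq(x, fs, fmin=100.0, fmax=1000.0):
    """Estimate the fundamental frequency via the autocorrelation maximum."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(int(fs / fmax), int(fs / fmin) + 1):
        v = sum(x[i] * x[i - lag] for i in range(lag, len(x)))
        if v > best_val:
            best_val, best_lag = v, lag
    return fs / best_lag

fs = 8000
tone = [math.sin(2 * math.pi * 480 * n / fs) for n in range(800)]
```

The estimate is quantized to integer lags (here fs/17 ≈ 471 Hz for a 480 Hz tone); real pipelines interpolate the peak or work on spectrograms.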
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Bayesian Calibration Using Different Prior Distributions: An Iterative Maximum A Posteriori Approach for Radio Interferometers.\n \n \n \n \n\n\n \n Ollier, V.; El Korso, M. N.; Ferrari, A.; Boyer, R.; and Larzabal, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2673-2677, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BayesianPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553543,\n  author = {V. Ollier and M. N. {El Korso} and A. Ferrari and R. Boyer and P. Larzabal},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Bayesian Calibration Using Different Prior Distributions: An Iterative Maximum A Posteriori Approach for Radio Interferometers},\n  year = {2018},\n  pages = {2673-2677},\n  abstract = {In this paper, we aim to design robust estimation techniques based on the compound-Gaussian (CG) process and adapted for calibration of radio interferometers. The motivation beyond this is due to the presence of outliers leading to an unrealistic traditional Gaussian noise assumption. Consequently, to achieve robustness, we adopt a maximum a posteriori (MAP) approach which exploits Bayesian statistics and follows a sequential updating procedure here. The proposed algorithm is applied in a multi-frequency scenario in order to enhance the estimation and correction of perturbation effects. Numerical simulations assess the performance of the proposed algorithm for different noise models, Student's t, K, Laplace, Cauchy and inverse-Gaussian compound-Gaussian distributions w. r. t. 
the classical non-robust Gaussian noise assumption.},\n  keywords = {Bayes methods;calibration;Gaussian distribution;Gaussian noise;Gaussian processes;iterative methods;maximum likelihood estimation;radiowave interferometers;noise models;K distribution;Student's t distribution;Laplace distribution;Cauchy distribution;maximum a posteriori approach;radio interferometers calibration;nonrobust Gaussian noise assumption;inverse-Gaussian compound-Gaussian distributions;multifrequency scenario;sequential updating procedure;Bayesian statistics;compound-Gaussian process;robust estimation techniques;iterative maximum;Bayesian calibration;Calibration;Estimation;Signal processing;Bayes methods;Perturbation methods;Europe;Iterative methods;Bayesian calibration;compound-Gaussian distribution;robustness;maximum a posteriori estimation},\n  doi = {10.23919/EUSIPCO.2018.8553543},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437068.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we aim to design robust estimation techniques based on the compound-Gaussian (CG) process and adapted for the calibration of radio interferometers. The motivation behind this is the presence of outliers, which renders the traditional Gaussian noise assumption unrealistic. Consequently, to achieve robustness, we adopt a maximum a posteriori (MAP) approach which exploits Bayesian statistics and follows a sequential updating procedure. The proposed algorithm is applied in a multi-frequency scenario in order to enhance the estimation and correction of perturbation effects. Numerical simulations assess the performance of the proposed algorithm for different noise models - Student's t, K, Laplace, Cauchy and inverse-Gaussian compound-Gaussian distributions - w.r.t. the classical non-robust Gaussian noise assumption.\n
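A compound-Gaussian (e.g. Student's t) noise model yields robustness because heavy-tailed residuals are automatically down-weighted in the MAP update. The toy sketch below shows that mechanism as iteratively reweighted least squares for a scalar location parameter; it is a generic illustration of the reweighting idea, not the paper's calibration algorithm, and the degrees-of-freedom value is arbitrary.

```python
# Robust location estimation under a Student's t noise model via IRLS
# (illustrates the outlier down-weighting induced by compound-Gaussian priors).

def irls_mean(x, nu=3.0, iters=20):
    mu = sum(x) / len(x)  # Gaussian (non-robust) starting point
    for _ in range(iters):
        # t-model weights: large residuals receive weights close to zero
        w = [(nu + 1.0) / (nu + (xi - mu) ** 2) for xi in x]
        mu = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    return mu
```

With data [0.0, 0.1, -0.1, 50.0] the ordinary mean is dragged to 12.475 by the outlier, while the reweighted estimate settles near zero.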
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Deep Reinforcement Learning Approach for Early Classification of Time Series.\n \n \n \n \n\n\n \n Martinez, C.; Perrin, G.; Ramasso, E.; and Rombaut, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2030-2034, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553544,\n  author = {C. Martinez and G. Perrin and E. Ramasso and M. Rombaut},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Deep Reinforcement Learning Approach for Early Classification of Time Series},\n  year = {2018},\n  pages = {2030-2034},\n  abstract = {In many real-world applications, ranging from predictive maintenance to personalized medicine, early classification of time series data is of paramount importance for supporting decision makers. In this article, we address this challenging task with a novel approach based on reinforcement learning. We introduce an early classifier agent, an end-to-end reinforcement learning agent (deep Q-network, DQN) [1] able to perform early classification in an efficient way. We formulate the early classification problem in a reinforcement learning framework: we introduce a suitable set of states and actions but we also define a specific reward function which aims at finding a compromise between earliness and classification accuracy. While most of the existing solutions do not explicitly take time into account in the final decision, this solution allows the user to set this trade-off in a more flexible way. 
In particular, we show experimentally on datasets from the UCR time series archive [2] that this agent is able to continually adapt its behavior without human intervention and progressively learn to compromise between accurate and fast predictions.},\n  keywords = {learning (artificial intelligence);pattern classification;time series;UCR time series archive;deep reinforcement learning approach;early classifier agent;end-to-end reinforcement learning agent;deep Q-network;early classification problem;time series data classification;DQN;reward function;Time series analysis;Approximation algorithms;Europe;Signal processing;Task analysis;Prediction algorithms;time series;early classification;reinforcement learning;Deep Q-Network;time sensitive applications},\n  doi = {10.23919/EUSIPCO.2018.8553544},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433964.pdf},\n}\n\n
\n
\n\n\n
\n In many real-world applications, ranging from predictive maintenance to personalized medicine, early classification of time series data is of paramount importance for supporting decision makers. In this article, we address this challenging task with a novel approach based on reinforcement learning. We introduce an early classifier agent, an end-to-end reinforcement learning agent (deep Q-network, DQN) [1] able to perform early classification in an efficient way. We formulate the early classification problem in a reinforcement learning framework: we introduce a suitable set of states and actions, and we also define a specific reward function which aims at finding a compromise between earliness and classification accuracy. While most existing solutions do not explicitly take time into account in the final decision, this solution allows the user to set this trade-off in a more flexible way. In particular, we show experimentally on datasets from the UCR time series archive [2] that this agent is able to continually adapt its behavior without human intervention and progressively learn to compromise between accurate and fast predictions.\n
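The earliness/accuracy compromise can be made concrete with a hypothetical reward: correct classification earns a positive term, minus a penalty that grows with the fraction of the series consumed before deciding. The reward shape and the confidence-threshold policy below are illustrative assumptions, not the paper's definitions.

```python
# Illustrative earliness-vs-accuracy reward and a simple decision policy
# (hypothetical shapes; the paper learns the trade-off with a DQN).

def reward(action, t, t_max, correct, alpha=0.5):
    """Zero while waiting; on classification, accuracy minus a lateness cost."""
    if action == "wait":
        return 0.0
    return (1.0 if correct else -1.0) - alpha * (t / t_max)

def decide(confidences, threshold=0.9):
    """Classify at the first step whose running confidence clears the threshold."""
    for t, c in enumerate(confidences):
        if c >= threshold:
            return t
    return len(confidences) - 1  # forced decision at the final step
```

In the paper this trade-off is learned end-to-end by the agent rather than hand-set by a fixed threshold.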
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Satellite Image Segmentation with Deep Residual Architectures for Time-Critical Applications.\n \n \n \n \n\n\n \n Ghassemi, S.; Sandu, C.; Fiandrotti, A.; Tonolo, F. G.; Boccardo, P.; Francini, G.; and Magli, E.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2235-2239, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SatellitePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553545,\n  author = {S. Ghassemi and C. Sandu and A. Fiandrotti and F. G. Tonolo and P. Boccardo and G. Francini and E. Magli},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Satellite Image Segmentation with Deep Residual Architectures for Time-Critical Applications},\n  year = {2018},\n  pages = {2235-2239},\n  abstract = {We address the problem of training a convolutional neural network for satellite images segmentation in emergency situations, where response time constraints prevent training the network from scratch. Such case is particularly challenging due to the large intra-class statistics variations between training images and images to be segmented captured at different locations by different sensors. We propose a convolutional encoder-decoder network architecture where the encoder builds upon a residual architecture. We show that our proposed architecture enables learning features suitable to generalize the learning process across images with different statistics. Our architecture can accurately segment images that have no reference in the training set, whereas a minimal refinement of the trained network significantly boosts the segmentation accuracy.},\n  keywords = {feedforward neural nets;image coding;image segmentation;learning (artificial intelligence);statistical analysis;satellite image segmentation;deep residual architectures;time-critical applications;convolutional neural network;response time constraints;training images;convolutional encoder-decoder network architecture;residual architecture;segment images;intraclass statistics variations;learning process;Training;Image segmentation;Convolution;Satellites;Decoding;Feature extraction;Network architecture},\n  doi = {10.23919/EUSIPCO.2018.8553545},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437242.pdf},\n}\n\n
\n
\n\n\n
\n We address the problem of training a convolutional neural network for satellite image segmentation in emergency situations, where response time constraints prevent training the network from scratch. This case is particularly challenging due to the large intra-class statistics variations between the training images and the images to be segmented, which are captured at different locations by different sensors. We propose a convolutional encoder-decoder network architecture where the encoder builds upon a residual architecture. We show that our proposed architecture enables learning features suitable to generalize the learning process across images with different statistics. Our architecture can accurately segment images that have no reference in the training set, and a minimal refinement of the trained network significantly boosts the segmentation accuracy.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Independent Positive Semidefinite Tensor Analysis in Blind Source Separation.\n \n \n \n \n\n\n \n Ikeshita, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1652-1656, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"IndependentPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553546,\n  author = {R. Ikeshita},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Independent Positive Semidefinite Tensor Analysis in Blind Source Separation},\n  year = {2018},\n  pages = {1652-1656},\n  abstract = {The issue of convolutive blind source separation (BSS) is addressed in this paper. Independent low-rank matrix analysis (ILRMA), unifying frequency-domain independent component analysis (FDICA) and nonnegative matrix factorization (NMF), is a method that has recently proposed to model low-rank structure of source spectra by using NMF in addition to independence between sources used in FDICA and independent vector analysis (IVA). Although ILRMA has been shown to provide better separation performance than FDICA and IVA, the frequency components of each source are assumed to be independent in ILRMA due to NMF modeling of source spectra, which may degrade its performance when the short-term Fourier transform (STFT) is unable to decorrelate the frequency components for each source. This paper therefore presents a new BSS method that unifies IVA and positive semidefinite tensor factorization (PSDTF). PSDTF models not only power spectra in the same way NMF does but also models the correlations between frequency bins in each source. The proposed method can be viewed as a multichannel extension of PSDTF and exploits both the independence between sources and the inter-frequency correlations as a clue for separating mixtures. 
Experimental results indicate the improved performance of our approach.},\n  keywords = {blind source separation;Fourier transforms;independent component analysis;matrix decomposition;tensors;vectors;NMF;interfrequency correlations;convolutive blind source separation method;independent low-rank matrix analysis;frequency-domain independent component analysis;short-term Fourier transform;STFT;NMF modeling;separation performance;independent vector analysis;nonnegative matrix factorization;FDICA;ILRMA;low-rank matrix analysis;independent positive semidefinite tensor analysis;PSDTF models;positive semidefinite tensor factorization;IVA;BSS method;Optimization;Tensile stress;Correlation;Blind source separation;Time-frequency analysis;Signal processing algorithms;IP networks;Blind source separation;nonnegative matrix factorization;positive semidefinite tensor factorization;independent component analysis;independent vector analysis},\n  doi = {10.23919/EUSIPCO.2018.8553546},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436354.pdf},\n}\n\n
\n
\n\n\n
\n The issue of convolutive blind source separation (BSS) is addressed in this paper. Independent low-rank matrix analysis (ILRMA), unifying frequency-domain independent component analysis (FDICA) and nonnegative matrix factorization (NMF), is a method that has recently been proposed to model the low-rank structure of source spectra by using NMF, in addition to the independence between sources used in FDICA and independent vector analysis (IVA). Although ILRMA has been shown to provide better separation performance than FDICA and IVA, the frequency components of each source are assumed to be independent in ILRMA due to the NMF modeling of source spectra, which may degrade its performance when the short-term Fourier transform (STFT) is unable to decorrelate the frequency components of each source. This paper therefore presents a new BSS method that unifies IVA and positive semidefinite tensor factorization (PSDTF). PSDTF models not only power spectra, in the same way NMF does, but also the correlations between frequency bins in each source. The proposed method can be viewed as a multichannel extension of PSDTF and exploits both the independence between sources and the inter-frequency correlations as clues for separating mixtures. Experimental results indicate the improved performance of our approach.\n
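As context for the spectral model being replaced: ILRMA's NMF step approximates a nonnegative power spectrogram V by a low-rank product WH, treating frequency bins as independent. The sketch below shows standard Euclidean multiplicative updates on a toy matrix; it is the baseline model the paper moves away from, not the proposed PSDTF method, and the toy data are illustrative.

```python
# NMF with Euclidean multiplicative updates: the low-rank, per-bin spectral
# model used in ILRMA (the paper replaces it with PSDTF to also capture
# inter-frequency correlations).
import random

def nmf(V, rank, iters=200, eps=1e-9):
    random.seed(0)
    m, n = len(V), len(V[0])
    W = [[random.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[random.random() + 0.1 for _ in range(n)] for _ in range(rank)]

    def mm(A, B):  # matrix product
        return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
                for row in A]

    def tr(A):  # transpose
        return [list(r) for r in zip(*A)]

    for _ in range(iters):
        WtV, WtWH = mm(tr(W), V), mm(tr(W), mm(W, H))
        H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        VHt, WHHt = mm(V, tr(H)), mm(mm(W, H), tr(H))
        W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H
```

On an exactly rank-1 matrix the product WH converges to the input, which is the sense in which NMF "models" a low-rank spectrogram.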
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On Room Impulse Response Measurement Using Perfect Sequences for Wiener Nonlinear Filters.\n \n \n \n \n\n\n \n Carini, A.; Cecchi, S.; Terenzi, A.; and Orcioni, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 982-986, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553547,\n  author = {A. Carini and S. Cecchi and A. Terenzi and S. Orcioni},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {On Room Impulse Response Measurement Using Perfect Sequences for Wiener Nonlinear Filters},\n  year = {2018},\n  pages = {982-986},\n  abstract = {In a recent paper, we have proposed a novel approach for measuring the room impulse response (RIR) robust toward the nonlinearities affecting the power amplifier or the loudspeaker. The approach is implemented by modeling the acoustic path as a Legendre nonlinear (LN) filter and by measuring the first-order kernel using perfect periodic sequences (PPSs) and the cross-correlation method. PPSs are periodic sequences that guarantee the perfect orthogonality of the basis functions of a certain nonlinear filter over a period. For LN filters, PPSs have approximately a uniform distribution. We have shown that also the Wiener Nonlinear (WN) filters, which derive from the truncation of the Wiener series, admit PPSs, whose sample distribution approximates a Gaussian distribution. Thus, WN filters and their PPSs appear more appealing for measuring the RIR. The paper discusses RIR measurement using WN filters and PPSs and explains how PPSs for WN filter suitable for RIR identification can be developed. 
Experimental results, using signals affected by real nonlinear devices, illustrate the effectiveness of the proposed approach and compare it with that based on LN filters.},\n  keywords = {approximation theory;correlation methods;Gaussian distribution;nonlinear filters;transient response;Wiener filters;nonlinearities;Legendre nonlinear filter;perfect periodic sequences;perfect orthogonality;Wiener series;WN filters;uniform distribution;RIR measurement;power amplifier;Gaussian distribution;wiener nonlinear filters;room impulse response measurement;LN filters;nonlinear devices;PPSs;Kernel;Loudspeakers;Acoustic measurements;Power measurement;Acoustics;Signal processing;Linear systems},\n  doi = {10.23919/EUSIPCO.2018.8553547},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438098.pdf},\n}\n\n
\n
\n\n\n
\n In a recent paper, we proposed a novel approach for measuring the room impulse response (RIR) that is robust to the nonlinearities affecting the power amplifier or the loudspeaker. The approach is implemented by modeling the acoustic path as a Legendre nonlinear (LN) filter and by measuring the first-order kernel using perfect periodic sequences (PPSs) and the cross-correlation method. PPSs are periodic sequences that guarantee the perfect orthogonality of the basis functions of a certain nonlinear filter over a period. For LN filters, PPSs have an approximately uniform distribution. We have shown that Wiener nonlinear (WN) filters, which derive from the truncation of the Wiener series, also admit PPSs, whose sample distribution approximates a Gaussian distribution. Thus, WN filters and their PPSs appear more appealing for measuring the RIR. The paper discusses RIR measurement using WN filters and PPSs and explains how PPSs for WN filters suitable for RIR identification can be developed. Experimental results, using signals affected by real nonlinear devices, illustrate the effectiveness of the proposed approach and compare it with that based on LN filters.\n
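The cross-correlation method at the heart of the approach is easy to state: if the excitation's periodic autocorrelation is a delta, one period of the circular cross-correlation between system output and input returns the impulse response. The sketch below demonstrates this in the linear, noise-free toy case with a length-4 binary sequence whose periodic autocorrelation is a delta; the PPSs the paper constructs for Wiener nonlinear filters are different, longer objects.

```python
# Impulse-response identification by the cross-correlation method with a
# perfect periodic excitation (linear, noise-free toy case).

def circ_convolve(h, s):
    """One period of the steady-state output of FIR h driven by periodic s."""
    n = len(s)
    return [sum(h[k] * s[(i - k) % n] for k in range(len(h))) for i in range(n)]

def circ_xcorr(y, s):
    """Circular cross-correlation of y with s, normalized by s's energy."""
    n = len(s)
    e = sum(v * v for v in s)
    return [sum(y[i] * s[(i - k) % n] for i in range(n)) / e for k in range(n)]

s = [1, 1, -1, 1]          # periodic autocorrelation is a delta: (4, 0, 0, 0)
h = [0.9, 0.4, 0.0, 0.0]   # toy impulse response
y = circ_convolve(h, s)    # measured output over one period
h_est = circ_xcorr(y, s)   # recovers h
```

The delta autocorrelation is what makes the cross-correlation collapse to the kernel itself; the paper's contribution is constructing sequences with the analogous orthogonality property for WN filter basis functions.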
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Reconstruction of the Virtual Microphone Signal Based on the Distributed Ray Space Transform.\n \n \n \n \n\n\n \n Pezzoli, M.; Borra, F.; Antonacci, F.; Sarti, A.; and Tubaro, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1537-1541, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ReconstructionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553548,\n  author = {M. Pezzoli and F. Borra and F. Antonacci and A. Sarti and S. Tubaro},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Reconstruction of the Virtual Microphone Signal Based on the Distributed Ray Space Transform},\n  year = {2018},\n  pages = {1537-1541},\n  abstract = {In this paper we propose a technique for the reconstruction of the sound field at arbitrary positions based on a parametric sound field description. The methodology consists in the estimation of the sources model parameters (source position, radiation pattern and source signal), starting from the signals acquired by arbitrarily distributed microphone arrays. Given the model parameters it is possible to synthesize the signal of a virtual microphone at an arbitrary position and with an arbitrary pickup pattern.},\n  keywords = {acoustic signal processing;audio recording;microphone arrays;microphones;arbitrary pickup pattern;virtual microphone signal;distributed ray space transform;arbitrary position;parametric sound field description;sources model parameters;source position;radiation pattern;source signal;arbitrarily distributed microphone arrays;Acoustics;Microphone arrays;Estimation;Transforms;Array signal processing;Europe;virtual microphone;distributed microphone networks;source localization},\n  doi = {10.23919/EUSIPCO.2018.8553548},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437330.pdf},\n}\n\n
\n
\n\n\n
\n In this paper we propose a technique for the reconstruction of the sound field at arbitrary positions based on a parametric sound field description. The methodology consists in estimating the source model parameters (source position, radiation pattern and source signal), starting from the signals acquired by arbitrarily distributed microphone arrays. Given the model parameters, it is possible to synthesize the signal of a virtual microphone at an arbitrary position and with an arbitrary pickup pattern.\n
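Once the source model (position, signal, radiation pattern) is estimated, synthesis reduces to rendering the source at the virtual position. The sketch below does this for the simplest case only: an omnidirectional source in free field, with sample-rounded delay and 1/r attenuation. These simplifications and all names are assumptions for illustration, not the paper's ray-space-transform pipeline.

```python
# Free-field virtual microphone synthesis from a point-source model
# (illustrative simplification: omnidirectional source, integer-sample delay).
import math

def virtual_mic(source_sig, src_pos, mic_pos, fs=8000, c=343.0):
    """Render a free-field point source at an arbitrary virtual mic position."""
    d = math.dist(src_pos, mic_pos)        # source-to-mic distance (m)
    delay = int(round(d / c * fs))         # propagation delay in samples
    gain = 1.0 / max(d, 1e-3)              # spherical spreading loss
    out = [0.0] * (len(source_sig) + delay)
    for n, v in enumerate(source_sig):
        out[n + delay] += gain * v
    return out
```

A directional pickup pattern would additionally scale the contribution by the pattern evaluated at the source's direction of arrival.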
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Techniques for gravitational-wave detection of compact binary coalescence.\n \n \n \n \n\n\n \n Caudill, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2633-2637, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"TechniquesPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553549,\n  author = {S. Caudill},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Techniques for gravitational-wave detection of compact binary coalescence},\n  year = {2018},\n  pages = {2633-2637},\n  abstract = {In September 2015, the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) came online with unprecedented sensitivity. Now with two observation runs completed, LIGO has detected gravitational waves from five binary black hole mergers and one neutron star merger. The Advanced Virgo detector also recently came online in August 2017, significantly improving the sky localization of two of these events. The identification of these signals relies on techniques that can clearly distinguish a gravitational-wave signature from transient detector noise. With the next LIGO and Virgo observation run expected to begin in the fall of 2018, more detections are expected with the potential for discovery of new types of astrophysical sources.},\n  keywords = {binary stars;black holes;gravitational waves;neutron stars;transient detector noise;gravitational-wave signature;Advanced Virgo detector;neutron star merger;binary black hole mergers;LIGO;Advanced Laser Interferometer Gravitational-wave Observatory;compact binary coalescence;gravitational-wave detection;Virgo observation;Detectors;Neutrons;Signal to noise ratio;Transient analysis;Chirp;Corporate acquisitions;Europe;gravitational waves},\n  doi = {10.23919/EUSIPCO.2018.8553549},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439410.pdf},\n}\n\n
\n
\n\n\n
\n In September 2015, the Advanced Laser Interferometer Gravitational-wave Observatory (LIGO) came online with unprecedented sensitivity. Now with two observation runs completed, LIGO has detected gravitational waves from five binary black hole mergers and one neutron star merger. The Advanced Virgo detector also recently came online in August 2017, significantly improving the sky localization of two of these events. The identification of these signals relies on techniques that can clearly distinguish a gravitational-wave signature from transient detector noise. With the next LIGO and Virgo observation run expected to begin in the fall of 2018, more detections are expected with the potential for discovery of new types of astrophysical sources.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Semi-Blind Subspace Channel Estimation for MIMO-OFDM System.\n \n \n \n \n\n\n \n Ladaycia, A.; Abed-Meraim, K.; Mokraoui, A.; and Belouchrani, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1282-1286, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553550,\n  author = {A. Ladaycia and K. Abed-Meraim and A. Mokraoui and A. Belouchrani},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Semi-Blind Subspace Channel Estimation for MIMO-OFDM System},\n  year = {2018},\n  pages = {1282-1286},\n  abstract = {This paper deals with channel estimation for Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) wireless communications systems. Herein, we propose a semi-blind (SB) subspace channel estimation technique for which an identifiability result is first established for the subspace based criterion. Our algorithm adopts the MIMO-OFDM system model without cyclic prefix and takes advantage of the circulant property of the channel matrix to achieve lower computational complexity and to accelerate the algorithm's convergence by generating a group of sub vectors from each received OFDM symbol. Then, through simulations, we show that the proposed method leads to a significant performance gain as compared to the existing SB subspace methods as well as to the classical least-squares channel estimator.},\n  keywords = {channel estimation;computational complexity;MIMO communication;OFDM modulation;multiple-input multiple-output orthogonal frequency division multiplexing wireless communications systems;least-squares channel estimator;SB subspace methods;channel matrix;MIMO-OFDM system model;subspace based criterion;semiblind subspace channel estimation technique;Channel estimation;OFDM;Estimation;Covariance matrices;Matrix decomposition;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553550},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437248.pdf},\n}\n\n
\n
\n\n\n
\n This paper deals with channel estimation for Multiple-Input Multiple-Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) wireless communications systems. Herein, we propose a semi-blind (SB) subspace channel estimation technique for which an identifiability result is first established for the subspace based criterion. Our algorithm adopts the MIMO-OFDM system model without cyclic prefix and takes advantage of the circulant property of the channel matrix to achieve lower computational complexity and to accelerate the algorithm's convergence by generating a group of sub vectors from each received OFDM symbol. Then, through simulations, we show that the proposed method leads to a significant performance gain as compared to the existing SB subspace methods as well as to the classical least-squares channel estimator.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Structured Dictionary Learning for Compressive Speech Sensing.\n \n \n \n \n\n\n \n Ji, Y.; Zhu, W.; and Champagne, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 573-577, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"StructuredPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553551,\n  author = {Y. Ji and W. Zhu and B. Champagne},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Structured Dictionary Learning for Compressive Speech Sensing},\n  year = {2018},\n  pages = {573-577},\n  abstract = {Sparse dictionary learning aims at training appropriate redundant dictionaries for specific tasks of signal processing, such as signal estimation, compression and classification. Most of the existing dictionary learning algorithms for compressive speech sensing only exploit speech samples to construct the dictionary. In this paper, we propose to leverage both the speech signal and its linear prediction coefficients jointly to learn a structured and sparse dictionary. The proposed dictionary is designed based on a new optimization strategy using both l0 and l2 norms to enforce sparsity and structure, respectively. The resulting optimization problem can be solved by a fast iterative algorithm in two stages. Experimental results indicate that our proposed algorithm converges faster than the reference methods while yielding a better objective evaluation performance in terms of segmental signal-to-noise ratio, perceptual evaluation of speech quality and short-time objective intelligibility of the reconstructed speech.},\n  keywords = {compressed sensing;dictionaries;iterative methods;learning (artificial intelligence);optimisation;sparse matrices;speech coding;redundant dictionaries;signal compression;signal classification;optimization problem;signal-to-noise ratio;speech reconstruction;sparse dictionary learning;dictionary learning algorithms;optimization strategy;speech processing;speech quality;fast iterative algorithm;linear prediction coefficients;speech signal;signal estimation;signal processing;compressive speech sensing;structured dictionary learning;Dictionaries;Sparse matrices;Machine learning;Signal processing algorithms;Optimization;Speech processing;Training;dictionary 
learning;speech processing;compressive sensing;optimization},\n  doi = {10.23919/EUSIPCO.2018.8553551},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437529.pdf},\n}\n\n
\n
\n\n\n
\n Sparse dictionary learning aims at training appropriate redundant dictionaries for specific tasks of signal processing, such as signal estimation, compression and classification. Most of the existing dictionary learning algorithms for compressive speech sensing only exploit speech samples to construct the dictionary. In this paper, we propose to leverage both the speech signal and its linear prediction coefficients jointly to learn a structured and sparse dictionary. The proposed dictionary is designed based on a new optimization strategy using both l0 and l2 norms to enforce sparsity and structure, respectively. The resulting optimization problem can be solved by a fast iterative algorithm in two stages. Experimental results indicate that our proposed algorithm converges faster than the reference methods while yielding a better objective evaluation performance in terms of segmental signal-to-noise ratio, perceptual evaluation of speech quality and short-time objective intelligibility of the reconstructed speech.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n P-Score: Performance Aligned Normalization and an Evaluation in Score-Level Multi-Biometric Fusion.\n \n \n \n \n\n\n \n Damer, N.; Boutros, F.; Terhörst, P.; Braun, A.; and Kuijper, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1402-1406, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"P-Score:Paper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553553,\n  author = {N. Damer and F. Boutros and P. Terhörst and A. Braun and A. Kuijper},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {P-Score: Performance Aligned Normalization and an Evaluation in Score-Level Multi-Biometric Fusion},\n  year = {2018},\n  pages = {1402-1406},\n  abstract = {Normalization is an important step for different fusion, classification, and decision making applications. Previous normalization approaches considered bringing values from different sources into a common range or distribution characteristics. In this work we propose a new normalization approach that transfers values into a normalized space where their relative performance in binary decision making is aligned across their whole range. Multi-biometric verification is a typical problem where information from different sources is normalized and fused to make a binary decision, and is therefore a good platform to evaluate the proposed normalization. We conducted an evaluation on two publicly available databases and showed that the normalization solution we are proposing consistently outperformed state-of-the-art and best practice approaches, e.g. by reducing the false rejection rate at 0.01% false acceptance rate by 60-75% compared to the widely used z-score normalization under the sum-rule fusion.},\n  keywords = {decision making;fingerprint identification;image classification;image fusion;p-score;score-level multibiometric fusion;decision making applications;distribution characteristics;normalization approach;normalized space;binary decision making;multibiometric verification;normalization solution;z-score normalization;sum-rule fusion;Standards;Databases;Face;Europe;Signal processing;Error analysis;Gaussian distribution},\n  doi = {10.23919/EUSIPCO.2018.8553553},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437327.pdf},\n}\n\n
\n
\n\n\n
\n Normalization is an important step for different fusion, classification, and decision making applications. Previous normalization approaches considered bringing values from different sources into a common range or distribution characteristics. In this work we propose a new normalization approach that transfers values into a normalized space where their relative performance in binary decision making is aligned across their whole range. Multi-biometric verification is a typical problem where information from different sources is normalized and fused to make a binary decision, and is therefore a good platform to evaluate the proposed normalization. We conducted an evaluation on two publicly available databases and showed that the normalization solution we are proposing consistently outperformed state-of-the-art and best practice approaches, e.g. by reducing the false rejection rate at 0.01% false acceptance rate by 60-75% compared to the widely used z-score normalization under the sum-rule fusion.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Hypo and Hyperarticulated Speech Data Augmentation for Spontaneous Speech Recognition.\n \n \n \n \n\n\n \n Lee, S. J.; Kang, B.; Chung, H.; Park, J. G.; and Lee, Y. K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2080-2084, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"HypoPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553555,\n  author = {S. J. Lee and B. Kang and H. Chung and J. G. Park and Y. K. Lee},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Hypo and Hyperarticulated Speech Data Augmentation for Spontaneous Speech Recognition},\n  year = {2018},\n  pages = {2080-2084},\n  abstract = {Among many challenges in spontaneous speech recognition, we focus on the variability of speech depending on the degree of articulation such as hypo and hyperarticulation. In this paper, we investigate the feasibility of the past acoustic-phonetic studies on the variability of speech in terms of the data augmentation of a spontaneous speech recognition system. To do so, we develop data augmentation approaches to reflect the acoustic-phonetic characteristics of hypo and hyper-articulated speech. Since our approaches are based on signal processing methods they do not require a model learned from supervised or unsupervised data. A series of speech recognition tests are conducted across various speech styles. The results show that we are able to achieve meaningful performance gain by using our approaches. 
It also indicates that the past acoustic-phonetic knowledge of the variability of speech is useful for improving the recognition performance of spontaneous speech including hypo and hyper-articulated speech.},\n  keywords = {acoustic signal processing;speech;speech recognition;spontaneous speech recognition system;data augmentation approaches;acoustic-phonetic characteristics;speech recognition tests;speech styles;acoustic-phonetic knowledge;recognition performance;acoustic-phonetic studies;hyperarticulated speech data augmentation;hypo-articulated speech;Speech recognition;Acoustics;Speech;Speech processing;Production;Maximum likelihood detection;Nonlinear filters;Speech recognition;data augmentation;hypo and hyperarticulation;speech synthesis},\n  doi = {10.23919/EUSIPCO.2018.8553555},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435096.pdf},\n}\n\n
\n
\n\n\n
\n Among many challenges in spontaneous speech recognition, we focus on the variability of speech depending on the degree of articulation such as hypo and hyperarticulation. In this paper, we investigate the feasibility of the past acoustic-phonetic studies on the variability of speech in terms of the data augmentation of a spontaneous speech recognition system. To do so, we develop data augmentation approaches to reflect the acoustic-phonetic characteristics of hypo and hyper-articulated speech. Since our approaches are based on signal processing methods they do not require a model learned from supervised or unsupervised data. A series of speech recognition tests are conducted across various speech styles. The results show that we are able to achieve meaningful performance gain by using our approaches. It also indicates that the past acoustic-phonetic knowledge of the variability of speech is useful for improving the recognition performance of spontaneous speech including hypo and hyper-articulated speech.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n 1-Bit Massive MIMO Downlink Based on Constructive Interference.\n \n \n \n \n\n\n \n Li, A.; Masouros, C.; and Swindlehurst, A. L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 927-931, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"1-BitPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553556,\n  author = {A. Li and C. Masouros and A. L. Swindlehurst},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {1-Bit Massive MIMO Downlink Based on Constructive Interference},\n  year = {2018},\n  pages = {927-931},\n  abstract = {In this paper, we focus on the multiuser massive multiple-input single-output (MISO) downlink with low-cost 1-bit digital-to-analog converters (DACs) for PSK modulation, and propose a low-complexity refinement process that is applicable to any existing 1-bit precoding approaches based on the constructive interference (CI) formulation. With the decomposition of the signals along the detection thresholds, we first formulate a simple symbol-scaling method as the performance metric. The low-complexity refinement approach is subsequently introduced, where we aim to improve the introduced symbol-scaling performance metric by modifying the transmit signal on one antenna at a time. Numerical results validate the effectiveness of the proposed refinement method on existing approaches for massive MIMO with 1-bit DACs, and the performance improvements are most significant for the low-complexity quantized zero-forcing (ZF) method.},\n  keywords = {antenna arrays;computational complexity;digital-analogue conversion;phase shift keying;precoding;quantisation (signal);radiofrequency interference;signal detection;low-complexity quantized ZF method;low-complexity quantized zero-forcing method;symbol-scaling performance metric;signal decomposition;1-bit precoding approaches;low-cost 1-bit digital-to-analog converters;1-bit DAC;low-complexity refinement process;PSK modulation;multiuser massive multiple-input single-output downlink;1-bit massive MIMO downlink;performance improvements;refinement method;transmit signal;performance metric;simple symbol-scaling method;detection thresholds;constructive interference formulation;Precoding;MIMO communication;Measurement;Downlink;Interference;Phase shift 
keying;Massive MIMO;1-bit quantization;refinement;constructive interference},\n  doi = {10.23919/EUSIPCO.2018.8553556},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570432232.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we focus on the multiuser massive multiple-input single-output (MISO) downlink with low-cost 1-bit digital-to-analog converters (DACs) for PSK modulation, and propose a low-complexity refinement process that is applicable to any existing 1-bit precoding approaches based on the constructive interference (CI) formulation. With the decomposition of the signals along the detection thresholds, we first formulate a simple symbol-scaling method as the performance metric. The low-complexity refinement approach is subsequently introduced, where we aim to improve the introduced symbol-scaling performance metric by modifying the transmit signal on one antenna at a time. Numerical results validate the effectiveness of the proposed refinement method on existing approaches for massive MIMO with 1-bit DACs, and the performance improvements are most significant for the low-complexity quantized zero-forcing (ZF) method.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sampling and Reconstruction of Band-limited Graph Signals using Graph Syndromes.\n \n \n \n \n\n\n \n Kumar, A. A.; Narendra, N.; Chandra, M. G.; and Kumar, K.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 892-896, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SamplingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553557,\n  author = {A. A. Kumar and N. Narendra and M. G. Chandra and K. Kumar},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sampling and Reconstruction of Band-limited Graph Signals using Graph Syndromes},\n  year = {2018},\n  pages = {892-896},\n  abstract = {The problem of sampling and reconstruction of band-limited graph signals is considered in this paper. A new sampling and reconstruction method based on the idea of error and erasure correction is proposed. We visualize the process of sampling as removal of nodes akin to introducing erasures, due to which the graph syndromes of a sampled signal give rise to significant values, which otherwise would be minuscule for a band-limited signal. A reconstruction method making use of these significant values in the graph syndromes is described, and correspondingly the necessary and sufficient conditions for unique recovery and some key properties are provided. Additionally, this method allows for robust reconstruction, i.e., reconstruction in the presence of a few corrupted sampled nodes, and a method based on the weighted l1-norm is described. 
Simulation results are provided to demonstrate the efficiency of the method, which shows better mean squared error performance compared to existing methods.},\n  keywords = {graph theory;mean square error methods;signal reconstruction;signal sampling;graph syndromes;sampled signal;band-limited signal;robust reconstruction;corrupted sampled nodes;band-limited graph signals;erasure correction;mean squared error performance;weighted l1-norm;Signal processing;Europe;Reconstruction algorithms;Laplace equations;Parity check codes;Symmetric matrices;Eigenvalues and eigenfunctions;Graph signal processing;Graph syndrome;error correction;Sampling and reconstruction;Robust reconstruction},\n  doi = {10.23919/EUSIPCO.2018.8553557},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437603.pdf},\n}\n\n
\n
\n\n\n
\n The problem of sampling and reconstruction of band-limited graph signals is considered in this paper. A new sampling and reconstruction method based on the idea of error and erasure correction is proposed. We visualize the process of sampling as removal of nodes akin to introducing erasures, due to which the graph syndromes of a sampled signal give rise to significant values, which otherwise would be minuscule for a band-limited signal. A reconstruction method making use of these significant values in the graph syndromes is described, and correspondingly the necessary and sufficient conditions for unique recovery and some key properties are provided. Additionally, this method allows for robust reconstruction, i.e., reconstruction in the presence of a few corrupted sampled nodes, and a method based on the weighted l1-norm is described. Simulation results are provided to demonstrate the efficiency of the method, which shows better mean squared error performance compared to existing methods.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Study on the Impact of Visualization Techniques on Light Field Perception.\n \n \n \n \n\n\n \n Battisti, F.; Carli, M.; and Callet, P. L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2155-2159, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553558,\n  author = {F. Battisti and M. Carli and P. L. Callet},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Study on the Impact of Visualization Techniques on Light Field Perception},\n  year = {2018},\n  pages = {2155-2159},\n  abstract = {Light Field imaging is a promising technology that allows to capture the whole set of light rays in a scene thus enabling the generation of perspective views from any position. This possibility can be exploited in several application scenarios, such as virtual and augmented reality or depth estimation. In this framework many issues arise due to different aspects such as the large amount of generated data or the need for dedicated and expensive hardware for Light Field capturing. Moreover, the Light Field carries information about the entire scene and the data that is delivered to the users largely differs from the traditional 2D and 3D media in terms of content and way of fruition. Dedicated rendering technology and devices for the Light Field are nowadays still not mature or quite expensive and the best option is to render the Light Field data on a conventional 2D screen. Consequently, there is the need for finding the best visualization technique that allows to exploit the information in the Light Field while being accepted by the viewers. 
In this paper we address this issue by considering six visualization options and by running experimental tests to study which is the technique preferred by the users.},\n  keywords = {augmented reality;data visualisation;rendering (computer graphics);rendering technology;2D screen;depth estimation;Light Field data;Light Field capturing;generated data;augmented reality;light rays;Light Field imaging;Light Field perception;visualization technique;Cameras;Two dimensional displays;Rendering (computer graphics);Standards;Three-dimensional displays;Image coding},\n  doi = {10.23919/EUSIPCO.2018.8553558},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437042.pdf},\n}\n\n
\n
\n\n\n
\n Light Field imaging is a promising technology that allows to capture the whole set of light rays in a scene thus enabling the generation of perspective views from any position. This possibility can be exploited in several application scenarios, such as virtual and augmented reality or depth estimation. In this framework many issues arise due to different aspects such as the large amount of generated data or the need for dedicated and expensive hardware for Light Field capturing. Moreover, the Light Field carries information about the entire scene and the data that is delivered to the users largely differs from the traditional 2D and 3D media in terms of content and way of fruition. Dedicated rendering technology and devices for the Light Field are nowadays still not mature or quite expensive and the best option is to render the Light Field data on a conventional 2D screen. Consequently, there is the need for finding the best visualization technique that allows to exploit the information in the Light Field while being accepted by the viewers. In this paper we address this issue by considering six visualization options and by running experimental tests to study which is the technique preferred by the users.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Efficient Ambiguity Resolution in Polarimetric Multi-View Stereo.\n \n \n \n \n\n\n \n Kumar, A. A.; Narendra, N.; Balamuralidhar, P.; and Chandra, M. G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 395-399, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"EfficientPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553559,\n  author = {A. A. Kumar and N. Narendra and P. Balamuralidhar and M. G. Chandra},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Efficient Ambiguity Resolution in Polarimetric Multi-View Stereo},\n  year = {2018},\n  pages = {395-399},\n  abstract = {Polarimetric multi-view stereo (PMS) reconstructs the dense 3D surface of a feature sparse object by combining the photometric information from polarization with the epipolar constraints from multiple views. In this paper, we propose a new approach based on the recent advances in graph signal processing (GSP) for efficient ambiguity resolution in PMS. A smooth graph which effectively captures the relational structure of the azimuth values is constructed using the estimated phase angle. By visualizing the actual azimuth available at the reliable depth points (corresponding to the feature-rich region) as a sampled graph signal, the azimuth at the remaining feature-limited region is estimated. Unlike the existing ambiguity resolution scheme in PMS which resolves only the π/2-ambiguity, the proposed approach resolves both the π and π/2-ambiguity. 
Simulation results are presented, which show that, in addition to resolving both ambiguities, the proposed GSP based method performs significantly better in resolving the π/2-ambiguity than the existing approach.},\n  keywords = {graph theory;image reconstruction;radar polarimetry;stereo image processing;efficient ambiguity resolution;polarimetric multiview stereo;PMS;dense 3D surface;feature sparse object;photometric information;graph signal processing;smooth graph;azimuth values;feature-rich region;sampled graph signal;ambiguity resolution scheme;Azimuth;Signal resolution;Three-dimensional displays;Surface reconstruction;Shape;Estimation;Symmetric matrices;Shape from Polarization;Polarimetric multiview stereo;azimuth ambiguity;Graph signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553559},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436401.pdf},\n}\n\n
\n
\n\n\n
\n Polarimetric multi-view stereo (PMS) reconstructs the dense 3D surface of a feature sparse object by combining the photometric information from polarization with the epipolar constraints from multiple views. In this paper, we propose a new approach based on the recent advances in graph signal processing (GSP) for efficient ambiguity resolution in PMS. A smooth graph which effectively captures the relational structure of the azimuth values is constructed using the estimated phase angle. By visualizing the actual azimuth available at the reliable depth points (corresponding to the feature-rich region) as a sampled graph signal, the azimuth at the remaining feature-limited region is estimated. Unlike the existing ambiguity resolution scheme in PMS which resolves only the π/2-ambiguity, the proposed approach resolves both the π and π/2-ambiguity. Simulation results are presented, which show that, in addition to resolving both ambiguities, the proposed GSP based method performs significantly better in resolving the π/2-ambiguity than the existing approach.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Analysis of Adversarial Attacks against CNN-based Image Forgery Detectors.\n \n \n \n \n\n\n \n Gragnaniello, D.; Marra, F.; Poggi, G.; and Verdoliva, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 967-971, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnalysisPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553560,\n  author = {D. Gragnaniello and F. Marra and G. Poggi and L. Verdoliva},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Analysis of Adversarial Attacks against CNN-based Image Forgery Detectors},\n  year = {2018},\n  pages = {967-971},\n  abstract = {With the ubiquitous diffusion of social networks, images are becoming a dominant and powerful communication channel. Not surprisingly, they are also increasingly subject to manipulations aimed at distorting information and spreading fake news. In recent years, the scientific community has devoted major efforts to counter this menace, and many image forgery detectors have been proposed. Currently, due to the success of deep learning in many multimedia processing tasks, there is high interest towards CNN-based detectors, and early results are already very promising. Recent studies in computer vision, however, have shown CNNs to be highly vulnerable to adversarial attacks, small perturbations of the input data which drive the network towards erroneous classification. 
In this paper we analyze the vulnerability of CNN-based image forensics methods to adversarial attacks, considering several detectors and several types of attack, and testing performance on a wide range of common manipulations, both easy and hard to detect.},\n  keywords = {convolution;feedforward neural nets;image classification;image forensics;learning (artificial intelligence);multimedia systems;object detection;social networking (online);ubiquitous computing;deep learning;CNN-based detectors;CNN-based image forensics methods;CNN-based image forgery detectors;ubiquitous diffusion;social networks;communication channel;adversarial attacks analysis;information distortion;fake news;image classification;multimedia processing tasks;Detectors;Feature extraction;Forensics;Training;Signal processing;Tools;Image counterforensics;convolutional neural networks;generative adversarial networks},\n  doi = {10.23919/EUSIPCO.2018.8553560},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439399.pdf},\n}\n\n
\n
\n\n\n
\n With the ubiquitous diffusion of social networks, images are becoming a dominant and powerful communication channel. Not surprisingly, they are also increasingly subject to manipulations aimed at distorting information and spreading fake news. In recent years, the scientific community has devoted major efforts to countering this menace, and many image forgery detectors have been proposed. Currently, due to the success of deep learning in many multimedia processing tasks, there is high interest in CNN-based detectors, and early results are already very promising. Recent studies in computer vision, however, have shown CNNs to be highly vulnerable to adversarial attacks, small perturbations of the input data which drive the network towards erroneous classification. In this paper we analyze the vulnerability of CNN-based image forensics methods to adversarial attacks, considering several detectors and several types of attack, and testing performance on a wide range of common manipulations, from easily to hardly detectable.\n
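As an illustration of the attack class this abstract analyzes, here is a minimal fast-gradient-sign (FGSM-style) sketch against a toy logistic "detector". The weights, input, and step size are invented stand-ins, not the paper's CNN detectors:

```python
import numpy as np

# Toy "forgery detector": logistic regression on a flattened image.
# FGSM perturbs the input by eps * sign(gradient of the loss w.r.t. the input).
rng = np.random.default_rng(0)
w = rng.normal(size=64)          # hypothetical trained weights
x = rng.normal(size=64)          # hypothetical input "image"
y = 1.0                          # true label: forged

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

p_clean = sigmoid(w @ x)         # detector's score on the clean input

# Gradient of the logistic loss -log p w.r.t. x is (p - y) * w.
grad_x = (p_clean - y) * w
eps = 0.5                        # attack strength (arbitrary)
x_adv = x + eps * np.sign(grad_x)
p_adv = sigmoid(w @ x_adv)       # score drops: the detector is pushed toward error
```

The same one-step construction, applied to a CNN's input gradient, is the canonical small-perturbation attack the abstract refers to.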
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Entry-wise Matrix Completion from Noisy Entries.\n \n \n \n \n\n\n \n Sabetsarvestani, Z.; Kiraly, F.; Miguel, R.; and Rodrigues, D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2603-2607, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Entry-wisePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553561,\n  author = {Z. Sabetsarvestani and F. Kiraly and R. Miguel and D. Rodrigues},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Entry-wise Matrix Completion from Noisy Entries},\n  year = {2018},\n  pages = {2603-2607},\n  abstract = {We address the problem of entry-wise low-rank matrix completion in the noisy observation model. We propose a new noise robust estimator where we characterize the bias and variance of the estimator in a finite sample setting. Utilizing this estimator, we provide a new robust local matrix completion algorithm that outperforms other classic methods in reconstructing large rectangular matrices arising in a wide range of applications such as athletic performance prediction and recommender systems. The simulation results on synthetic and real data show that our algorithm outperforms other state-of-the-art and baseline algorithms in matrix completion in reconstructing rectangular matrices.},\n  keywords = {estimation theory;matrix algebra;signal processing;entry-wise matrix completion;noisy entries;entry-wise low-rank matrix completion;noisy observation model;noise robust estimator;finite sample setting;robust local matrix completion algorithm;rectangular matrices;athletic performance prediction;recommender systems;Matrices;Noise measurement;Prediction algorithms;Signal processing algorithms;Estimation;Europe;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553561},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437801.pdf},\n}\n\n
\n
\n\n\n
\n We address the problem of entry-wise low-rank matrix completion in the noisy observation model. We propose a new noise-robust estimator and characterize its bias and variance in a finite-sample setting. Utilizing this estimator, we provide a new robust local matrix completion algorithm that outperforms other classic methods in reconstructing large rectangular matrices arising in a wide range of applications such as athletic performance prediction and recommender systems. Simulation results on synthetic and real data show that our algorithm outperforms state-of-the-art and baseline matrix completion algorithms in reconstructing rectangular matrices.\n
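The entry-wise, local flavor of matrix completion can be sketched in the noiseless rank-1 case. This is the textbook local identity, not the authors' estimator, which additionally handles noisy entries via a bias/variance analysis:

```python
import numpy as np

# For a rank-1 matrix M = u v^T, any missing entry satisfies
#   M[i, j] = M[i, l] * M[k, j] / M[k, l]
# for any observed "anchor" entries (i,l), (k,j), (k,l). With noisy entries,
# one would average many such local estimates; this is the noiseless idea only.
rng = np.random.default_rng(1)
u, v = rng.normal(size=5), rng.normal(size=4)
M = np.outer(u, v)                       # rank-1 ground truth

def estimate_entry(M, i, j, k, l):
    """Local estimate of M[i, j] from three other entries."""
    return M[i, l] * M[k, j] / M[k, l]

est = estimate_entry(M, 0, 0, 2, 3)      # reconstructs M[0, 0] exactly
```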
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Unified Stochastic Reverberation Modeling.\n \n \n \n \n\n\n \n Badeau, R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2175-2179, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"UnifiedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553562,\n  author = {R. Badeau},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Unified Stochastic Reverberation Modeling},\n  year = {2018},\n  pages = {2175-2179},\n  abstract = {In the field of room acoustics, it is well known that reverberation can be characterized statistically in a particular region of the time-frequency domain (after the transition time and above Schroeder's frequency). Since the 1950s, various formulas have been established, focusing on particular aspects of reverberation: exponential decay over time, correlations between frequencies, correlations between sensors at each frequency, and time-frequency distribution. In this paper, we introduce a new stochastic reverberation model, that permits us to retrieve all these well-known results within a common mathematical framework. To the best of our knowledge, this is the first time that such a unification work is presented. The benefits are multiple: several new formulas generalizing the classical results are established, that jointly characterize the spatial, temporal and spectral properties of late reverberation.},\n  keywords = {architectural acoustics;Gaussian processes;reverberation;stochastic processes;time-frequency analysis;room acoustics;unified stochastic reverberation modeling;common mathematical framework;stochastic reverberation model;time-frequency distribution;exponential decay;Schroeder's frequency;transition time;time-frequency domain;Reverberation;Microphones;Stochastic processes;Sensors;Time-frequency analysis;Mathematical model;Trajectory;Reverberation;room impulse response;room frequency response;stochastic models;Poisson processes;stationary processes;Wigner distribution},\n  doi = {10.23919/EUSIPCO.2018.8553562},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435847.pdf},\n}\n\n
\n
\n\n\n
\n In the field of room acoustics, it is well known that reverberation can be characterized statistically in a particular region of the time-frequency domain (after the transition time and above Schroeder's frequency). Since the 1950s, various formulas have been established, focusing on particular aspects of reverberation: exponential decay over time, correlations between frequencies, correlations between sensors at each frequency, and time-frequency distribution. In this paper, we introduce a new stochastic reverberation model that permits us to retrieve all these well-known results within a common mathematical framework. To the best of our knowledge, this is the first time that such a unification work is presented. The benefits are multiple: several new formulas generalizing the classical results are established that jointly characterize the spatial, temporal and spectral properties of late reverberation.\n
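A minimal numerical illustration of the classical stochastic picture the paper unifies: late reverberation behaves like Gaussian noise under an exponentially decaying envelope. The sample rate and decay constant below are arbitrary assumptions, not values from the paper:

```python
import numpy as np

# Classic stochastic late-reverberation model: the room impulse response tail
# is modeled as h(t) ~ n(t) * exp(-t / tau), Gaussian noise times an
# exponential decay. fs and tau are illustrative choices.
rng = np.random.default_rng(2)
fs = 8000                                  # sample rate (Hz), assumed
tau = 0.05                                 # decay time constant (s), assumed
t = np.arange(int(0.4 * fs)) / fs          # 0.4 s tail
h = rng.normal(size=t.size) * np.exp(-t / tau)

# The exponential envelope shows up as energy decay over time:
e_first = np.sum(h[: h.size // 2] ** 2)    # energy of the first half
e_second = np.sum(h[h.size // 2:] ** 2)    # energy of the second half
```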
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Extension of Averaged-Operator-Based algorithms.\n \n \n \n \n\n\n \n Simões, M.; Bioucas-Dias, J.; and Almeida, L. B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 752-756, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553563,\n  author = {M. Simões and J. Bioucas-Dias and L. B. Almeida},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An Extension of Averaged-Operator-Based algorithms},\n  year = {2018},\n  pages = {752-756},\n  abstract = {Many of the algorithms used to solve minimization problems with sparsity-inducing regularizers are generic in the sense that they do not take into account the sparsity of the solution in any particular way. However, algorithms known as semismooth Newton are able to take advantage of this sparsity to accelerate their convergence. We show how to extend these algorithms in different directions, and study the convergence of the resulting algorithms by showing that they are a particular case of an extension of the well-known Krasnosel'skiĭ-Mann scheme.},\n  keywords = {convergence of numerical methods;mathematical operators;minimisation;Newton method;averaged-operator-based algorithms;minimization problems;sparsity-inducing regularizers;semismooth Newton method;convergence;Signal processing algorithms;Newton method;Convergence;Optimization;Radio frequency;IEEE Sections;Europe;Convex nonsmooth optimization;primal-dual optimization;semismooth Newton method;forward-backward method;variable metric},\n  doi = {10.23919/EUSIPCO.2018.8553563},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438124.pdf},\n}\n\n
\n
\n\n\n
\n Many of the algorithms used to solve minimization problems with sparsity-inducing regularizers are generic in the sense that they do not take into account the sparsity of the solution in any particular way. However, algorithms known as semismooth Newton are able to take advantage of this sparsity to accelerate their convergence. We show how to extend these algorithms in different directions, and study the convergence of the resulting algorithms by showing that they are a particular case of an extension of the well-known Krasnosel'skiĭ-Mann scheme.\n
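The Krasnosel'skiĭ-Mann scheme referenced above, in its minimal form: averaging a nonexpansive operator T, x_{k+1} = (1-a) x_k + a T(x_k), produces convergence to a fixed point even when plain iteration of T would not converge. The operator here (a rotation plus a shift) is an invented toy example, not the paper's extension:

```python
import numpy as np

# T(x) = R x + b with R a 90-degree rotation: nonexpansive but not a
# contraction, so x <- T(x) would orbit forever. The averaged (KM) iteration
# x <- (1 - a) x + a T(x) converges to the fixed point of T.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])
b = np.array([1.0, 0.0])

def T(x):
    return R @ x + b

x = np.zeros(2)
a = 0.5                                     # averaging parameter in (0, 1)
for _ in range(200):
    x = (1 - a) * x + a * T(x)

x_star = np.linalg.solve(np.eye(2) - R, b)  # the true fixed point: T(x*) = x*
```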
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n LCMV Beamformer with DNN-Based Multichannel Concurrent Speakers Detector.\n \n \n \n \n\n\n \n Chazan, S. E.; Goldberger, J.; and Gannot, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1562-1566, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"LCMVPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553564,\n  author = {S. E. Chazan and J. Goldberger and S. Gannot},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {LCMV Beamformer with DNN-Based Multichannel Concurrent Speakers Detector},\n  year = {2018},\n  pages = {1562-1566},\n  abstract = {Application of the linearly constrained minimum variance (LCMV) beamformer (BF) to speaker extraction tasks in real-life scenarios necessitates a sophisticated control mechanism to facilitate the estimation of the noise spatial cross-power spectral density (cPSD) matrix and the relative transfer function (RTF) of all sources of interest. We propose a deep neural network (DNN)-based multichannel concurrent speakers detector (MCCSD) that utilizes all available microphone signals to detect the activity patterns of all speakers. Time frames classified as no active speaker frames will be utilized to estimate the cPSD, while time frames with a single detected speaker will be utilized for estimating the associated RTF. No estimation will take place during concurrent speaker activity. 
Experimental results show that the multi-channel approach significantly improves its single-channel counterpart.},\n  keywords = {array signal processing;microphones;neural nets;signal classification;speaker recognition;transfer function matrices;LCMV BF;noise spatial cross-power spectral density matrix estimation;noise spatial cPSD matrix estimation;RTF;DNN-based MCCSD;microphone signal;speaker activity pattern detection;time frame classification;multichannel approach;concurrent speaker activity;single detected speaker;active speaker frames;deep neural network-based multichannel concurrent speakers detector;relative transfer function;speaker extraction tasks;linearly constrained minimum variance beamformer;Microphones;Estimation;Detectors;Databases;Dictionaries;Noise measurement;Interference},\n  doi = {10.23919/EUSIPCO.2018.8553564},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437111.pdf},\n}\n\n
\n
\n\n\n
\n Application of the linearly constrained minimum variance (LCMV) beamformer (BF) to speaker extraction tasks in real-life scenarios necessitates a sophisticated control mechanism to facilitate the estimation of the noise spatial cross-power spectral density (cPSD) matrix and the relative transfer function (RTF) of all sources of interest. We propose a deep neural network (DNN)-based multichannel concurrent speakers detector (MCCSD) that utilizes all available microphone signals to detect the activity patterns of all speakers. Time frames classified as containing no active speaker will be utilized to estimate the cPSD, while time frames with a single detected speaker will be utilized for estimating the associated RTF. No estimation will take place during concurrent speaker activity. Experimental results show that the multi-channel approach significantly improves on its single-channel counterpart.\n
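For reference, once the cPSD matrix R and the constraint set are estimated, the LCMV weights have the standard closed form w = R⁻¹C (Cᴴ R⁻¹ C)⁻¹ f, minimizing output power wᴴRw subject to Cᴴw = f. A sketch with random placeholder matrices (not real microphone data or the paper's DNN-driven estimates):

```python
import numpy as np

rng = np.random.default_rng(3)
M, K = 6, 2                                  # microphones, constraints
# Hermitian positive-definite stand-in for the noise cPSD matrix:
A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
R = A @ A.conj().T + M * np.eye(M)
# Columns of C play the role of the sources' RTF vectors:
C = rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))
f = np.array([1.0, 0.0])                     # pass source 1, null source 2

# w = R^{-1} C (C^H R^{-1} C)^{-1} f, computed via solves (no explicit inverse):
Ri_C = np.linalg.solve(R, C)
w = Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)
```

By construction the constraints hold exactly: the desired source is passed undistorted and the interferer is cancelled.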
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Collaborative Speech Dereverberation: Regularized Tensor Factorization for Crowdsourced Multi-Channel Recordings.\n \n \n \n \n\n\n \n Wager, S.; and Kim, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1532-1536, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"CollaborativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553565,\n  author = {S. Wager and M. Kim},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Collaborative Speech Dereverberation: Regularized Tensor Factorization for Crowdsourced Multi-Channel Recordings},\n  year = {2018},\n  pages = {1532-1536},\n  abstract = {We propose a regularized nonnegative tensor factorization (NTF) model for multi-channel speech dereverberation that incorporates prior knowledge about clean speech. The approach models the problem as recovering a signal convolved with different room impulse responses, allowing the dereverberation problem to benefit from microphone arrays. The factorization learns both individual reverberation filters and channel-specific delays, which makes it possible to employ an ad-hoc microphone array with heterogeneous sensors (such as multi-channel recordings by a crowd) even if they are not synchronized. We integrate two prior-knowledge regularization schemes to increase the stability of dereverberation performance. First, a Nonnegative Matrix Factorization (NMF) inner routine is introduced to inform the original NTF problem of the pre-trained clean speech basis vectors, so that the optimization process can focus on estimating their activations rather than the whole clean speech spectra. Second, the NMF activation matrix is further regularized to take on characteristics of dry signals using sparsity and smoothness constraints. 
Empirical dereverberation results on different simulated reverberation setups show that the prior-knowledge regularization schemes improve both recovered sound quality and speech intelligibility compared to a baseline NTF approach.},\n  keywords = {crowdsourcing;learning (artificial intelligence);matrix decomposition;microphone arrays;reverberation;sensor arrays;speech intelligibility;speech processing;tensors;vectors;regularized tensor factorization;crowdsourced multichannel recordings;regularized nonnegative tensor factorization model;multichannel speech dereverberation;individual reverberation filters;channel-specific delays;ad-hoc microphone array;pre-trained clean speech basis vectors;clean speech spectra;NMF activation matrix;speech intelligibility;baseline NTF approach;room impulse responses;nonnegative matrix factorization;collaborative speech dereverberation problem;heterogeneous sensors;NMF;Tensile stress;Linear programming;Sensors;Speech processing;Reverberation;Convolution;Europe;multi-channel dereverberation;nonnegative matrix factorization;nonnegative tensor factorization;collaborative audio enhancement;speech enhancement},\n  doi = {10.23919/EUSIPCO.2018.8553565},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438378.pdf},\n}\n\n
\n
\n\n\n
\n We propose a regularized nonnegative tensor factorization (NTF) model for multi-channel speech dereverberation that incorporates prior knowledge about clean speech. The approach models the problem as recovering a signal convolved with different room impulse responses, allowing the dereverberation problem to benefit from microphone arrays. The factorization learns both individual reverberation filters and channel-specific delays, which makes it possible to employ an ad-hoc microphone array with heterogeneous sensors (such as multi-channel recordings by a crowd) even if they are not synchronized. We integrate two prior-knowledge regularization schemes to increase the stability of dereverberation performance. First, a Nonnegative Matrix Factorization (NMF) inner routine is introduced to inform the original NTF problem of the pre-trained clean speech basis vectors, so that the optimization process can focus on estimating their activations rather than the whole clean speech spectra. Second, the NMF activation matrix is further regularized to take on characteristics of dry signals using sparsity and smoothness constraints. Empirical dereverberation results on different simulated reverberation setups show that the prior-knowledge regularization schemes improve both recovered sound quality and speech intelligibility compared to a baseline NTF approach.\n
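The NMF inner routine mentioned in the abstract can be illustrated with the standard Euclidean multiplicative updates for V ≈ WH. This is the generic baseline building block, not the paper's regularized NTF model; sizes are arbitrary toy values:

```python
import numpy as np

# Lee-Seung multiplicative updates for nonnegative V ~= W @ H under the
# Euclidean (Frobenius) cost. Updates keep W, H nonnegative and do not
# increase the reconstruction error.
rng = np.random.default_rng(4)
V = rng.random((20, 30))          # nonnegative "spectrogram" stand-in
W = rng.random((20, 5))           # basis (e.g. clean-speech spectral patterns)
H = rng.random((5, 30))           # activations

err0 = np.linalg.norm(V - W @ H)
for _ in range(100):
    H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # small epsilon avoids division by 0
    W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
err1 = np.linalg.norm(V - W @ H)
```

In the paper's setting, W would be pre-trained on clean speech and held fixed, so only the activations H are estimated (with sparsity and smoothness regularizers added).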
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Motor Condition Monitoring by Empirical Wavelet Transform.\n \n \n \n \n\n\n \n Eren, L.; Cekic, Y.; and Devaney, M. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 196-200, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"MotorPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553566,\n  author = {L. Eren and Y. Cekic and M. J. Devaney},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Motor Condition Monitoring by Empirical Wavelet Transform},\n  year = {2018},\n  pages = {196-200},\n  abstract = {Bearing faults are by far the biggest single source of motor failures. Both fast Fourier (frequency based) and wavelet (time-scale based) transforms are used commonly in analyzing raw vibration or current data to detect bearing faults. A hybrid method, Empirical Wavelet Transform (EWT), is used in this study to provide better accuracy in detecting faults from bearing vibration data. In the proposed method, the raw vibration data is processed by fast Fourier transform. Then, the Fourier spectrum of the vibration signal is divided into segments adaptively with each segment containing part of the frequency band. Next, the wavelet transform is applied to all segments. Finally, inverse Fourier transform is utilized to obtain time domain signal with the frequency band of interest from EWT coefficients to detect bearing faults. The bearing fault related segments are identified by comparing rms values of healthy bearing vibration signal segments with the same segments of faulty bearing. 
The main advantage of the proposed method is the possibility of extracting the segments of interest from the original vibration data for determining both fault type and severity.},\n  keywords = {condition monitoring;fast Fourier transforms;fault diagnosis;machine bearings;vibrational signal processing;vibrations;wavelet transforms;motor condition monitoring;Empirical Wavelet Transform;motor failures;bearing fault detection;fast Fourier transforms;vibration signal processing;inverse Fourier transform;empirical wavelet transform;Fourier transform;induction motors;bearing faults component},\n  doi = {10.23919/EUSIPCO.2018.8553566},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439411.pdf},\n}\n\n
\n
\n\n\n
\n Bearing faults are by far the biggest single source of motor failures. Both fast Fourier (frequency based) and wavelet (time-scale based) transforms are commonly used in analyzing raw vibration or current data to detect bearing faults. A hybrid method, Empirical Wavelet Transform (EWT), is used in this study to provide better accuracy in detecting faults from bearing vibration data. In the proposed method, the raw vibration data is processed by fast Fourier transform. Then, the Fourier spectrum of the vibration signal is divided adaptively into segments, each containing part of the frequency band. Next, the wavelet transform is applied to all segments. Finally, the inverse Fourier transform is utilized to obtain the time-domain signal in the frequency band of interest from the EWT coefficients to detect bearing faults. The bearing-fault-related segments are identified by comparing the RMS values of healthy bearing vibration signal segments with the same segments of a faulty bearing. The main advantage of the proposed method is the possibility of extracting the segments of interest from the original vibration data for determining both fault type and severity.\n
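The FFT → band segmentation → per-band time signal pipeline at the core of the abstract can be sketched as follows. The bands here are fixed for brevity (the paper selects them adaptively from the Fourier spectrum), and the signal is a random stand-in for vibration data:

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=1024)              # stand-in for raw vibration data
X = np.fft.rfft(x)                     # step 1: Fourier spectrum
edges = [0, 100, 300, X.size]          # step 2: band boundaries (bins), illustrative

# Step 3: inverse-transform each band to a time-domain component whose RMS
# can be compared against the same band of a healthy-bearing baseline.
components = []
for lo, hi in zip(edges[:-1], edges[1:]):
    Xb = np.zeros_like(X)
    Xb[lo:hi] = X[lo:hi]               # keep only this band
    components.append(np.fft.irfft(Xb, n=x.size))

rms = [np.sqrt(np.mean(c ** 2)) for c in components]
```

Because the bands partition the spectrum, the components sum back to the original signal, so no information is lost by the segmentation.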
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Missing Sample Estimation Based on High-Order Sparse Linear Prediction for Audio Signals.\n \n \n \n \n\n\n \n Derebssa Dufera, B.; Eneman, K.; and van Waterschoot , T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2464-2468, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"MissingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553567,\n  author = {B. {Derebssa Dufera} and K. Eneman and T. {van Waterschoot}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Missing Sample Estimation Based on High-Order Sparse Linear Prediction for Audio Signals},\n  year = {2018},\n  pages = {2464-2468},\n  abstract = {The restoration of click degraded audio signals is important to achieve acceptable audio quality in many old audio media. Restoration by missing sample estimation based on conventional linear prediction has been extensively researched and used; however, it is hampered by the limitations of the linear prediction model. Recently, it has been shown that high-order sparse linear prediction offers better representation of music and voiced speech over conventional linear prediction. In this paper, the use of high-order sparse linear prediction for missing sample estimation of click degraded audio signals is proposed. The paper also explores a possible computational time saving by combining the high-order sparse linear prediction coefficient determination and filtering operations. 
Evaluation with different types of speech and audio data shows that the proposed method achieves an improvement in SNR over conventional linear prediction based filtering for all considered speech and audio data types.},\n  keywords = {audio signal processing;filtering theory;prediction theory;high-order sparse linear prediction;click degraded audio signals;acceptable audio quality;old audio media;missing sample estimation;conventional linear prediction;filtering operations;high-order sparse linear prediction coefficient determination;Estimation;Signal processing algorithms;Europe;Prediction algorithms;Approximation algorithms;Signal processing;Predictive models;Missing sample estimation;Click degradation;Linear prediction;High-order sparse linear prediction},\n  doi = {10.23919/EUSIPCO.2018.8553567},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436716.pdf},\n}\n\n
\n
\n\n\n
\n The restoration of click-degraded audio signals is important to achieve acceptable audio quality in many old audio media. Restoration by missing sample estimation based on conventional linear prediction has been extensively researched and used; however, it is hampered by the limitations of the linear prediction model. Recently, it has been shown that high-order sparse linear prediction offers better representation of music and voiced speech than conventional linear prediction. In this paper, the use of high-order sparse linear prediction for missing sample estimation of click-degraded audio signals is proposed. The paper also explores a possible computational time saving by combining the high-order sparse linear prediction coefficient determination and filtering operations. Evaluation with different types of speech and audio data shows that the proposed method achieves an improvement in SNR over conventional linear prediction based filtering for all considered speech and audio data types.\n
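The underlying linear-prediction interpolation idea can be shown in its conventional low-order form: fit an AR model to the clean samples, then predict a "missing" sample from its past. The order, signal, and noise level below are toy choices; the paper instead uses high-order *sparse* LP:

```python
import numpy as np

# Generate a toy AR(2) process to stand in for an audio signal.
rng = np.random.default_rng(6)
a_true = np.array([0.75, -0.5])
x = np.zeros(500)
for n in range(2, x.size):
    x[n] = a_true[0] * x[n - 1] + a_true[1] * x[n - 2] + rng.normal(scale=0.1)

# Conventional LP: least-squares fit of x[n] ~= sum_k a_k x[n - k] on the
# clean region (samples 0..398), leaving sample 400 as the "missing" one.
p = 2                                            # predictor order (toy value)
n_fit = np.arange(p, 399)
A = np.column_stack([x[n_fit - k] for k in range(1, p + 1)])
a_hat, *_ = np.linalg.lstsq(A, x[n_fit], rcond=None)

# Forward prediction of the missing sample from its p predecessors.
x_hat = a_hat @ x[399:399 - p:-1]                # [x[399], x[398]]
```

A click restorer would combine forward and backward predictions over the whole gap; this one-sample sketch only shows the prediction step itself.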
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Advanced cyclostationary-based analysis for condition monitoring of complex systems.\n \n \n \n \n\n\n \n Gryllias, K.; Mauricio, A.; and Qi, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 385-389, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AdvancedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553568,\n  author = {K. Gryllias and A. Mauricio and J. Qi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Advanced cyclostationary-based analysis for condition monitoring of complex systems},\n  year = {2018},\n  pages = {385-389},\n  abstract = {Wind energy experiences a significant growth during the last decades but the industry is still challenged by premature turbine component failures, which are quite expensive due to the increase of turbines size. The core of wind turbine drivetrains is a planetary gearbox and its rolling element bearings are often responsible for machinery breakdowns. The failure signs of an early bearing damage are usually weak compared to the gear excitation and are hardly detected. As a result there is a special need for advanced signal processing tools which can detect accurately bearing faults. Cyclic Spectral Coherence (CSC) appears to be a strong diagnostic tool but its interpretation is complicated for a non-expert. In this paper a novel CSC based methodology is proposed in order to extract an Improved Envelope Spectrum exploiting a specific domain of the CSC map optimally selected by a proposed criterion. 
The methodology is tested and validated on a wind turbine gearbox benchmarking dataset provided by the National Renewable Energy Laboratory (NREL), USA.},\n  keywords = {condition monitoring;fault diagnosis;gears;mechanical engineering computing;rolling bearings;signal processing;vibrations;wind turbines;bearing faults;cyclic spectral coherence;improved envelope spectrum;condition monitoring;advanced cyclostationary-based analysis;National Renewable Energy Laboratory;wind turbine gearbox benchmarking dataset;CSC map;novel CSC based methodology;strong diagnostic tool;advanced signal processing tools;gear excitation;early bearing damage;failure signs;machinery breakdowns;rolling element bearings;planetary gearbox;wind turbine drivetrains;turbines size;premature turbine component failures;wind energy experiences;complex systems;Wind turbines;Correlation;Shafts;Frequency modulation;Vibrations;Frequency estimation;Condition monitoring;Signal Processing;Cyclostationary Analysis;Cyclic Spectral Coherence;Condition Monitoring;Fault detection},\n  doi = {10.23919/EUSIPCO.2018.8553568},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439414.pdf},\n}\n\n
\n
\n\n\n
\n Wind energy has experienced significant growth during the last decades, but the industry is still challenged by premature turbine component failures, which are quite expensive due to the increase in turbine size. The core of a wind turbine drivetrain is a planetary gearbox, and its rolling element bearings are often responsible for machinery breakdowns. The failure signs of early bearing damage are usually weak compared to the gear excitation and are hard to detect. As a result, there is a special need for advanced signal processing tools which can accurately detect bearing faults. Cyclic Spectral Coherence (CSC) appears to be a strong diagnostic tool, but its interpretation is complicated for a non-expert. In this paper a novel CSC-based methodology is proposed to extract an Improved Envelope Spectrum, exploiting a specific domain of the CSC map optimally selected by a proposed criterion. The methodology is tested and validated on a wind turbine gearbox benchmarking dataset provided by the National Renewable Energy Laboratory (NREL), USA.\n
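The diagnostic target here is an envelope spectrum that reveals the bearing fault's modulation frequency. As background, this is the plain squared-envelope spectrum that the proposed Improved Envelope Spectrum refines via the CSC map; the carrier and modulation frequencies are invented, not from the NREL dataset:

```python
import numpy as np

# A 3 kHz "resonance" amplitude-modulated at 50 Hz stands in for a structural
# band excited by periodic bearing-fault impacts.
fs = 20000
t = np.arange(fs) / fs                    # 1 s of signal -> 1 Hz bins below
x = (1 + 0.8 * np.cos(2 * np.pi * 50 * t)) * np.cos(2 * np.pi * 3000 * t)

# Analytic signal via FFT (same construction as scipy.signal.hilbert):
# zero the negative frequencies, double the positive ones.
X = np.fft.fft(x)
h = np.zeros(x.size)
h[0] = 1.0
h[1:x.size // 2] = 2.0
h[x.size // 2] = 1.0                      # Nyquist bin (even length)
analytic = np.fft.ifft(X * h)

# Squared envelope and its spectrum: the modulation frequency pops out.
env2 = np.abs(analytic) ** 2
E = np.abs(np.fft.rfft(env2 - env2.mean()))
peak_hz = int(np.argmax(E[1:])) + 1       # bin index == frequency in Hz here
```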
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Jacobi Algorithm for Nonnegative Matrix Factorization with Transform Learning.\n \n \n \n \n\n\n \n Wendt, H.; Fagot, D.; and Févotte, C.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1062-1066, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"JacobiPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553569,\n  author = {H. Wendt and D. Fagot and C. Févotte},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Jacobi Algorithm for Nonnegative Matrix Factorization with Transform Learning},\n  year = {2018},\n  pages = {1062-1066},\n  abstract = {Nonnegative matrix factorization (NMF) is the state-of-the-art approach to unsupervised audio source separation. It relies on the factorization of a given short-time frequency transform into a dictionary of spectral patterns and an activation matrix. Recently, we introduced transform learning for NMF (TL-NMF), in which the short-time transform is learnt together with the nonnegative factors. We imposed the transform to be orthogonal likewise the usual Fourier or Cosine transform. TL-NMF yields an original non-convex optimization problem over the manifold of orthogonal matrices, for which we proposed a projected gradient descent algorithm in our previous work. In this contribution we describe a new Jacobi approach in which the orthogonal matrix is represented as a randomly chosen product of elementary Givens matrices. The new approach performs favorably as compared to the gradient approach, in particular in terms of robustness with respect to initialization, as illustrated with synthetic and audio decomposition experiments. 
},\n  keywords = {convex programming;learning (artificial intelligence);matrix algebra;matrix decomposition;source separation;transforms;nonnegative matrix factorization;nonconvex optimization problem;nonnegative factors;TL-NMF;activation matrix;short-time frequency;unsupervised audio source separation;state-of-the-art approach;Jacobi algorithm;transform learning;orthogonal matrix;projected gradient descent algorithm;orthogonal matrices;Jacobian matrices;Transforms;Signal processing algorithms;Intellectual property;Matrix decomposition;Linear programming;Signal processing},\n  doi = {10.23919/EUSIPCO.2018.8553569},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437280.pdf},\n}\n\n
\n
\n\n\n
\n Nonnegative matrix factorization (NMF) is the state-of-the-art approach to unsupervised audio source separation. It relies on the factorization of a given short-time frequency transform into a dictionary of spectral patterns and an activation matrix. Recently, we introduced transform learning for NMF (TL-NMF), in which the short-time transform is learnt together with the nonnegative factors. We imposed the transform to be orthogonal, like the usual Fourier or Cosine transforms. TL-NMF yields an original non-convex optimization problem over the manifold of orthogonal matrices, for which we proposed a projected gradient descent algorithm in our previous work. In this contribution we describe a new Jacobi approach in which the orthogonal matrix is represented as a randomly chosen product of elementary Givens matrices. The new approach performs favorably as compared to the gradient approach, in particular in terms of robustness with respect to initialization, as illustrated with synthetic and audio decomposition experiments.\n
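The Jacobi parameterization rests on the fact that any product of elementary Givens rotations is exactly orthogonal, so the manifold constraint holds by construction at every step. A quick check with randomly chosen rotation pairs (dimensions and counts are arbitrary):

```python
import numpy as np

def givens(n, p, q, theta):
    """Elementary Givens rotation acting on coordinates p and q."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[p, p] = G[q, q] = c
    G[p, q], G[q, p] = -s, s
    return G

rng = np.random.default_rng(7)
n = 6
Phi = np.eye(n)
for _ in range(20):                        # randomly chosen coordinate pairs
    p, q = rng.choice(n, size=2, replace=False)
    Phi = givens(n, p, q, rng.uniform(0, 2 * np.pi)) @ Phi
```

In TL-NMF's Jacobi step, the rotation angles (rather than a full matrix) become the optimization variables, which is what keeps the learnt transform orthogonal without any projection.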
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n ADS-B Signal Signature Extraction for Intrusion Detection in the Air Traffic Surveillance System.\n \n \n \n \n\n\n \n Leonardi, M.; and Di Fausto, D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2564-2568, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ADS-BPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553570,\n  author = {M. Leonardi and D. {Di Fausto}},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {ADS-B Signal Signature Extraction for Intrusion Detection in the Air Traffic Surveillance System},\n  year = {2018},\n  pages = {2564-2568},\n  abstract = {Automatic Dependent Surveillance-Broadcast (ADS-B) is a surveillance system used in Air Traffic Control. In this system aircraft transmit their own information (identity, position, velocity etc.) to any equipped listener for surveillance scope. ADS-B is based on a very simple protocol and doesn't provide any kind of authentication and encryption, making it vulnerable to many types of cyber-attacks. In the paper, it is proposed the use of airplane/transmitter RF level features to perform a test to distinguish legitimate messages from fake ones. The received signal features extraction process is described and an intrusion detection algorithm is proposed and evaluated by the use of real data. 
The results show that by a simple signal processing add-on on a classical (and low cost) ADS- B receivers, it is possible to detect if the ADS- B messages are sent using the expected hardware or not in the 85% of the case.},\n  keywords = {aircraft;aircraft communication;broadcast communication;feature extraction;radio transmitters;security of data;signal processing;surveillance;ADS-B signal signature extraction;authentication;encryption;cyber-attacks;intrusion detection algorithm;classical ADS- B receivers;air traffic control;protocol;signal processing;air traffic surveillance system;ADS-B surveillance system;automatic dependent surveillance-broadcast surveillance system;system aircraft transmission;airplane-transmitter RF level;received signal feature extraction proces;Aircraft;Feature extraction;Transmitters;Receivers;Time-frequency analysis;Signal processing;Surveillance;ADS-B;Security;Classification;Air Traffic Control;Fingerprinting},\n  doi = {10.23919/EUSIPCO.2018.8553570},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437099.pdf},\n}\n\n
\n
\n\n\n
\n Automatic Dependent Surveillance-Broadcast (ADS-B) is a surveillance system used in Air Traffic Control. In this system, aircraft transmit their own information (identity, position, velocity, etc.) to any equipped listener for surveillance purposes. ADS-B is based on a very simple protocol and does not provide any kind of authentication or encryption, making it vulnerable to many types of cyber-attacks. This paper proposes the use of airplane/transmitter RF-level features to perform a test that distinguishes legitimate messages from fake ones. The received-signal feature extraction process is described, and an intrusion detection algorithm is proposed and evaluated on real data. The results show that, with a simple signal processing add-on to a classical (and low-cost) ADS-B receiver, it is possible to detect whether ADS-B messages are sent by the expected hardware in 85% of cases.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Raw Multi-Channel Audio Source Separation using Multi- Resolution Convolutional Auto-Encoders.\n \n \n \n\n\n \n Grais, E. M.; Ward, D.; and Plumbley, M. D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1577-1581, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553571,\n  author = {E. M. Grais and D. Ward and M. D. Plumbley},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Raw Multi-Channel Audio Source Separation using Multi- Resolution Convolutional Auto-Encoders},\n  year = {2018},\n  pages = {1577-1581},\n  abstract = {Supervised multi-channel audio source separation requires extracting useful spectral, temporal, and spatial features from the mixed signals. the success of many existing systems is therefore largely dependent on the choice of features used for training. In this work, we introduce a novel multi-channel, multiresolution convolutional auto-encoder neural network that works on raw time-domain signals to determine appropriate multiresolution features for separating the singing-voice from stereo music. Our experimental results show that the proposed method can achieve multi-channel audio source separation without the need for hand-crafted features or any pre- or post-processing.},\n  keywords = {audio signal processing;convolution;feature extraction;feedforward neural nets;learning (artificial intelligence);neural nets;source separation;spatial features extraction;multiresolution features;raw multichannel audio source separation;raw time-domain signals;multiresolution convolutional auto-encoder neural network;novel multichannel;Feature extraction;Convolution;Signal resolution;Time-domain analysis;Source separation;Neural networks;Data mining},\n  doi = {10.23919/EUSIPCO.2018.8553571},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Supervised multi-channel audio source separation requires extracting useful spectral, temporal, and spatial features from the mixed signals. The success of many existing systems is therefore largely dependent on the choice of features used for training. In this work, we introduce a novel multi-channel, multi-resolution convolutional auto-encoder neural network that works on raw time-domain signals to determine appropriate multi-resolution features for separating the singing voice from stereo music. Our experimental results show that the proposed method can achieve multi-channel audio source separation without the need for hand-crafted features or any pre- or post-processing.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On Multi-View Face Recognition Using Lytro Images.\n \n \n \n \n\n\n \n Chiesa, V.; and Dugelay, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2250-2254, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553572,\n  author = {V. Chiesa and J. Dugelay},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {On Multi-View Face Recognition Using Lytro Images},\n  year = {2018},\n  pages = {2250-2254},\n  abstract = {In this work, a simple and efficient approach for recognizing faces from light field images, notably from Lytro IlLum camera, is proposed. The suggested method is based on light field images property of being rendered through a multiview representation. In the preliminary analysis, feature vectors extracted from different views of the same Lytro picture are proved different enough to provide complementary information beneficial for face recognition purpose. Starting from a set of multiple views for each data, face verification problem is tackled and results are compared with those achieved with classical 2D images simulated using a single view, i.e. the central one. Two experiments are described and, in both cases, the presented method shows superior performances than standard algorithms adopted by classical imaging sensors.},\n  keywords = {cameras;face recognition;feature extraction;image representation;image sensors;single view;classical imaging sensors;multiview face recognition;Lytro images;Lytro IlLum camera;light field images property;multiview representation;Lytro picture;complementary information;face verification problem;classical 2D images;feature vector extraction;Face;Cameras;Face recognition;Feature extraction;Databases;Signal processing algorithms;Standards;Multi-view;light field images;face recognition;Lytro camera},\n  doi = {10.23919/EUSIPCO.2018.8553572},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439738.pdf},\n}\n\n
\n
\n\n\n
\n In this work, a simple and efficient approach for recognizing faces from light field images, notably from the Lytro Illum camera, is proposed. The suggested method is based on the property of light field images of being rendered through a multi-view representation. In a preliminary analysis, feature vectors extracted from different views of the same Lytro picture are shown to be different enough to provide complementary information beneficial for face recognition. Starting from a set of multiple views for each sample, the face verification problem is tackled and results are compared with those achieved with classical 2D images simulated using a single view, i.e. the central one. Two experiments are described and, in both cases, the presented method shows superior performance to standard algorithms adopted by classical imaging sensors.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n GPU-Optimised Low-Latency Online Search for Gravitational Waves from Binary Coalescences.\n \n \n \n \n\n\n \n Guo, X.; Chu, Q.; Du, Z.; and Went, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2638-2642, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"GPU-OptimisedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553574,\n  author = {X. Guo and Q. Chu and Z. Du and L. Went},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {GPU-Optimised Low-Latency Online Search for Gravitational Waves from Binary Coalescences},\n  year = {2018},\n  pages = {2638-2642},\n  abstract = {Low-latency detection of gravitational waves (GWs) from compact stellar mergers is crucial to enable prompt followup electro-magnetic (EM) observations, as to probe different aspects of the merging process. The GW signal detection involves large computational efforts to search over the merger parameter space and Graphics Processing Unit (GPU) can play an important role to parallel the process. In this paper, Summed Parallel Infinite Impulse Response (SPIIR) GW detection pipeline is further optimized using recent GPU techniques to improve its throughput and reduce its latency. Two main computational bottlenecks have been studied: the SPIIR filtering and the coherent postprocessing which combines multiple GW detector outputs. In the filtering part, inefficient memory access is accelerated by exploiting temporal locality of input data, where the performance over previous implementation is improved by a factor of 2.5-3.5x on different GPUs. The post-processing part is improved by employing multiple strategies and a speedup of 12-25x is achieved. 
Once again, it is shown that GPUs can be very useful to tackle computational challenges in GW detection.},\n  keywords = {aerospace computing;binary stars;coprocessors;graphics processing units;gravitational wave detectors;signal detection;GPU-optimised low-latency online search;gravitational waves;binary coalescences;low-latency detection;compact stellar mergers;merging process;GW signal detection;merger parameter space;SPIIR filtering;multiple GW detector outputs;graphics processing unit;electro-magnetic observations;summed parallel infinite impulse response GW detection pipeline;Signal to noise ratio;Instruction sets;Detectors;Optimization;Pipelines;Graphics processing units;Registers},\n  doi = {10.23919/EUSIPCO.2018.8553574},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437104.pdf},\n}\n\n
\n
\n\n\n
\n Low-latency detection of gravitational waves (GWs) from compact stellar mergers is crucial to enable prompt follow-up electro-magnetic (EM) observations, so as to probe different aspects of the merging process. GW signal detection involves a large computational effort to search over the merger parameter space, and graphics processing units (GPUs) can play an important role in parallelizing the process. In this paper, the Summed Parallel Infinite Impulse Response (SPIIR) GW detection pipeline is further optimized using recent GPU techniques to improve its throughput and reduce its latency. Two main computational bottlenecks have been studied: the SPIIR filtering and the coherent post-processing which combines multiple GW detector outputs. In the filtering part, inefficient memory access is accelerated by exploiting the temporal locality of the input data, improving performance over the previous implementation by a factor of 2.5-3.5x on different GPUs. The post-processing part is improved by employing multiple strategies, achieving a speedup of 12-25x. Once again, it is shown that GPUs can be very useful for tackling computational challenges in GW detection.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Sensing Matrix Sensitivity to Random Gaussian Perturbations in Compressed Sensing.\n \n \n \n \n\n\n \n Lavrenko, A.; Römer, F.; Del Galdo, G.; and Thomä, R. S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 583-587, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SensingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553575,\n  author = {A. Lavrenko and F. Römer and G. {Del Galdo} and R. S. Thomä},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Sensing Matrix Sensitivity to Random Gaussian Perturbations in Compressed Sensing},\n  year = {2018},\n  pages = {583-587},\n  abstract = {In compressed sensing, the choice of the sensing matrix plays a crucial role: it defines the required hardware effort and determines the achievable recovery performance. Recent studies indicate that by optimizing a sensing matrix, one can potentially improve system performance compared to random ensembles. In this work, we analyze the sensitivity of a sensing matrix design to random perturbations, e.g., caused by hardware imperfections, with respect to the total (average) matrix coherence. We derive an exact expression for the average deterioration of the total coherence in the presence of Gaussian perturbations as a function of the perturbations' variance and the sensing matrix itself. We then numerically evaluate the impact it has on the recovery performance.},\n  keywords = {compressed sensing;Gaussian processes;matrix algebra;numerical analysis;random processes;stochastic processes;total matrix coherence;random Gaussian perturbations;compressed sensing;sensing matrix sensitivity design;optimisation;numerical analysis;Sensors;Perturbation methods;Coherence;Optimized production technology;Sparse matrices;Atomic measurements;Europe;compressed sensing;sensing matrix;random perturbations;average coherence},\n  doi = {10.23919/EUSIPCO.2018.8553575},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437742.pdf},\n}\n\n
\n
\n\n\n
\n In compressed sensing, the choice of the sensing matrix plays a crucial role: it defines the required hardware effort and determines the achievable recovery performance. Recent studies indicate that by optimizing a sensing matrix, one can potentially improve system performance compared to random ensembles. In this work, we analyze the sensitivity of a sensing matrix design to random perturbations, e.g., caused by hardware imperfections, with respect to the total (average) matrix coherence. We derive an exact expression for the average deterioration of the total coherence in the presence of Gaussian perturbations as a function of the perturbations' variance and the sensing matrix itself. We then numerically evaluate the impact it has on the recovery performance.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Comparative Study on Univariate Forecasting Methods for Meteorological Time Series.\n \n \n \n \n\n\n \n Phan, T.; Caillault, É. P.; and Bigand, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2380-2384, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ComparativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553576,\n  author = {T. Phan and É. P. Caillault and A. Bigand},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Comparative Study on Univariate Forecasting Methods for Meteorological Time Series},\n  year = {2018},\n  pages = {2380-2384},\n  abstract = {Time series forecasting has an important role in many real applications in meteorology and environment to understand phenomena as climate change and to adapt monitoring strategy. This paper aims first to build a framework for forecasting meteorological univariate time series and then to carry out a performance comparison of different univariate models for forecasting task. Six algorithms are discussed: Single exponential smoothing (SES), Seasonal-naive (Snaive), Seasonal-ARIMA (SARIMA), Feed-Forward Neural Network (FFNN), Dynamic Time Warping-based Imputation (DTWBI), Bayesian Structural Time Series (BSTS). Four performance measures and various meteorological time series are used to determine a more customized method for forecasting. 
Through experiments results, FFNN method is well adapted to forecast meteorological univariate time series with seasonality and no trend in consideration of accuracy indices and DTWBI is more suitable as considering the shape and dynamics of forecast values.},\n  keywords = {feedforward neural nets;forecasting theory;meteorology;time series;dynamic time warping-based imputation;meteorological univariate time series forecasting;Bayesian structural time series;single exponential smoothing;univariate forecasting methods;forecast values;FFNN method;meteorological time series;forecasting task;Forecasting;Time series analysis;Predictive models;Task analysis;Bayes methods;Weather forecasting;Univariate time series forecasting;similarity measure;SARIMA;FFNN;BSTS;DTW},\n  doi = {10.23919/EUSIPCO.2018.8553576},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436132.pdf},\n}\n\n
\n
\n\n\n
\n Time series forecasting plays an important role in many real applications in meteorology and environmental science, helping to understand phenomena such as climate change and to adapt monitoring strategies. This paper aims first to build a framework for forecasting meteorological univariate time series and then to carry out a performance comparison of different univariate models for the forecasting task. Six algorithms are discussed: Single Exponential Smoothing (SES), Seasonal-naive (Snaive), Seasonal ARIMA (SARIMA), Feed-Forward Neural Network (FFNN), Dynamic Time Warping-based Imputation (DTWBI), and Bayesian Structural Time Series (BSTS). Four performance measures and various meteorological time series are used to determine the most suitable method for forecasting. The experimental results show that the FFNN method is well adapted to forecasting meteorological univariate time series with seasonality and no trend when accuracy indices are considered, while DTWBI is more suitable when the shape and dynamics of the forecast values are considered.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Dedicated Beam-based Channel Training Technique for Millimeter Wave Communications with high Mobility.\n \n \n \n \n\n\n \n Bae, J.; Lim, S. H.; Yoo, J. H.; Choi, J. W.; and Shim, B.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1830-1834, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"DedicatedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553578,\n  author = {J. Bae and S. H. Lim and J. H. Yoo and J. W. Choi and B. Shim},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Dedicated Beam-based Channel Training Technique for Millimeter Wave Communications with high Mobility},\n  year = {2018},\n  pages = {1830-1834},\n  abstract = {In this paper, we propose a new beam training framework to cope with mobility scenarios in millimeter wave communications. When a position of the mobile changes, the base-station needs to perform beam training frequently to track the time-varying channel, which leads to significant training overhead in radio resources. In order to alleviate this problem, we propose a “dedicated beam training” which serves only users under high mobility. Combined with conventional common beam training, the proposed dedicated beam training can allow the high mobility users to acquire channels with a small number of training beams exploiting the location information of the target user. The optimal selection of the training beams is formulated such that the lower bound of the angle of departure (AoD) estimate is minimized over the beam codebook indices given the estimate of the previous AoD state. 
Our numerical evaluation demonstrates that the proposed beam training scheme can maintain good channel estimation performance with less training overhead than the conventional beam training protocol.},\n  keywords = {protocols;time-varying channels;wireless channels;dedicated beam-based channel training technique;millimeter wave communications;mobility scenarios;mobile changes;time-varying channel;significant training overhead;dedicated beam training;conventional common beam training;high mobility users;training beams;beam codebook indices;conventional beam training protocol;channel estimation performance;angle of departure;numerical evaluation;Training;Channel estimation;Array signal processing;Europe;Millimeter wave communication;Millimeter wave technology},\n  doi = {10.23919/EUSIPCO.2018.8553578},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437609.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we propose a new beam training framework to cope with mobility scenarios in millimeter wave communications. When the position of the mobile changes, the base station needs to perform beam training frequently to track the time-varying channel, which leads to significant training overhead in radio resources. To alleviate this problem, we propose a “dedicated beam training” scheme which serves only users under high mobility. Combined with conventional common beam training, the proposed dedicated beam training allows high-mobility users to acquire channels with a small number of training beams by exploiting the location information of the target user. The optimal selection of the training beams is formulated such that the lower bound on the angle of departure (AoD) estimate is minimized over the beam codebook indices, given the estimate of the previous AoD state. Our numerical evaluation demonstrates that the proposed beam training scheme can maintain good channel estimation performance with less training overhead than the conventional beam training protocol.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Shearlet-based Loop Filter.\n \n \n \n \n\n\n \n Erfurt, J.; Lim, W.; Schwarz, H.; Marpe, D.; and Wiegand, T.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 141-145, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Shearlet-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553579,\n  author = {J. Erfurt and W. Lim and H. Schwarz and D. Marpe and T. Wiegand},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Shearlet-based Loop Filter},\n  year = {2018},\n  pages = {141-145},\n  abstract = {In video coding, in-loop filtering has attracted attention due to its increasing coding performances. In this paper the shearlet-based loop filter is proposed using a sparsifying transform, the shearlet transform, which can identify the important structures of natural images such as edges in the sparse transform domain. This allows for separating efficiently the important information from noise components. Our novel approach for in-loop filtering is to apply a shearlet transform to the decoded image, separating important structures from noise and perform an inverse shearlet transform combined with Wiener filtering. This effectively removes compression artefacts due to quantization noise and keeps the important features of the original image. Simulation results show that our shearlet based loop filter can improve the state-of-the-art video coding standard HEVC through up to 10.5% bit rate reduction along with improved subjective visual quality.},\n  keywords = {transforms;video coding;Wiener filters;video coding;decoded image;inverse shearlet transform;shearlet based loop filter;Wiener filtering;inverse shearlet;in-loop filtering;Transforms;Quantization (signal);Image reconstruction;Video coding;Image coding;Encoding;Standards;shearlets;sparsity;in-loop filtering;Wiener filtering;classification},\n  doi = {10.23919/EUSIPCO.2018.8553579},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436261.pdf},\n}\n\n
\n
\n\n\n
\n In video coding, in-loop filtering has attracted attention due to the coding gains it provides. In this paper, a shearlet-based loop filter is proposed that uses a sparsifying transform, the shearlet transform, which can identify the important structures of natural images, such as edges, in the sparse transform domain. This allows the important information to be separated efficiently from noise components. Our novel approach to in-loop filtering is to apply a shearlet transform to the decoded image, separating important structures from noise, and to perform an inverse shearlet transform combined with Wiener filtering. This effectively removes compression artefacts due to quantization noise while keeping the important features of the original image. Simulation results show that our shearlet-based loop filter can improve on the state-of-the-art video coding standard HEVC with up to 10.5% bit-rate reduction along with improved subjective visual quality.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Quantitative and Binary Steganalysis in JPEG: A Comparative Study.\n \n \n \n \n\n\n \n Zakaria, A.; Chaumont, M.; and Subsol, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1422-1426, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"QuantitativePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553580,\n  author = {A. Zakaria and M. Chaumont and G. Subsol},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Quantitative and Binary Steganalysis in JPEG: A Comparative Study},\n  year = {2018},\n  pages = {1422-1426},\n  abstract = {We consider the problem of steganalysis, in which Eve (the steganalyst) aims to identify a steganogra-pher, Alice who sends images through a network. We can also hypothesise that Eve does not know how many bits Alice embed in an image. In this paper, we investigate two different steganalysis scenarios: Binary Steganalysis and Quantitative Steganalysis. We compare two classical steganalysis algorithms from the state-of-the-art: the QS algorithm and the GLRT-Ensemble Classifier, with features extracted from JPEG images obtained from BOSSbase 1.01. As their outputs are different, we propose a methodology to compare them. Numerical results with a state-of-the-art Content Adaptive Embedding Scheme and a Rich Model show that the approach of the GLRT-ensemble is better than the QS approach when doing Binary Steganalysis but worse when doing Quantitative Steganalysis.},\n  keywords = {adaptive signal processing;feature extraction;image classification;steganography;Binary Steganalysis;Quantitative Steganalysis;classical steganalysis algorithms;JPEG images;Content Adaptive Embedding Scheme;Rich Model;GLRT-ensemble classifier;steganographer;Payloads;Signal processing algorithms;Testing;Training;Transform coding;Prediction algorithms;Feature extraction;Steganography;Quantitative Steganalysis;Binary Steganalysis;Multi-class Steganalysis;JPEG},\n  doi = {10.23919/EUSIPCO.2018.8553580},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438285.pdf},\n}\n\n
\n
\n\n\n
\n We consider the problem of steganalysis, in which Eve (the steganalyst) aims to identify a steganographer, Alice, who sends images through a network. We also hypothesise that Eve does not know how many bits Alice embeds in an image. In this paper, we investigate two different steganalysis scenarios: binary steganalysis and quantitative steganalysis. We compare two classical steganalysis algorithms from the state of the art: the QS algorithm and the GLRT-Ensemble Classifier, with features extracted from JPEG images obtained from BOSSbase 1.01. As their outputs are different, we propose a methodology to compare them. Numerical results with a state-of-the-art content-adaptive embedding scheme and a Rich Model show that the GLRT-Ensemble approach is better than the QS approach for binary steganalysis but worse for quantitative steganalysis.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Camera-based Image Forgery Localization using Convolutional Neural Networks.\n \n \n \n \n\n\n \n Cozzolino, D.; and Verdoliva, L.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1372-1376, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Camera-basedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553581,\n  author = {D. Cozzolino and L. Verdoliva},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Camera-based Image Forgery Localization using Convolutional Neural Networks},\n  year = {2018},\n  pages = {1372-1376},\n  abstract = {Camera fingerprints are precious tools for a number of image forensics tasks. A well-known example is the photo response non-uniformity (PRNU) noise pattern, a powerful device fingerprint. Here, to address the image forgery localization problem, we rely on noiseprint, a recently proposed CNN-based camera model fingerprint. The CNN is trained to minimize the distance between same-model patches, and maximize the distance otherwise. As a result, the noiseprint accounts for model-related artifacts just like the PRNU accounts for device-related nonuniformities. However, unlike the PRNU, it is only mildly affected by residuals of high-level scene content. The experiments show that the proposed noiseprint-based forgery localization method improves over the PRNU-based reference.},\n  keywords = {cameras;feature extraction;feedforward neural nets;fingerprint identification;image forensics;image sensors;convolutional neural networks;camera fingerprints;image forensics tasks;photo response nonuniformity noise pattern;localization problem;CNN;noiseprint accounts;model-related artifacts;device-related nonuniformities;localization method;PRNU-based reference;camera-based image forgery localization;device fingerprint;PRNU;noiseprint-based forgery localization;high-level scene content;Cameras;Forgery;Training;Task analysis;Computational modeling;Forensics;Noise reduction;Image forensics;PRNU;convolutional neural networks},\n  doi = {10.23919/EUSIPCO.2018.8553581},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439441.pdf},\n}\n\n
\n
\n\n\n
\n Camera fingerprints are precious tools for a number of image forensics tasks. A well-known example is the photo response non-uniformity (PRNU) noise pattern, a powerful device fingerprint. Here, to address the image forgery localization problem, we rely on noiseprint, a recently proposed CNN-based camera model fingerprint. The CNN is trained to minimize the distance between same-model patches, and maximize the distance otherwise. As a result, the noiseprint accounts for model-related artifacts just like the PRNU accounts for device-related nonuniformities. However, unlike the PRNU, it is only mildly affected by residuals of high-level scene content. The experiments show that the proposed noiseprint-based forgery localization method improves over the PRNU-based reference.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Asymmetric Supercardioid Beamforming Using Circular Microphone Arrays.\n \n \n \n \n\n\n \n Buchris, Y.; Cohen, I.; and Benesty, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 627-631, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AsymmetricPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553582,\n  author = {Y. Buchris and I. Cohen and J. Benesty},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Asymmetric Supercardioid Beamforming Using Circular Microphone Arrays},\n  year = {2018},\n  pages = {627-631},\n  abstract = {We present a joint-diagonalization based approach for a closed-form solution of the asymmetric supercardioid, implemented with circular differential microphone arrays. These arrays are characterized as compact frequency-invariant superdirective beamformers, allowing perfect steering for all azimuthal directions. Experimental results show that the asymmetric supercardioid yields superior performance in terms of white noise gain, directivity factor, and front-to-back ratio, when additional directional attenuation constraints are imposed in order to suppress interfering signals.},\n  keywords = {array signal processing;matrix algebra;microphone arrays;white noise;asymmetric supercardioid;directional attenuation constraints;compact frequency-invariant superdirective beamformers;compact frequency-invariant superdirective beamformers;closed-form solution;joint-diagonalization based approach;circular microphone arrays;directivity factor;azimuthal directions;perfect steering;circular differential microphone arrays;Microphone arrays;Multiaccess communication;Geometry;Array signal processing;Europe;Circular differential microphone arrays (CDMAs);asymmetric beampatterns;supercardioid},\n  doi = {10.23919/EUSIPCO.2018.8553582},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436042.pdf},\n}\n\n
\n
\n\n\n
\n We present a joint-diagonalization based approach for a closed-form solution of the asymmetric supercardioid, implemented with circular differential microphone arrays. These arrays are characterized as compact frequency-invariant superdirective beamformers, allowing perfect steering for all azimuthal directions. Experimental results show that the asymmetric supercardioid yields superior performance in terms of white noise gain, directivity factor, and front-to-back ratio, when additional directional attenuation constraints are imposed in order to suppress interfering signals.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Robust Stochastic Maximum Likelihood Algorithm for DOA Estimation of Acoustic Sources in the Spherical Harmonic Domain.\n \n \n \n \n\n\n \n Lolaee, H.; and Akhaee, M. A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 351-355, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"RobustPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553583,\n  author = {H. Lolaee and M. A. Akhaee},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Robust Stochastic Maximum Likelihood Algorithm for DOA Estimation of Acoustic Sources in the Spherical Harmonic Domain},\n  year = {2018},\n  pages = {351-355},\n  abstract = {The direction of arrival (DOA) estimation of sound sources has been a popular signal processing research topic due to its widespread applications. Using a spherical microphone array, DOA estimation can be applied in the spherical harmonic (SH) domain without any spatial ambiguity. However, environment reverberation and noise can degrade the estimation performance. In this paper, we propose a novel iterative stochastic maximum likelihood (ML) algorithm for DOA estimation of multiple sound sources in the presence of spatially nonuniform noise in the SH domain. The main idea of the proposed algorithm is to consider the general model of the received signal in the SH domain. We reduce the complexity of the ML estimation by breaking it down into two separate problems: noise parameter and DOA estimation problems. Simulation results indicate that the proposed algorithm improves the robustness of estimation, i.e., the root mean square error, by at least 7 dB compared to recent methods in reverberant and noisy environments.},\n  keywords = {acoustic signal processing;array signal processing;direction-of-arrival estimation;iterative methods;maximum likelihood estimation;microphone arrays;reverberation;stochastic processes;ML estimation;estimation problems;robust stochastic maximum likelihood algorithm;DOA estimation;acoustic sources;signal processing;SH domain;spatially nonuniform noise;multiple sound sources;estimation performance;environment reverberation;spherical microphone array;arrival estimation;spherical harmonic domain;Direction-of-arrival estimation;Maximum likelihood estimation;Microphones;Harmonic analysis;Signal processing algorithms;Array signal processing;Direction of Arrival Estimation;Spherical Microphone Array;Spherical Harmonics},\n  doi = {10.23919/EUSIPCO.2018.8553583},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438936.pdf},\n}\n\n
\n
\n\n\n
\n The direction of arrival (DOA) estimation of sound sources has been a popular signal processing research topic due to its widespread applications. Using a spherical microphone array, DOA estimation can be applied in the spherical harmonic (SH) domain without any spatial ambiguity. However, environment reverberation and noise can degrade the estimation performance. In this paper, we propose a novel iterative stochastic maximum likelihood (ML) algorithm for DOA estimation of multiple sound sources in the presence of spatially nonuniform noise in the SH domain. The main idea of the proposed algorithm is to consider the general model of the received signal in the SH domain. We reduce the complexity of the ML estimation by breaking it down into two separate problems: noise parameter and DOA estimation problems. Simulation results indicate that the proposed algorithm improves the robustness of estimation, i.e., the root mean square error, by at least 7 dB compared to recent methods in reverberant and noisy environments.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Unsupervised Singing Voice Separation Based on Robust Principal Component Analysis Exploiting Rank-1 Constraint.\n \n \n \n \n\n\n \n Li, F.; and Akagi, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1920-1924, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"UnsupervisedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553584,\n  author = {F. Li and M. Akagi},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Unsupervised Singing Voice Separation Based on Robust Principal Component Analysis Exploiting Rank-1 Constraint},\n  year = {2018},\n  pages = {1920-1924},\n  abstract = {In this paper, we address the singing voice separation problem and propose a novel unsupervised approach based on robust principal component analysis (RPCA) exploiting a rank-1 constraint (CRPCA). RPCA is a recently proposed singing voice separation algorithm that can separate singing voice from monaural recordings. Although RPCA has been successfully applied to the singing voice separation task, it ignores the different characteristic values of the singular value decomposition and the computational complexity of minimizing the nuclear norm for separating singing voice. The rank-1 constraint is imposed on the background music, as the background music has a larger variation in richness than the singing voice among different songs. Furthermore, the rank-1 constraint can utilize a prior target rank to separate singing voice and background music from the mixture music signal. Accordingly, the proposed CRPCA method utilizes rank-1 constrained minimization of singular values in RPCA instead of minimizing the whole nuclear norm, which not only accounts for the different values of the singular value decomposition but also reduces the computational complexity. The experimental evaluation results reveal that CRPCA achieves better separation performance than previous methods, especially when using time-frequency masking, on the ccMixter and DSD100 datasets. In addition, the running time of CRPCA is shorter than that of the other methods under the same conditions.},\n  keywords = {audio signal processing;blind source separation;computational complexity;matrix decomposition;music;principal component analysis;singular value decomposition;source separation;speech processing;unsupervised singing voice separation;singing voice separation problem;unsupervised approach;voice separation task;singular value decomposition;computational complexity;nuclear norm;separating singing voice;rank-1 constraint;background music;prior target rank;mixture music signal;CRPCA method utilizes rank-1 constraint minimization;singular values;RPCA instead;computation complexity;separation performance;robust principal component analysis;characteristic values;singing voice separation algorithm;Sparse matrices;Multiple signal classification;Matrix decomposition;Spectrogram;Computational complexity;Time-frequency analysis;Minimization},\n  doi = {10.23919/EUSIPCO.2018.8553584},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439440.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we address the singing voice separation problem and propose a novel unsupervised approach based on robust principal component analysis (RPCA) exploiting a rank-1 constraint (CRPCA). RPCA is a recently proposed singing voice separation algorithm that can separate singing voice from monaural recordings. Although RPCA has been successfully applied to the singing voice separation task, it ignores the different characteristic values of the singular value decomposition and the computational complexity of minimizing the nuclear norm for separating singing voice. The rank-1 constraint is imposed on the background music, as the background music has a larger variation in richness than the singing voice among different songs. Furthermore, the rank-1 constraint can utilize a prior target rank to separate singing voice and background music from the mixture music signal. Accordingly, the proposed CRPCA method utilizes rank-1 constrained minimization of singular values in RPCA instead of minimizing the whole nuclear norm, which not only accounts for the different values of the singular value decomposition but also reduces the computational complexity. The experimental evaluation results reveal that CRPCA achieves better separation performance than previous methods, especially when using time-frequency masking, on the ccMixter and DSD100 datasets. In addition, the running time of CRPCA is shorter than that of the other methods under the same conditions.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n End-to-End Photoplethysmography (PPG) Based Biometric Authentication by Using Convolutional Neural Networks.\n \n \n\n\n \n Luque, J.; Cortès, G.; Segura, C.; Maravilla, A.; Esteban, J.; and Fabregat, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 538-542, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553585,\n  author = {J. Luque and G. Cortès and C. Segura and A. Maravilla and J. Esteban and J. Fabregat},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {End-to-End Photoplethysmography (PPG) Based Biometric Authentication by Using Convolutional Neural Networks},\n  year = {2018},\n  pages = {538-542},\n  abstract = {Whilst research efforts have traditionally focused on electrocardiographic (ECG) signals and handcrafted features as potential biometric traits, few works have explored systems based on the raw photoplethysmogram (PPG) signal. This work proposes an end-to-end architecture to offer biometric authentication using PPG biosensors through convolutional networks. We provide an evaluation of the performance of our approach on two different databases: Troika and PulseID, the latter a publicly available database specifically collected by the authors for this purpose. Our verification approach, based on convolutional network models and raw PPG signals, appears to be viable in current monitoring procedures within e-health and fitness environments, showing remarkable potential as a biometric. The approach, tested in a verification fashion on trials lasting one second, achieved an AUC of 78.2% and 83.2%, averaged among target subjects, on the PulseID and Troika datasets respectively. Our experimental results on previous small datasets support the usefulness of PPG-extracted biomarkers as viable traits for multi-biometric or standalone biometrics. Furthermore, the approach results in a low input throughput and complexity that allows for continuous authentication in real-world scenarios. Nevertheless, the reported experiments also suggest that further research is necessary to account for and understand the sources of variability found in some subjects.},\n  keywords = {authorisation;biometrics (access control);biosensors;convolution;electrocardiography;feedforward neural nets;medical signal processing;end-to-end architecture;biometric authentication;PPG biosensors;publicly available database;convolutional network based models;multibiometric;standalone biometrics;continuous authentication;electrocardiographic signals;monitoring procedures;convolutional neural networks;End-to-End Photoplethysmography;raw photoplethysmogram signal;biometric traits;e-health;fitness environments;PulseID;Troika datasets;Convolution;Electrocardiography;Databases;Sensors;Biomarkers;Feature extraction;Computer architecture;photoplethysmogram signal;ppg;biometric authentication;biometric verification;convolutional neural networks},\n  doi = {10.23919/EUSIPCO.2018.8553585},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Whilst research efforts have traditionally focused on electrocardiographic (ECG) signals and handcrafted features as potential biometric traits, few works have explored systems based on the raw photoplethysmogram (PPG) signal. This work proposes an end-to-end architecture to offer biometric authentication using PPG biosensors through convolutional networks. We provide an evaluation of the performance of our approach on two different databases: Troika and PulseID, the latter a publicly available database specifically collected by the authors for this purpose. Our verification approach, based on convolutional network models and raw PPG signals, appears to be viable in current monitoring procedures within e-health and fitness environments, showing remarkable potential as a biometric. The approach, tested in a verification fashion on trials lasting one second, achieved an AUC of 78.2% and 83.2%, averaged among target subjects, on the PulseID and Troika datasets respectively. Our experimental results on previous small datasets support the usefulness of PPG-extracted biomarkers as viable traits for multi-biometric or standalone biometrics. Furthermore, the approach results in a low input throughput and complexity that allows for continuous authentication in real-world scenarios. Nevertheless, the reported experiments also suggest that further research is necessary to account for and understand the sources of variability found in some subjects.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Performance of a Third-Order Volterra MVDR Beamformer in the Presence of Non-Gaussian and/or Non-Circular Interference.\n \n \n \n \n\n\n \n Chevalier, P.; Delmas, J. P.; and Sadok, M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 807-811, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PerformancePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553586,\n  author = {P. Chevalier and J. P. Delmas and M. Sadok},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Performance of a Third-Order Volterra MVDR Beamformer in the Presence of Non-Gaussian and/or Non-Circular Interference},\n  year = {2018},\n  pages = {807-811},\n  abstract = {Linear beamformers are optimal, in a mean square (MS) sense, when the signal of interest (SOI) and observations are jointly Gaussian and circular. When the SOI and observations are zero-mean, jointly Gaussian and non-circular, optimal beamformers become widely linear (WL). They become non-linear, with a structure depending on the unknown joint probability distribution of the SOI and observations, when the latter are jointly non-Gaussian, an assumption which is very common in radiocommunications. In this context, a third-order Volterra minimum variance distortionless response (MVDR) beamformer has been introduced recently for the reception of a SOI, whose waveform is unknown but whose steering vector is known, corrupted by non-Gaussian and potentially non-circular interference, omnipresent in practical situations. However, its statistical performance has not yet been analyzed. The aim of this paper is twofold. We first introduce an equivalent generalized sidelobe canceller (GSC) structure of this beamformer, and then we present an analytical performance analysis of the latter in the presence of one interference. This allows us to quantify the improvement in performance with respect to the linear and WL MVDR beamformers.},\n  keywords = {array signal processing;interference (signal);interference suppression;probability;radiocommunication;nonGaussian interference;third-order Volterra minimum variance distortionless response beamformer;unknown joint probability distribution;observations;SOI;mean square;linear beamformers;third-order Volterra MVDR beamformer;WL MVDR beamformers;analytical performance analysis;equivalent generalized sidelobe canceller structure;statistical performance;potentially noncircular interference;Interference;Signal to noise ratio;Europe;Performance analysis;Electric potential;Sensor arrays;Beamformer;non-circular;non-Gaussian;higher order;interferences;MVDR;Volterra;widely non linear;widely linear;third order;fourth order;sixth order},\n  doi = {10.23919/EUSIPCO.2018.8553586},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570430863.pdf},\n}\n\n
\n
\n\n\n
\n Linear beamformers are optimal, in a mean square (MS) sense, when the signal of interest (SOI) and observations are jointly Gaussian and circular. When the SOI and observations are zero-mean, jointly Gaussian and non-circular, optimal beamformers become widely linear (WL). They become non-linear, with a structure depending on the unknown joint probability distribution of the SOI and observations, when the latter are jointly non-Gaussian, an assumption which is very common in radiocommunications. In this context, a third-order Volterra minimum variance distortionless response (MVDR) beamformer has been introduced recently for the reception of a SOI, whose waveform is unknown but whose steering vector is known, corrupted by non-Gaussian and potentially non-circular interference, omnipresent in practical situations. However, its statistical performance has not yet been analyzed. The aim of this paper is twofold. We first introduce an equivalent generalized sidelobe canceller (GSC) structure of this beamformer, and then we present an analytical performance analysis of the latter in the presence of one interference. This allows us to quantify the improvement in performance with respect to the linear and WL MVDR beamformers.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Empirical Evaluation of Short-Term Memory Retention Using Different High-density EEG Based Brain Connectivity Measures.\n \n \n \n \n\n\n \n Daniel, R.; Pandey, V.; Bhat, K. R.; Rao, A. K.; Singh, R.; and Chandra, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1387-1391, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553587,\n  author = {R. Daniel and V. Pandey and K. R. Bhat and A. K. Rao and R. Singh and S. Chandra},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An Empirical Evaluation of Short-Term Memory Retention Using Different High-density EEG Based Brain Connectivity Measures},\n  year = {2018},\n  pages = {1387-1391},\n  abstract = {It is very vital to identify the variations in the brain activations and visualize the extent of interaction between brain areas to come up with logical interpretations regarding neuronal dynamics during higher-order cognitive functioning. Most cognitive functions are based on interactions between neuronal assemblies distributed across different cerebral regions. In this paper, we evaluate two traditional methods (Squared Coherence Spectrum (SCS) and Directed Transfer Function (DTF)) and one novel approach based on information theory (Phase Transfer Entropy (PTE)) based on the extent to which they can depict the information flow between distant brain regions during a standard visual short-term memory task. Results revealed that PTE was able to depict the performance and visualize the information flow better compared to the traditional techniques. These results demonstrate the applicability of functional brain connectivity measures in determining and visualizing higher-order cognitive functions. We plan to extend the use of these measures in assessing the neural underpinnings of executive functions as well.},\n  keywords = {cognition;electroencephalography;entropy;medical signal processing;neurophysiology;empirical evaluation;short-term memory retention;brain activations;brain areas;logical interpretations;neuronal dynamics;neuronal assemblies;traditional methods;Directed Transfer Function;information theory;information flow;distant brain regions;short-term memory task;functional brain connectivity measures;executive functions;cerebral regions;squared coherence spectrum;high-density EEG based brain connectivity measures;phase transfer entropy;higher-order cognitive functions;standard visual short-term memory task;Electroencephalography;Task analysis;Brain modeling;Mathematical model;Visualization;Coherence;Frequency-domain analysis;Functional Connectivity;Squared Coherence Spectrum;Directed Transfer Function;Phase Transfer Entropy},\n  doi = {10.23919/EUSIPCO.2018.8553587},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437955.pdf},\n}\n\n
\n
\n\n\n
\n It is very vital to identify the variations in the brain activations and visualize the extent of interaction between brain areas to come up with logical interpretations regarding neuronal dynamics during higher-order cognitive functioning. Most cognitive functions are based on interactions between neuronal assemblies distributed across different cerebral regions. In this paper, we evaluate two traditional methods (Squared Coherence Spectrum (SCS) and Directed Transfer Function (DTF)) and one novel approach based on information theory (Phase Transfer Entropy (PTE)) based on the extent to which they can depict the information flow between distant brain regions during a standard visual short-term memory task. Results revealed that PTE was able to depict the performance and visualize the information flow better compared to the traditional techniques. These results demonstrate the applicability of functional brain connectivity measures in determining and visualizing higher-order cognitive functions. We plan to extend the use of these measures in assessing the neural underpinnings of executive functions as well.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A Resolution Enhancement Technique for Ultrafast Coded Medical Ultrasound.\n \n \n \n \n\n\n \n Bujoreanu, D.; Benane, Y. M.; Liebzott, H.; Nicolas, B.; Basset, O.; and Friboulet, D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 76-80, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553588,\n  author = {D. Bujoreanu and Y. M. Benane and H. Liebzott and B. Nicolas and O. Basset and D. Friboulet},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A Resolution Enhancement Technique for Ultrafast Coded Medical Ultrasound},\n  year = {2018},\n  pages = {76-80},\n  abstract = {In the quest for faster ultrasound image acquisition rate, low echo signal to noise ratio is often an issue. Binary Phase Shift Keyed (BPSK) Golay codes have been implemented in a large number of imaging methods, and their ability to increase the image quality is already proven. In this paper we propose an improvement of the BPSK modulation, where the effect of the narrow-band ultrasound probe, used for acquisition, is compensated. The optimized excitation signals are implemented in a Plane Wave Compounding (PWC) imaging approach. Simulation and experimental results are presented. Numerical studies show 41% improvement of axial resolution and bandwidth, over the classical BPSK modulated Golay codes. Experimental acquisitions on cyst phantom show an improvement of image resolution of 32%. The method is also compared to classical pulse (small wave packets) emission and 25% boost of resolution is achieved for a 6dB higher echo signal to noise ratio. The experimental results obtained using UlaOp 256 prove the feasibility of the method on a research scanner while the theoretical formulation shows that the optimization of the excitation signals can be applied to any binary sequence and does not depend on the emission/reception beamforming.},\n  keywords = {biomedical ultrasonics;Golay codes;image coding;image enhancement;image resolution;medical image processing;numerical analysis;phantoms;phase shift keying;low echo signal-noise ratio;binary phase shift keyed Golay codes;cyst phantom;faster ultrasound image acquisition rate;ultrafast coded medical ultrasound;resolution enhancement technique;small wave packets;classical pulse emission;image resolution;experimental acquisitions;classical BPSK modulated Golay codes;axial resolution;numerical studies;Plane Wave Compounding imaging approach;optimized excitation signals;narrow-band ultrasound probe;image quality;imaging methods;Ultrasonic imaging;Probes;Image resolution;Signal resolution;Binary phase shift keying;Imaging;Image coding;Golay sequences;BPSK;Resolution enhancement compression;plane wave imaging},\n  doi = {10.23919/EUSIPCO.2018.8553588},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438059.pdf},\n}\n\n
\n
\n\n\n
\n In the quest for faster ultrasound image acquisition rate, low echo signal to noise ratio is often an issue. Binary Phase Shift Keyed (BPSK) Golay codes have been implemented in a large number of imaging methods, and their ability to increase the image quality is already proven. In this paper we propose an improvement of the BPSK modulation, where the effect of the narrow-band ultrasound probe, used for acquisition, is compensated. The optimized excitation signals are implemented in a Plane Wave Compounding (PWC) imaging approach. Simulation and experimental results are presented. Numerical studies show 41% improvement of axial resolution and bandwidth, over the classical BPSK modulated Golay codes. Experimental acquisitions on cyst phantom show an improvement of image resolution of 32%. The method is also compared to classical pulse (small wave packets) emission and 25% boost of resolution is achieved for a 6dB higher echo signal to noise ratio. The experimental results obtained using UlaOp 256 prove the feasibility of the method on a research scanner while the theoretical formulation shows that the optimization of the excitation signals can be applied to any binary sequence and does not depend on the emission/reception beamforming.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Lenslet Light Field Imaging Scalable Coding.\n \n \n \n \n\n\n \n Garrote, J.; Brites, C.; Ascenso, J.; and Pereira, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2150-2154, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"LensletPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553589,\n  author = {J. Garrote and C. Brites and J. Ascenso and F. Pereira},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Lenslet Light Field Imaging Scalable Coding},\n  year = {2018},\n  pages = {2150-2154},\n  abstract = {Light fields have emerged as one of the most promising 3D representation formats, enabling a richer and more immersive representation of a visual scene. The lenslet light field acquisition approach consists in placing an array of micro-lenses between the camera main lens and the photosensor to allow capturing both the intensity and the direction of the light rays. This type of representation format offers new interaction possibilities with the visual content, notably a posteriori refocusing and visualization of different perspectives of the visual scene. However, this representation model is associated with very large amounts of data, thus requiring efficient coding solutions so that applications involving storage and transmission may be deployed. This paper proposes a novel lenslet light field imaging scalable coding solution adopting a wavelet-based approach, able to offer view, quality and spatial scalabilities, to meet the characteristics of multiple types of displays, transmission channels and user needs. The performance results show that the proposed coding solution performs better than alternative scalable coding solutions, notably JPEG 2000.},\n  keywords = {cameras;data compression;image coding;image representation;microlenses;stereo image processing;wavelet transforms;light rays detection;JPEG 2000;transmission channels;photosensor;lenslet light field acquisition;scalable coding solutions;lenslet light field imaging;immersive representation;3D representation formats;spatial scalabilities;wavelet-based approach;posteriori refocusing;visual content;representation format;camera main lens;microlenses;Encoding;Transform coding;Standards;Scalability;Discrete wavelet transforms;lenslet light field;sub-aperture image;disparity estimation and compensation;scalability;JPEG 2000},\n  doi = {10.23919/EUSIPCO.2018.8553589},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570433530.pdf},\n}\n\n
\n
\n\n\n
\n Light fields have emerged as one of the most promising 3D representation formats, enabling a richer and more immersive representation of a visual scene. The lenslet light field acquisition approach consists of placing an array of micro-lenses between the camera main lens and the photosensor, allowing both the intensity and the direction of the light rays to be captured. This type of representation format offers new interaction possibilities with the visual content, notably a posteriori refocusing and visualization of different perspectives of the visual scene. However, this representation model is associated with very large amounts of data, thus requiring efficient coding solutions so that applications involving storage and transmission can be deployed. This paper proposes a novel lenslet light field imaging scalable coding solution adopting a wavelet-based approach, able to offer view, quality and spatial scalability, to meet the characteristics of multiple types of displays, transmission channels and user needs. The performance results show that the proposed coding solution performs better than alternative scalable coding solutions, notably JPEG 2000.\n
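As background for the wavelet-based approach, the following minimal numpy sketch shows how a single-level 2D Haar transform yields the scalability the abstract describes: the LL sub-band alone gives a half-resolution preview (spatial scalability), and adding the detail sub-bands restores the image exactly. This is an illustrative Haar example, not the authors' codec.

```python
import numpy as np

def haar2d(img):
    """One level of a 2D Haar wavelet transform.

    Returns the four sub-bands (LL, LH, HL, HH); LL is a half-resolution
    approximation, which is what a spatially scalable decoder shows first.
    """
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar2d_inverse(ll, lh, hl, hh):
    """Perfectly reconstructs the image from all four sub-bands."""
    a = np.zeros((ll.shape[0], 2 * ll.shape[1]))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.zeros_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    img = np.zeros((2 * a.shape[0], a.shape[1]))
    img[0::2, :] = a + d
    img[1::2, :] = a - d
    return img

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 "image"
ll, lh, hl, hh = haar2d(img)
recon = haar2d_inverse(ll, lh, hl, hh)
```

Repeating the transform on LL gives further dyadic resolution levels; quality scalability comes from progressively refining the coefficients.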
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Perturbation Analysis of Root-MUSIC-Type Methods for Blind Network-Assisted Diversity Multiple Access.\n \n \n \n \n\n\n \n Akl, N.; and Tewfik, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1820-1824, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"PerturbationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553590,\n  author = {N. Akl and A. Tewfik},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Perturbation Analysis of Root-MUSIC-Type Methods for Blind Network-Assisted Diversity Multiple Access},\n  year = {2018},\n  pages = {1820-1824},\n  abstract = {We perform a first-order perturbation analysis of a root-MUSIC-type method for resolving collisions in the context of blind network-assisted diversity multiple access (BNDMA). Polynomial roots are computed as an intermediate step of the root-MUSIC algorithm for the purpose of blindly identifying the set of transmitters involved in a collision. We derive expressions for the individual and joint distributions of the noise-induced angular shifts of the computed roots. The expressions are analyzed in relation to the signal-to-noise ratio and the number of packet retransmissions made to resolve a collision. Results are verified numerically.},\n  keywords = {polynomials;signal classification;signal resolution;first-order perturbation analysis;root-MUSIC-type method;blind network-assisted diversity multiple access;polynomial roots;root-MUSIC algorithm;computed roots;BNDMA;noise-induced angular shifts;signal-to-noise ratio;packet retransmissions;Perturbation methods;Direction-of-arrival estimation;Covariance matrices;Transmitters;Estimation;Antenna arrays;Index - perturbation analysis;root-MUSIC;collision resolution;network-assisted diversity},\n  doi = {10.23919/EUSIPCO.2018.8553590},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438437.pdf},\n}\n\n
\n
\n\n\n
\n We perform a first-order perturbation analysis of a root-MUSIC-type method for resolving collisions in the context of blind network-assisted diversity multiple access (BNDMA). Polynomial roots are computed as an intermediate step of the root-MUSIC algorithm for the purpose of blindly identifying the set of transmitters involved in a collision. We derive expressions for the individual and joint distributions of the noise-induced angular shifts of the computed roots. The expressions are analyzed in relation to the signal-to-noise ratio and the number of packet retransmissions made to resolve a collision. Results are verified numerically.\n
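The root-MUSIC machinery that the perturbation analysis targets can be sketched in a few lines of numpy: a polynomial is built from the noise subspace, and the angular positions of its roots closest to the unit circle encode the directions. This is the standard root-MUSIC DOA estimator, not the paper's BNDMA variant; the array geometry and noise level below are illustrative.

```python
import numpy as np

def root_music(X, n_sources, spacing=0.5):
    """Minimal root-MUSIC for a uniform linear array (spacing in wavelengths).

    The roots of the polynomial built from the noise subspace carry the DOAs
    in their angular positions; these are the quantities whose noise-induced
    angular shifts a first-order perturbation analysis characterises.
    """
    m = X.shape[0]
    R = X @ X.conj().T / X.shape[1]                  # sample covariance
    _, vecs = np.linalg.eigh(R)                      # eigenvalues ascending
    En = vecs[:, :m - n_sources]                     # noise subspace
    C = En @ En.conj().T
    # Coefficients of z^{m-1} * a(1/z)^T C a(z): sums of the diagonals of C.
    coeffs = np.array([np.trace(C, offset=k) for k in range(m - 1, -m, -1)])
    roots = np.roots(coeffs)
    roots = roots[np.abs(roots) < 1.0]               # one of each reciprocal pair
    roots = roots[np.argsort(1.0 - np.abs(roots))][:n_sources]
    return np.arcsin(np.angle(roots) / (2 * np.pi * spacing))

# Illustrative scenario: 8-element half-wavelength ULA, two sources, light noise.
rng = np.random.default_rng(1)
m, n_snap = 8, 200
true_doa = np.array([-0.3, 0.4])                     # radians
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(true_doa)))
S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
N = rng.standard_normal((m, n_snap)) + 1j * rng.standard_normal((m, n_snap))
X = A @ S + 0.05 * N
doa_est = np.sort(root_music(X, n_sources=2))
```

At finite SNR the roots drift off the unit circle; the paper's analysis derives the distribution of exactly these angular drifts.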
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Space Alternating Variational Bayesian Learning for LMMSE Filtering.\n \n \n \n \n\n\n \n Thomas, C. K.; and Slock, D.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1327-1331, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SpacePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553591,\n  author = {C. K. Thomas and D. Slock},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Space Alternating Variational Bayesian Learning for LMMSE Filtering},\n  year = {2018},\n  pages = {1327-1331},\n  abstract = {In this paper, we address the fundamental problem of sparse signal recovery for temporally correlated multiple measurement vectors (MMV) in a Bayesian framework. The temporal correlation of the sparse vector is modeled using a first order autoregressive process. In the case of time varying sparse signals, conventional tracking methods like Kalman filtering fail to exploit the sparsity of the underlying signal. Moreover, the computational complexity associated with sparse Bayesian learning (SBL) renders it infeasible even for moderately large datasets. To address this issue, we utilize variational approximation technique (which allows to obtain analytical approximations to the posterior distributions of interest even when exact inference of these distributions is intractable) to propose a novel fast algorithm called space alternating variational estimation with Kalman filtering (SAVE-KF). Similarly as for SAGE (space-alternating generalized expectation maximization) compared to EM, the component-wise approach of VB appears to allow to avoid a lot of bad local optima, explaining the better performance, apart from lower complexity. 
Simulation results also show that the proposed algorithm has a faster convergence rate and achieves lower mean square error (MSE) than other state of the art fast SBL methods for temporally correlated measurement vetors.},\n  keywords = {approximation theory;autoregressive processes;Bayes methods;computational complexity;expectation-maximisation algorithm;iterative methods;Kalman filters;mean square error methods;variational techniques;vectors;space alternating variational Bayesian learning;LMMSE;sparse signal recovery;temporally correlated multiple measurement vectors;Bayesian framework;temporal correlation;sparse vector;sparse signals;conventional tracking methods;Kalman filtering;computational complexity;sparse Bayesian learning;variational approximation technique;analytical approximations;posterior distributions;space alternating variational estimation;space-alternating generalized expectation maximization;lower complexity;first order autoregressive process;large datasets;fast SBL methods;Bayes methods;Kalman filters;Estimation;Matching pursuit algorithms;Correlation;Signal processing algorithms;Covariance matrices;Sparse Bayesian Learning;Variational Bayes;Kalman Filtering},\n  doi = {10.23919/EUSIPCO.2018.8553591},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437932.pdf},\n}\n\n
\n
\n\n\n
\n In this paper, we address the fundamental problem of sparse signal recovery for temporally correlated multiple measurement vectors (MMV) in a Bayesian framework. The temporal correlation of the sparse vector is modeled using a first-order autoregressive process. In the case of time-varying sparse signals, conventional tracking methods like Kalman filtering fail to exploit the sparsity of the underlying signal. Moreover, the computational complexity associated with sparse Bayesian learning (SBL) renders it infeasible even for moderately large datasets. To address this issue, we utilize a variational approximation technique (which yields analytical approximations to the posterior distributions of interest even when exact inference of these distributions is intractable) to propose a novel fast algorithm called space alternating variational estimation with Kalman filtering (SAVE-KF). As with SAGE (space-alternating generalized expectation maximization) compared to EM, the component-wise approach of variational Bayes helps avoid many bad local optima, which explains the better performance in addition to the lower complexity. Simulation results also show that the proposed algorithm has a faster convergence rate and achieves a lower mean square error (MSE) than other state-of-the-art fast SBL methods for temporally correlated measurement vectors.\n
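The temporal model underlying the paper, a first-order autoregressive state tracked by a Kalman filter, can be illustrated with a minimal scalar example; the full SAVE-KF algorithm adds variational sparsity estimation, which is not reproduced here, and the noise variances below are illustrative.

```python
import numpy as np

# First-order AR state x_t = rho * x_{t-1} + w_t observed in Gaussian noise.
rng = np.random.default_rng(2)
rho = 0.95
q = 1 - rho ** 2            # process-noise variance (unit-variance state)
r = 0.1                     # observation-noise variance
T = 500
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = rho * x[t - 1] + np.sqrt(q) * rng.standard_normal()
    y[t] = x[t] + np.sqrt(r) * rng.standard_normal()

# Standard scalar Kalman filter for this AR(1) state.
x_hat, p = 0.0, 1.0
est = np.zeros(T)
for t in range(T):
    x_pred, p_pred = rho * x_hat, rho ** 2 * p + q   # predict
    k = p_pred / (p_pred + r)                        # Kalman gain
    x_hat = x_pred + k * (y[t] - x_pred)             # correct
    p = (1 - k) * p_pred
    est[t] = x_hat

mse_kf = np.mean((est - x) ** 2)                     # filtered error
mse_raw = np.mean((y - x) ** 2)                      # raw observation error
```

The filter exploits the temporal correlation but, as the abstract notes, not the sparsity; combining the two is the point of SAVE-KF.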
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Subspace Classification of Human Gait Using Radar Micro-Doppler Signatures.\n \n \n \n \n\n\n \n Seifert, A. K.; Schäfer, L.; Amin, M. G.; and Zoubir, A. M.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 311-315, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"SubspacePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553592,\n  author = {A. K. Seifert and L. Schäfer and M. G. Amin and A. M. Zoubir},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Subspace Classification of Human Gait Using Radar Micro-Doppler Signatures},\n  year = {2018},\n  pages = {311-315},\n  abstract = {Radar-based monitoring of human gait has become of increased interest with applications to security, sports biomechanics, and assisted living. Radar sensing offers contactless monitoring of human gait. It protects privacy and preserves a person's right to anonymity. Considering normal, pathological and assisted gait, we demonstrate the effectiveness of radar in discriminating different walking styles. By use of unsupervised feature extraction methods utilizing principal component analysis, we examine five gait classes using two different joint-variable signal representations, i.e., the spectrogram and the cadence-velocity diagram. Results obtained with experimental K-band radar data show that the choice of signal domain and adequate pre-processing are crucial for achieving high classification rates for all gait classes.},\n  keywords = {biomechanics;Doppler radar;gait analysis;principal component analysis;radar signal processing;signal classification;signal representation;sport;time-frequency analysis;subspace classification;human gait;radar microdoppler signatures;radar-based monitoring;assisted living;radar sensing;contactless monitoring;normal gait;pathological gait;assisted gait;gait classes;experimental K-band radar data;unsupervised feature extraction methods;principal component analysis;Radar;Spectrogram;Legged locomotion;Principal component analysis;Feature extraction;Covariance matrices;Training},\n  doi = {10.23919/EUSIPCO.2018.8553592},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438609.pdf},\n}\n\n
\n
\n\n\n
\n Radar-based monitoring of human gait has attracted increasing interest, with applications in security, sports biomechanics, and assisted living. Radar sensing offers contactless monitoring of human gait: it protects privacy and preserves a person's right to anonymity. Considering normal, pathological and assisted gait, we demonstrate the effectiveness of radar in discriminating different walking styles. Using unsupervised feature extraction methods based on principal component analysis, we examine five gait classes using two different joint-variable signal representations, i.e., the spectrogram and the cadence-velocity diagram. Results obtained with experimental K-band radar data show that the choice of signal domain and adequate pre-processing are crucial for achieving high classification rates for all gait classes.\n
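The subspace classification step can be sketched generically: fit a principal subspace per class and assign a test sample to the class whose subspace reconstructs it with the smallest residual. A minimal numpy sketch on synthetic data follows; the micro-Doppler features themselves are not reproduced.

```python
import numpy as np

def fit_subspaces(classes, n_comp):
    """Fit a mean + principal subspace (top n_comp PCs) for each class."""
    models = []
    for X in classes:                          # X: (n_samples, n_features)
        mu = X.mean(axis=0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        models.append((mu, Vt[:n_comp]))       # rows of Vt span the subspace
    return models

def classify(x, models):
    """Assign x to the class whose subspace reconstructs it best."""
    residuals = [np.linalg.norm((x - mu) - V.T @ (V @ (x - mu)))
                 for mu, V in models]
    return int(np.argmin(residuals))

# Synthetic two-class data: each class varies along its own direction.
rng = np.random.default_rng(3)
c0 = np.outer(rng.standard_normal(100), [1.0, 0, 0, 0]) + 0.05 * rng.standard_normal((100, 4))
c1 = np.outer(rng.standard_normal(100), [0, 0, 1.0, 0]) + 0.05 * rng.standard_normal((100, 4))
models = fit_subspaces([c0, c1], n_comp=1)
```

In the paper the rows of each class matrix would be vectorised spectrograms or cadence-velocity diagrams rather than 4-dimensional toy vectors.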
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n A New Proportionate Adaptive Filtering Algorithm with Coefficient Reuse and Robustness Against Impulsive Noise.\n \n \n \n \n\n\n \n Pimenta, R. M. S.; Resende, L. C.; Siqueira, N. N.; Haddad, I. B.; and Petraglia, M. R.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 465-469, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"APaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553593,\n  author = {R. M. S. Pimenta and L. C. Resende and N. N. Siqueira and I. B. Haddad and M. R. Petraglia},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {A New Proportionate Adaptive Filtering Algorithm with Coefficient Reuse and Robustness Against Impulsive Noise},\n  year = {2018},\n  pages = {465-469},\n  abstract = {An adaptive algorithm should ideally present high convergence rate, good steady-state performance, and robustness against impulsive noise. Few algorithms can simultaneously meet these requirements. This paper proposes a local and deterministic optimization problem whose solution gives rise to an adaptive algorithm that presents a higher convergence rate in the identification of sparse systems due to the use of the proportionate adaptation technique. In addition, a correntropy-based cost function is employed in order to enhance its robustness against non-Gaussian noise. Finally, the adoption of coefficient reuse approach results in a good system identification performance in steady-state conditions, especially in low SNR scenarios.},\n  keywords = {adaptive filters;convergence of numerical methods;Gaussian noise;impulse noise;least mean squares methods;optimisation;robust control;impulsive noise;proportionate adaptation technique;correntropy-based cost function;nonGaussian noise;steady-state conditions;local optimization problem;deterministic optimization problem;adaptive filtering algorithm;convergence rate;coefficient reuse approach;sparse systems identification;good system identification;Signal processing algorithms;Convergence;Steady-state;Cost function;Signal to noise ratio;Robustness;Adaptive Filtering;Sparse systems;Proportionate Adaptation;Coefficients Reuse;Maximum Correntropy Criterion},\n  doi = {10.23919/EUSIPCO.2018.8553593},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437412.pdf},\n}\n\n
\n
\n\n\n
\n An adaptive algorithm should ideally present a high convergence rate, good steady-state performance, and robustness against impulsive noise. Few algorithms can simultaneously meet these requirements. This paper proposes a local and deterministic optimization problem whose solution gives rise to an adaptive algorithm that presents a higher convergence rate in the identification of sparse systems due to the use of the proportionate adaptation technique. In addition, a correntropy-based cost function is employed in order to enhance its robustness against non-Gaussian noise. Finally, the adoption of a coefficient reuse approach results in good system identification performance in steady-state conditions, especially in low-SNR scenarios.\n
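A generic combination of proportionate gains with a correntropy (Gaussian-kernel) error weight, in the spirit of the abstract, can be sketched as follows. This is an illustrative algorithm, not the authors' exact update; the step size, kernel width and impulse model are assumptions.

```python
import numpy as np

def pnlms_mcc(x, d, n_taps, mu=0.5, sigma=1.0, eps=1e-3):
    """Proportionate NLMS update weighted by a Gaussian (correntropy) kernel.

    The per-tap gains favour large coefficients (fast convergence on sparse
    systems); exp(-e^2 / 2 sigma^2) shrinks the step when the error is
    impulsive (robustness).  All constants here are illustrative choices.
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]            # regressor [x[n],...,x[n-L+1]]
        e = d[n] - w @ u
        g = np.abs(w) + eps                          # proportionate gains
        g = g / g.sum() * n_taps
        kernel = np.exp(-e ** 2 / (2 * sigma ** 2))  # correntropy weight
        w = w + mu * kernel * e * (g * u) / (u @ (g * u) + eps)
    return w

# Sparse unknown system observed in mild Gaussian noise plus rare impulses.
rng = np.random.default_rng(4)
w_true = np.zeros(16)
w_true[[2, 9]] = [1.0, -0.5]
x = rng.standard_normal(4000)
d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
hits = rng.random(len(x)) < 0.01
d[hits] += 10.0 * rng.standard_normal(hits.sum())   # impulsive outliers
w_hat = pnlms_mcc(x, d, n_taps=16)
```

When an impulse hits, the kernel is essentially zero and the update is skipped, which is the robustness mechanism the maximum correntropy criterion provides.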
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Binarized Convolutional Neural Networks for Efficient Inference on GPUs.\n \n \n \n \n\n\n \n Khan, M.; Huttunen, H.; and Boutellier, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 682-686, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BinarizedPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553594,\n  author = {M. Khan and H. Huttunen and J. Boutellier},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Binarized Convolutional Neural Networks for Efficient Inference on GPUs},\n  year = {2018},\n  pages = {682-686},\n  abstract = {Convolutional neural networks have recently achieved significant breakthroughs in various image classification tasks. However, they are computationally expensive, which can make their feasible implementation on embedded and low-power devices difficult. In this paper convolutional neural network binarization is implemented on GPU-based platforms for real-time inference on resource constrained devices. In binarized networks, all weights and intermediate computations between layers are quantized to +1 and -1, allowing multiplications and additions to be replaced with bit-wise operations between 32-bit words. This representation completely eliminates the need for floating point multiplications and additions and decreases both the computational load and the memory footprint compared to a full-precision network implemented in floating point, making it well-suited for resource-constrained environments. We compare the performance of our implementation with an equivalent floating point implementation on one desktop and two embedded GPU platforms. 
Our implementation achieves a maximum speed up of 7.4× with only 4.4 % loss in accuracy compared to a reference implementation.},\n  keywords = {computer vision;coprocessors;floating point arithmetic;graphics processing units;image classification;neural nets;significant breakthroughs;image classification tasks;feasible implementation;low-power devices;paper convolutional neural network binarization;GPU-based platforms;real-time inference;resource constrained devices;binarized networks;intermediate computations;bit-wise operations;computational load;full-precision network;resource-constrained environments;equivalent floating point implementation;embedded GPU platforms;reference implementation;convolutional neural networks;Graphics processing units;Instruction sets;Kernel;Training;Convolutional neural networks;Europe;model compression;binarized convolutional neural networks;optimization;image classification},\n  doi = {10.23919/EUSIPCO.2018.8553594},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437112.pdf},\n}\n\n
\n
\n\n\n
\n Convolutional neural networks have recently achieved significant breakthroughs in various image classification tasks. However, they are computationally expensive, which can make their feasible implementation on embedded and low-power devices difficult. In this paper, convolutional neural network binarization is implemented on GPU-based platforms for real-time inference on resource-constrained devices. In binarized networks, all weights and intermediate computations between layers are quantized to +1 and -1, allowing multiplications and additions to be replaced with bit-wise operations between 32-bit words. This representation completely eliminates the need for floating-point multiplications and additions and decreases both the computational load and the memory footprint compared to a full-precision network implemented in floating point, making it well suited for resource-constrained environments. We compare the performance of our implementation with an equivalent floating-point implementation on one desktop and two embedded GPU platforms. Our implementation achieves a maximum speed-up of 7.4× with only a 4.4% loss in accuracy compared to a reference implementation.\n
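The bit-wise trick the abstract refers to can be shown directly: for ±1 values packed into a machine word, a dot product reduces to one XOR plus a popcount.

```python
def pack(signs):
    """Pack a ±1 vector into an integer: bit i is set iff signs[i] == +1."""
    return sum(1 << i for i, s in enumerate(signs) if s == 1)

def binary_dot(a_bits, b_bits, n):
    """Dot product of two packed ±1 vectors via XOR + popcount.

    a_i * b_i = +1 exactly when the bits agree, so the dot product equals
    n - 2 * popcount(a XOR b); on a GPU this replaces up to 32 multiply-adds
    per word with two bit-wise instructions.
    """
    return n - 2 * bin((a_bits ^ b_bits) & ((1 << n) - 1)).count("1")

a = [1, -1, -1, 1, 1, -1, 1, 1]
b = [1, 1, -1, -1, 1, -1, -1, 1]
ref = sum(p * q for p, q in zip(a, b))            # explicit ±1 arithmetic
fast = binary_dot(pack(a), pack(b), len(a))       # same result, bit-wise
```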
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n Optimal SWIPT Beamforming for MISO Interfering Broadcast Channels with Multi-Type Receivers.\n \n \n \n\n\n \n Li, Q.; and Lin, J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1277-1281, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553595,\n  author = {Q. Li and J. Lin},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Optimal SWIPT Beamforming for MISO Interfering Broadcast Channels with Multi - Type Receivers},\n  year = {2018},\n  pages = {1277-1281},\n  abstract = {Recently, transmit beamforming for simultaneous wireless information and power transfer (SWIPT) has received considerable attention. Extensive studies have been done on MISO/MIMO SWIPT beamforming for broadcast channels (BCs) and interfering broadcast channels (IBCs). However, for IBCs the optimal SWIPT beamforming solution is in general not available. In this work, we consider SWIPT beamforming for multiuser MISO IBCs with multi-type receives, including pure information receivers (IRs), pure energy receivers (ERs) and simultaneous information and energy receivers. A power minimization problem with SINR and power transfer constraints on the receivers is considered. This problem is shown to be NP-hard in general. In order to get an efficient SWIPT beamforming solution, the energy-signal-aided SWIPT beamforming scheme is employed at the transmission. We show that with the help of the energy signals, the resultant beamforming problem is no longer NP-hard, and can be optimally solved by semidefinite relaxation (SDR). The key to this is to apply a recently developed low-rank solution result on a class of semidefinite programs (SDPs) to pin down the SDR tightness. 
Simulation results also demonstrate the efficacy of the energy signals in reducing the transmit power.},\n  keywords = {antenna arrays;array signal processing;broadcast channels;computational complexity;concave programming;convex programming;energy harvesting;iterative methods;MIMO communication;multiuser channels;optimisation;radio receivers;radiofrequency interference;MISO/MIMO SWIPT;optimal SWIPT beamforming solution;multiuser MISO IBCs;multitype;pure information receivers;pure energy receivers;simultaneous information;power minimization problem;power transfer constraints;NP-hard;efficient SWIPT beamforming solution;energy-signal-aided SWIPT beamforming scheme;energy signals;resultant beamforming problem;low-rank solution result;transmit power;MISO interfering broadcast channels;multi- type receivers;simultaneous wireless information;Receivers;Array signal processing;Signal to noise ratio;Interference;Erbium;MISO communication;Europe},\n  doi = {10.23919/EUSIPCO.2018.8553595},\n  issn = {2076-1465},\n  month = {Sep.},\n}\n\n
\n
\n\n\n
\n Recently, transmit beamforming for simultaneous wireless information and power transfer (SWIPT) has received considerable attention. Extensive studies have been done on MISO/MIMO SWIPT beamforming for broadcast channels (BCs) and interfering broadcast channels (IBCs). However, for IBCs the optimal SWIPT beamforming solution is in general not available. In this work, we consider SWIPT beamforming for multiuser MISO IBCs with multi-type receivers, including pure information receivers (IRs), pure energy receivers (ERs) and simultaneous information and energy receivers. A power minimization problem with SINR and power transfer constraints on the receivers is considered. This problem is shown to be NP-hard in general. In order to obtain an efficient SWIPT beamforming solution, the energy-signal-aided SWIPT beamforming scheme is employed at the transmitter. We show that with the help of the energy signals, the resultant beamforming problem is no longer NP-hard, and can be optimally solved by semidefinite relaxation (SDR). The key is to apply a recently developed low-rank solution result for a class of semidefinite programs (SDPs) to establish the SDR tightness. Simulation results also demonstrate the efficacy of the energy signals in reducing the transmit power.\n
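While the IBC problem in the paper requires semidefinite relaxation, the structure of a power minimization with an SINR constraint met with equality can be seen in the simplest single-receiver MISO instance, which has a closed form. The channel, SINR target and noise power below are illustrative values, not the paper's multi-receiver formulation.

```python
import numpy as np

rng = np.random.default_rng(5)
h = rng.standard_normal(4) + 1j * rng.standard_normal(4)  # 4-antenna MISO channel
gamma, sigma2 = 2.0, 1.0                                  # SINR target, noise power

# With a single information receiver and no interference, minimising transmit
# power subject to |h^H w|^2 / sigma2 >= gamma is solved by maximum-ratio
# transmission scaled so the constraint holds with equality.
w = np.sqrt(gamma * sigma2) * h / np.linalg.norm(h) ** 2
snr = np.abs(h.conj() @ w) ** 2 / sigma2                  # achieved SINR
power = np.linalg.norm(w) ** 2                            # minimum transmit power
```

With multiple receivers and cross-interference this closed form disappears, which is where the SDR machinery of the paper comes in.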
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Fooling PRNU-Based Detectors Through Convolutional Neural Networks.\n \n \n \n \n\n\n \n Bonettini, N.; Bondi, L.; Güera, D.; Mandelli, S.; Bestagini, P.; Tubaro, S.; and Delp, E. J.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 957-961, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FoolingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553596,\n  author = {N. Bonettini and L. Bondi and D. Güera and S. Mandelli and P. Bestagini and S. Tubaro and E. J. Delp},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Fooling PRNU-Based Detectors Through Convolutional Neural Networks},\n  year = {2018},\n  pages = {957-961},\n  abstract = {In the last few years, forensic researchers have developed a wide set of techniques to blindly attribute an image to the device used to shoot it. Among these techniques, those based on photo response non uniformity (PRNU) have shown incredibly accurate results, thus they are often considered as a reference baseline solution. The rationale behind these techniques is that each camera sensor leaves on acquired images a characteristic noise pattern. This pattern can be estimated and uniquely mapped to a specific acquisition device through a cross-correlation test. In this paper, we study the possibility of leveraging recent findings in the deep learning field to attack PRNU-based detectors. Specifically, we focus on the possibility of editing an image through convolutional neural networks in a visually imperceptible way, still hindering PRNU noise estimation. 
Results show that performing such an attack is possible, even though an informed forensic analyst can reduce its impact through a smart test.},\n  keywords = {cameras;convolution;correlation methods;estimation theory;feedforward neural nets;image coding;image sensors;learning (artificial intelligence);security of data;characteristic noise pattern;cross-correlation test;deep learning field;convolutional neural networks;PRNU noise estimation;forensic researchers;photo response nonuniformity;reference baseline solution;forensic analyst;camera sensor;PRNU-based detectors;Cameras;Cost function;Convolution;Forensics;Correlation;Noise reduction;Signal processing algorithms},\n  doi = {10.23919/EUSIPCO.2018.8553596},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437387.pdf},\n}\n\n
\n
\n\n\n
\n In the last few years, forensic researchers have developed a wide set of techniques to blindly attribute an image to the device used to shoot it. Among these techniques, those based on photo-response non-uniformity (PRNU) have shown highly accurate results, and are therefore often considered a reference baseline solution. The rationale behind these techniques is that each camera sensor leaves a characteristic noise pattern on acquired images. This pattern can be estimated and uniquely mapped to a specific acquisition device through a cross-correlation test. In this paper, we study the possibility of leveraging recent findings in the deep learning field to attack PRNU-based detectors. Specifically, we focus on the possibility of editing an image through convolutional neural networks in a visually imperceptible way, while still hindering PRNU noise estimation. Results show that performing such an attack is possible, even though an informed forensic analyst can reduce its impact through a smart test.\n
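The PRNU pipeline the attack targets can be sketched with synthetic data: a camera fingerprint is estimated by averaging noise residuals, and attribution uses a normalized cross-correlation test. The fingerprint strength, image count and residual model below are illustrative assumptions.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two arrays (zero-meaned)."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(6)
shape = (64, 64)
prnu_a = 0.1 * rng.standard_normal(shape)   # fingerprint of camera A (synthetic)
prnu_b = 0.1 * rng.standard_normal(shape)   # fingerprint of camera B (synthetic)

def residual(prnu):
    """Noise residual of one simulated image: weak PRNU buried in other noise."""
    return prnu + rng.standard_normal(shape)

# Estimate camera A's fingerprint by averaging residuals over many images.
K_hat = np.mean([residual(prnu_a) for _ in range(200)], axis=0)

rho_match = ncc(K_hat, residual(prnu_a))    # image really from camera A
rho_other = ncc(K_hat, residual(prnu_b))    # image from a different camera
```

The attack in the paper aims to push the matching-camera correlation down toward the non-matching level without visibly altering the image.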
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Efficient Lossless Compression Algorithm for Electrocardiogram Signals.\n \n \n \n \n\n\n \n Campobello, G.; Segreto, A.; Zanafi, S.; and Serrano, S.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 777-781, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553597,\n  author = {G. Campobello and A. Segreto and S. Zanafi and S. Serrano},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An Efficient Lossless Compression Algorithm for Electrocardiogram Signals},\n  year = {2018},\n  pages = {777-781},\n  abstract = {This paper focuses on a novel lossless compression algorithm which can be efficiently used for compression of electrocardiogram (ECG) signals. The proposed algorithm has low memory requirements and relies on a simple and efficient encoding scheme which can be implemented with elementary counting operations. Thus it can be easily implemented even in resource constrained microcontrollers as those commonly used in several low-cost ECG monitoring systems. Despite its simplicity, simulation results carried out on real-world ECG signals show that the proposed algorithm achieves higher compression ratios as even compared to other more complex state-of-the-art solutions.},\n  keywords = {data compression;electrocardiography;encoding;medical signal processing;microcontrollers;patient monitoring;compression ratios;efficient lossless compression algorithm;real-world ECG signals;low-cost ECG monitoring systems;resource constrained microcontrollers;elementary counting operations;efficient encoding scheme;simple encoding scheme;low memory requirements;electrocardiogram signals;Compression algorithms;Signal processing algorithms;Encoding;Electrocardiography;Microcontrollers;Prediction algorithms;Matrix decomposition},\n  doi = {10.23919/EUSIPCO.2018.8553597},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570438085.pdf},\n}\n\n
\n
\n\n\n
\n This paper focuses on a novel lossless compression algorithm which can be efficiently used for compression of electrocardiogram (ECG) signals. The proposed algorithm has low memory requirements and relies on a simple and efficient encoding scheme which can be implemented with elementary counting operations. Thus it can be easily implemented even in resource-constrained microcontrollers such as those commonly used in several low-cost ECG monitoring systems. Despite its simplicity, simulation results carried out on real-world ECG signals show that the proposed algorithm achieves higher compression ratios even when compared to other, more complex state-of-the-art solutions.\n
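The abstract does not describe the authors' counting-based encoder, but the general shape of a lossless ECG codec, predict each sample and then entropy-code the residuals, can be sketched generically. This is plain delta coding with a zigzag map, not the paper's scheme.

```python
def delta_encode(samples):
    """First-order prediction: each sample is predicted by the previous one."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def delta_decode(residuals):
    """Undo delta_encode by cumulative summation (exact, hence lossless)."""
    out = [residuals[0]]
    for r in residuals[1:]:
        out.append(out[-1] + r)
    return out

def zigzag(r):
    """Map signed residuals to non-negative integers for variable-length codes."""
    return 2 * r if r >= 0 else -2 * r - 1

ecg = [512, 514, 515, 515, 520, 560, 610, 580, 530, 515]  # toy 10-bit samples
res = delta_encode(ecg)          # small residuals -> short codewords
decoded = delta_decode(res)      # exact reconstruction
```

Because adjacent ECG samples are strongly correlated, the residuals are much smaller than the raw samples, which is what any subsequent variable-length coder exploits.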
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Explaining Black-box Android Malware Detection.\n \n \n \n \n\n\n \n Melis, M.; Maiorca, D.; Biggio, B.; Giacinto, G.; and Roli, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 524-528, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ExplainingPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553598,\n  author = {M. Melis and D. Maiorca and B. Biggio and G. Giacinto and F. Roli},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Explaining Black-box Android Malware Detection},\n  year = {2018},\n  pages = {524-528},\n  abstract = {Machine-learning models have been recently used for detecting malicious Android applications, reporting impressive performances on benchmark datasets, even when trained only on features statically extracted from the application, such as system calls and permissions. However, recent findings have highlighted the fragility of such in-vitro evaluations with benchmark datasets, showing that very few changes to the content of Android malware may suffice to evade detection. How can we thus trust that a malware detector performing well on benchmark data will continue to do so when deployed in an operating environment? To mitigate this issue, the most popular Android malware detectors use linear, explainable machine-learning models to easily identify the most influential features contributing to each decision. In this work, we generalize this approach to any black-box machine-learning model, by leveraging a gradient-based approach to identify the most influential local features. This enables using nonlinear models to potentially increase accuracy without sacrificing interpretability of decisions. Our approach also highlights the global characteristics learned by the model to discriminate between benign and malware applications. 
Finally, as shown by our empirical analysis on a popular Android malware detection task, it also helps identifying potential vulnerabilities of linear and nonlinear models against adversarial manipulations.},\n  keywords = {Android (operating system);invasive software;learning (artificial intelligence);black-box Android malware detection;malicious Android applications;benchmark datasets;system calls;in-vitro evaluations;malware detector;benchmark data;popular Android malware detectors;linear machine-learning models;explainable machine-learning models;influential features;black-box machine-learning model;gradient-based approach;influential local features;nonlinear models;benign applications;malware applications;Android malware detection task;Malware;Feature extraction;Detectors;Machine learning;Support vector machines;Signal processing algorithms;Approximation algorithms},\n  doi = {10.23919/EUSIPCO.2018.8553598},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439445.pdf},\n}\n\n
\n
\n\n\n
\n Machine-learning models have been recently used for detecting malicious Android applications, reporting impressive performances on benchmark datasets, even when trained only on features statically extracted from the application, such as system calls and permissions. However, recent findings have highlighted the fragility of such in-vitro evaluations with benchmark datasets, showing that very few changes to the content of Android malware may suffice to evade detection. How can we thus trust that a malware detector performing well on benchmark data will continue to do so when deployed in an operating environment? To mitigate this issue, the most popular Android malware detectors use linear, explainable machine-learning models to easily identify the most influential features contributing to each decision. In this work, we generalize this approach to any black-box machine-learning model, by leveraging a gradient-based approach to identify the most influential local features. This enables using nonlinear models to potentially increase accuracy without sacrificing interpretability of decisions. Our approach also highlights the global characteristics learned by the model to discriminate between benign and malware applications. Finally, as shown by our empirical analysis on a popular Android malware detection task, it also helps identify potential vulnerabilities of linear and nonlinear models against adversarial manipulations.\n
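The gradient-based attribution idea summarized in this abstract can be sketched numerically: treat the detector as a black box, estimate the gradient of its decision function by finite differences, and rank features by a gradient-times-input relevance score. The scorer below is a hypothetical stand-in, not the paper's trained model.

```python
import numpy as np

def blackbox_score(x):
    # Hypothetical stand-in for a trained detector's decision function:
    # a fixed nonlinear scorer over binary (permission / API-call) features.
    w = np.array([0.9, -0.2, 0.05, 0.7, -0.5])
    return np.tanh(w @ x)

def local_attributions(f, x, eps=1e-5):
    """Central-difference gradient of f at x, times the feature value
    (gradient*input), as a local relevance score per feature."""
    g = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        e = np.zeros_like(x, dtype=float)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g * x

x = np.array([1.0, 1.0, 0.0, 1.0, 0.0])   # present/absent features
rel = local_attributions(blackbox_score, x)
top = np.argsort(-np.abs(rel))
print(top[:2])   # the two most influential features for this sample
```

Absent features (value 0) get zero relevance by construction, which matches the intuition that only the content actually present in the sample can explain the decision.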
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n An Overview of Recent Advances in Assessing and Mitigating the Face Morphing Attack.\n \n \n \n \n\n\n \n Makrushin, A.; and Wolf, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1017-1021, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553599,\n  author = {A. Makrushin and A. Wolf},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {An Overview of Recent Advances in Assessing and Mitigating the Face Morphing Attack},\n  year = {2018},\n  pages = {1017-1021},\n  abstract = {The face morphing attack enables the illegitimate sharing of photo-ID documents intended for identity verification. Multiple users may use the same passport, driver license or health insurance card without being condemned. This paper summarizes recent advances in protecting the photo-ID-based verification from the morphing attack. We explain the attack along with the standard approach of creating morphed face images. We identify research gaps and open challenges by summarizing studies assessing the potential of the morphing attack as well as studies concerned with generating databases of morphed face images and examining the performance of morphing detectors. We discuss new performance metrics looking for conformity with the standard on presentation attack detection. Based on the current advances, we recommend technical and organizational security mechanisms to mitigate or even prevent the morphing attack.},\n  keywords = {computer crime;face recognition;health care;image morphing;security of data;smart cards;morphed face images;morphing detectors;presentation attack detection;face morphing attack;photo-ID documents;identity verification;driver license;photo-ID-based verification;Face;Splicing;Databases;Security;Visualization;Europe;Standards;face morphing attack;morphing detection},\n  doi = {10.23919/EUSIPCO.2018.8553599},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437411.pdf},\n}\n\n
\n
\n\n\n
\n The face morphing attack enables the illegitimate sharing of photo-ID documents intended for identity verification. Multiple users may use the same passport, driver license or health insurance card without being condemned. This paper summarizes recent advances in protecting the photo-ID-based verification from the morphing attack. We explain the attack along with the standard approach of creating morphed face images. We identify research gaps and open challenges by summarizing studies assessing the potential of the morphing attack as well as studies concerned with generating databases of morphed face images and examining the performance of morphing detectors. We discuss new performance metrics looking for conformity with the standard on presentation attack detection. Based on the current advances, we recommend technical and organizational security mechanisms to mitigate or even prevent the morphing attack.\n
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Automatic Chord Recognition with Higher-Order Harmonic Language Modelling.\n \n \n \n \n\n\n \n Korzeniowski, F.; and Widmer, G.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1900-1904, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"AutomaticPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553600,\n  author = {F. Korzeniowski and G. Widmer},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Automatic Chord Recognition with Higher-Order Harmonic Language Modelling},\n  year = {2018},\n  pages = {1900-1904},\n  abstract = {Common temporal models for automatic chord recognition model chord changes on a frame-wise basis. Due to this fact, they are unable to capture musical knowledge about chord progressions. In this paper, we propose a temporal model that enables explicit modelling of chord changes and durations. We then apply N -gram models and a neural-network-based acoustic model within this framework, and evaluate the effect of model overconfidence. Our results show that model overconfidence plays only a minor role (but target smoothing still improves the acoustic model), and that stronger chord language models do improve recognition results, however their effects are small compared to other domains.},\n  keywords = {acoustic signal processing;music;neural nets;neural-network-based acoustic model;higher-order harmonic language modelling;automatic chord recognition model;frame-wise basis;musical knowledge;chord progressions;temporal model;explicit modelling;N-gram models;chord language models;Hidden Markov models;Computational modeling;Acoustics;Smoothing methods;Training;Europe;Predictive models;Chord Recognition;Language Modelling;N-Grams;Neural Networks},\n  doi = {10.23919/EUSIPCO.2018.8553600},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570432212.pdf},\n}\n\n
\n
\n\n\n
\n Common temporal models for automatic chord recognition model chord changes on a frame-wise basis. Due to this fact, they are unable to capture musical knowledge about chord progressions. In this paper, we propose a temporal model that enables explicit modelling of chord changes and durations. We then apply N -gram models and a neural-network-based acoustic model within this framework, and evaluate the effect of model overconfidence. Our results show that model overconfidence plays only a minor role (but target smoothing still improves the acoustic model), and that stronger chord language models do improve recognition results, however their effects are small compared to other domains.\n
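The chord language models the abstract refers to can be illustrated with a toy bigram (N = 2) model over chord symbols: count transitions and smooth them over the vocabulary. This is a minimal sketch with a made-up progression, not the paper's higher-order models or its neural acoustic model.

```python
from collections import Counter

def train_bigram(seq):
    """Bigram chord language model with add-one (Laplace) smoothing."""
    unigrams = Counter(seq[:-1])
    bigrams = Counter(zip(seq[:-1], seq[1:]))
    vocab = sorted(set(seq))

    def prob(prev, nxt):
        # Smoothed transition probability P(nxt | prev).
        return (bigrams[(prev, nxt)] + 1) / (unigrams[prev] + len(vocab))

    return prob

# Toy progression dominated by the C-F-G-C cadence.
chords = ["C", "F", "G", "C", "F", "G", "C", "Am", "F", "G", "C"]
p = train_bigram(chords)
print(p("G", "C"), p("G", "Am"))  # G→C is far more likely than G→Am
```

In a full recognizer such transition probabilities would be combined with frame-level acoustic scores; here the point is only the shape of the N-gram component.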
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Localization of Near-Field Signals Based on Linear Prediction and Oblique Projection Operator.\n \n \n \n \n\n\n \n Liu, W.; Zuo, W.; Xin, J.; Zheng, N.; and Sano, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 341-345, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"LocalizationPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553601,\n  author = {W. Liu and W. Zuo and J. Xin and N. Zheng and A. Sano},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Localization of Near-Field Signals Based on Linear Prediction and Oblique Projection Operator},\n  year = {2018},\n  pages = {341-345},\n  abstract = {Recently many subspace-based localization methods were developed for estimating the directions of arrivals (DOAs) and ranges of multiple narrowband signals in near-field. However, most of them usually encounter “saturation behavior” in estimation performance regardless of the signal-to-noise ratio (SNR) when the number of array snapshots is not sufficiently large enough. In this paper, we investigate the problem of localizing multiple narrowband near-field signals impinging on a symmetrical uniform linear array (ULA). Firstly, by exploiting the anti-diagonal elements of the array covariance matrix, a new linear prediction approach with truncated singular value decomposition (SVD) is proposed to estimate the location parameters (i.e., DOA and range) of the incident signals. Secondly, as a measure against the impact of finite array data, an alternating iterative scheme is presented to improve the estimation accuracy of the location parameters, where the “saturation behavior” encountered in most of localization methods is solved effectively. Furthermore, the statistical analysis of the proposed method is studied, and the asymptotic mean-squared-error (MSE) expressions of the estimation errors are derived for two location parameters. 
Finally, the effectiveness and the theoretical analysis are substantiated through numerical examples.},\n  keywords = {array signal processing;covariance matrices;direction-of-arrival estimation;iterative methods;mean square error methods;singular value decomposition;statistical analysis;oblique projection operator;subspace-based localization methods;DOA;multiple narrowband signals;saturation behavior;estimation performance;signal-to-noise ratio;array snapshots;multiple narrowband near-field signals;symmetrical uniform linear array;anti-diagonal elements;array covariance matrix;linear prediction approach;truncated singular value decomposition;location parameters;incident signals;finite array data;alternating iterative scheme;estimation accuracy;estimation errors;Arrays;Direction-of-arrival estimation;Signal to noise ratio;Covariance matrices;Narrowband;Estimation;Sensors},\n  doi = {10.23919/EUSIPCO.2018.8553601},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437077.pdf},\n}\n\n
\n
\n\n\n
\n Recently, many subspace-based localization methods have been developed for estimating the directions of arrival (DOAs) and ranges of multiple narrowband signals in the near field. However, most of them encounter “saturation behavior” in estimation performance, regardless of the signal-to-noise ratio (SNR), when the number of array snapshots is not sufficiently large. In this paper, we investigate the problem of localizing multiple narrowband near-field signals impinging on a symmetrical uniform linear array (ULA). Firstly, by exploiting the anti-diagonal elements of the array covariance matrix, a new linear prediction approach with truncated singular value decomposition (SVD) is proposed to estimate the location parameters (i.e., DOA and range) of the incident signals. Secondly, as a measure against the impact of finite array data, an alternating iterative scheme is presented to improve the estimation accuracy of the location parameters, which effectively resolves the “saturation behavior” encountered by most localization methods. Furthermore, the statistical analysis of the proposed method is studied, and the asymptotic mean-squared-error (MSE) expressions of the estimation errors are derived for the two location parameters. Finally, the effectiveness and the theoretical analysis are substantiated through numerical examples.\n
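The truncated-SVD ingredient mentioned in the abstract can be sketched generically: solve a rank-deficient linear-prediction-style system by keeping only the dominant singular directions of the data matrix. This is a generic illustration under assumed dimensions, not the paper's exact anti-diagonal covariance construction.

```python
import numpy as np

def truncated_svd_solve(A, b, rank):
    """Least-squares solution via a rank-truncated SVD pseudoinverse,
    discarding noise-dominated singular directions."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < rank, 1.0 / s, 0.0)
    return Vt.T @ (s_inv * (U.T @ b))

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 5))  # rank-2 system
x_true = rng.standard_normal(5)
b = A @ x_true + 1e-3 * rng.standard_normal(40)                 # noisy data
x_hat = truncated_svd_solve(A, b, rank=2)
print(np.linalg.norm(A @ x_hat - b))  # residual at the noise level
```

Truncation is what keeps the estimate stable when the snapshot count is small and the trailing singular values carry mostly noise.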
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Multi-Resolution Coded Apertures Based on Side Information for Single Pixel Spectral Reconstruction.\n \n \n \n \n\n\n \n Garcia, H.; Correa, C. V.; Sánchez, K.; Vargas, E.; and Arguello, H.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2215-2219, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"Multi-ResolutionPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553602,\n  author = {H. Garcia and C. V. Correa and K. Sánchez and E. Vargas and H. Arguello},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Multi-Resolution Coded Apertures Based on Side Information for Single Pixel Spectral Reconstruction},\n  year = {2018},\n  pages = {2215-2219},\n  abstract = {Compressive spectral imaging (CSI) architectures allow to reconstruct spectral images from a lower number of measures than the traditional scanning-based methods. In these architectures, the coded aperture design is critical to obtain high-quality reconstructions. The structure of coded apertures is traditionally designed without information about the scene, but recently side information-based architectures provide prior information of the scene, which enables adaptive coded aperture designs. This work proposes the development of an adaptive coded aperture design for spectral imaging with the single pixel camera, based on a multi-resolution approach. An RGB side image is used to define blocks of similar pixels, such that they can be used to design the coded aperture patterns. 
This approach improves the reconstruction quality in up to 23dB compared with traditional single pixel camera, and the computation time in up to 99.5% because it does not require an iterative algorithm.},\n  keywords = {adaptive codes;cameras;compressed sensing;image coding;image colour analysis;image reconstruction;image resolution;single pixel camera;scanning-based methods;adaptive coded aperture design;spectral images;compressive spectral imaging architectures;single pixel spectral reconstruction;multiresolution coded apertures;RGB side image;multiresolution approach;information-based architectures;coded aperture design;Apertures;Image reconstruction;Cameras;Detectors;Computer architecture},\n  doi = {10.23919/EUSIPCO.2018.8553602},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437460.pdf},\n}\n\n
\n
\n\n\n
\n Compressive spectral imaging (CSI) architectures make it possible to reconstruct spectral images from fewer measurements than traditional scanning-based methods. In these architectures, the coded aperture design is critical to obtaining high-quality reconstructions. The structure of coded apertures is traditionally designed without information about the scene, but recent side-information-based architectures provide prior information about the scene, which enables adaptive coded aperture designs. This work proposes an adaptive coded aperture design for spectral imaging with the single pixel camera, based on a multi-resolution approach. An RGB side image is used to define blocks of similar pixels, which are then used to design the coded aperture patterns. This approach improves reconstruction quality by up to 23 dB compared with the traditional single pixel camera, and reduces computation time by up to 99.5% because it does not require an iterative algorithm.\n
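The block-based, non-iterative idea can be sketched on a toy monochrome scene: a side-information segmentation assigns pixels to blocks, one binary coded-aperture pattern per block yields a single-pixel measurement, and reconstruction spreads each block mean back over its pixels. The block labels here are hand-picked stand-ins; the paper derives them from an RGB side image and works on spectral data.

```python
import numpy as np

# Hypothetical side-information segmentation: pixel -> block label.
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])
# Toy scene that happens to be constant within each block.
scene = np.array([[4., 4., 8., 8.],
                  [4., 4., 8., 8.],
                  [1., 1., 5., 5.],
                  [1., 1., 5., 5.]])

recon = np.zeros_like(scene)
for b in np.unique(labels):
    mask = (labels == b)              # binary coded-aperture pattern
    y = scene[mask].sum()             # one single-pixel measurement
    recon[mask] = y / mask.sum()      # block-mean reconstruction, no solver
print(np.abs(recon - scene).max())    # 0.0: exact, since blocks are constant
```

Because reconstruction is a direct block-mean assignment, no iterative recovery algorithm is needed, which is the source of the large computation-time savings the abstract reports.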
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the Angular Resolution Limit Uncertainty.\n \n \n \n \n\n\n \n Greco, M. S.; Boyer, R.; and Nielsen, F.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 623-626, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553603,\n  author = {M. S. Greco and R. Boyer and F. Nielsen},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {On the Angular Resolution Limit Uncertainty},\n  year = {2018},\n  pages = {623-626},\n  abstract = {The Angular Resolution Limit (ARL), denoted by δ, is a key statistical quantity to measure our ability to resolve two closely-spaced narrowband far-field complex sources. In the literature, the ARL, denoted by δ0, is systematically assumed to be perfectly known for mathematical convenience. In this work, our knowledge on the ARL is supposed to be only partial, meaning that δ ~ N(δ0, σδ2). The degree of uncertainty is quantified by the ratio ξ = δ02/σδ2. Based on the Chernoff Upper Bound (CUB) on the minimal error probability, we show that the CUB is highly dependent on the degree of uncertainty, ξ. As by-product, the optimal s-value for which the CUB is the tightest upper bound is analytically studied.},\n  keywords = {error statistics;statistical analysis;optimal s-value;Chernoff Upper Bound;mathematical convenience;far-field complex sources;key statistical quantity;ARL;Angular Resolution Limit uncertainty;minimal error probability;CUB;Uncertainty;Signal resolution;Signal to noise ratio;Error probability;Random variables;Europe;Angular Resolution Limit;model of uncertainty;upper bound on the error probability},\n  doi = {10.23919/EUSIPCO.2018.8553603},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435446.pdf},\n}\n\n
\n
\n\n\n
\n The Angular Resolution Limit (ARL), denoted by δ, is a key statistical quantity that measures our ability to resolve two closely-spaced narrowband far-field complex sources. In the literature, the ARL, denoted by δ0, is systematically assumed to be perfectly known for mathematical convenience. In this work, our knowledge of the ARL is assumed to be only partial, meaning that δ ∼ N(δ0, σδ²). The degree of uncertainty is quantified by the ratio ξ = δ0²/σδ². Based on the Chernoff Upper Bound (CUB) on the minimal error probability, we show that the CUB is highly dependent on the degree of uncertainty ξ. As a by-product, the optimal s-value for which the CUB is the tightest upper bound is studied analytically.\n
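For reference, the Chernoff upper bound on the minimal Bayes error probability that this abstract builds on has the standard form (for two equally likely hypotheses with densities p0 and p1; the s-parametrization below is the textbook one, not notation taken from the paper):

```latex
P_e \;\le\; \frac{1}{2}\,\exp\!\bigl\{-\mu(s)\bigr\},
\qquad
\mu(s) \;=\; -\log \int p_0^{\,s}(x)\, p_1^{\,1-s}(x)\, dx,
\quad 0 \le s \le 1,
```

and the tightest bound is obtained at the maximizer s* of μ(s), which is the "optimal s-value" the abstract refers to.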
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Blind Spectrum Sensing Based on Recurrence Quantification Analysis in the Context of Cognitive Radio.\n \n \n \n \n\n\n \n Kadjo, J. -.; Yao, K. C.; and Mansour, A.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1835-1839, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"BlindPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553604,\n  author = {J. -. Kadjo and K. C. Yao and A. Mansour},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Blind Spectrum Sensing Based on Recurrence Quantification Analysis in the Context of Cognitive Radio},\n  year = {2018},\n  pages = {1835-1839},\n  abstract = {In Cognitive Radio, spectrum sensing methods can be classified in three categories: temporal, frequential and hybrid (temporal and frequential) methods. Temporal methods require a long observation period; frequential and hybrid methods have a high calculation cost and they are very sensitive to frequency resolution. In very low signal-to-noise ratio (SNR) and non-cooperative conditions, spectrum sensing methods present some limitations. To overcome these shortcomings, we propose a new blind strategy to detect the unoccupied spectral bands during a very short observation period. This new strategy is a temporal method based on Recurrence Quantification Analysis (RQA) of the received signal. Since the recurrence level in a communication signal is different from that of White Gaussian Noise, the detector can evaluate the recurrence level of the observed signal to detect the presence of a communication signal over a given spectral bandwidth. First, we estimate the three fundamental parameters of the recurrence matrix: the time delay parameter, the embedding dimension and the recurrence threshold. With these parameters, during a detection stage, the detector evaluates the recurrence level through the recurrence rate and compare it to a predetermined threshold estimated in absence of the signal of interest. The spectrum sensing based on RQA is very fast, free of frequency resolution issue and able to distinguish communication signal from a White Gaussian Noise. 
The results of our simulations prove the robustness of proposed RQA detector acting over limited number of samples and under very low SNR conditions.},\n  keywords = {cognitive radio;Gaussian noise;radio spectrum management;signal detection;RQA detector;low SNR conditions;recurrence rate;recurrence threshold;recurrence matrix;observed signal;White Gaussian Noise;communication signal;recurrence level;received signal;short observation period;unoccupied spectral bands;blind strategy;low signal-to-noise ratio;high calculation cost;hybrid methods;frequential methods;long observation period;temporal method;spectrum sensing methods;Cognitive Radio;Recurrence Quantification Analysis;blind spectrum sensing;Mathematical model;Delay effects;Tools;Signal to noise ratio;Detectors;Random variables;Cognitive Radio;Spectrum Sensing;Recurrence Quantification Analysis;Embedding parameters;Mutual Information;False Nearest Neighbours},\n  doi = {10.23919/EUSIPCO.2018.8553604},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437830.pdf},\n}\n\n
\n
\n\n\n
\n In Cognitive Radio, spectrum sensing methods can be classified into three categories: temporal, frequential, and hybrid (temporal and frequential) methods. Temporal methods require a long observation period; frequential and hybrid methods have a high computational cost and are very sensitive to frequency resolution. Under very low signal-to-noise ratio (SNR) and non-cooperative conditions, spectrum sensing methods present some limitations. To overcome these shortcomings, we propose a new blind strategy to detect unoccupied spectral bands within a very short observation period. This new strategy is a temporal method based on Recurrence Quantification Analysis (RQA) of the received signal. Since the recurrence level in a communication signal differs from that of White Gaussian Noise, the detector can evaluate the recurrence level of the observed signal to detect the presence of a communication signal over a given spectral bandwidth. First, we estimate the three fundamental parameters of the recurrence matrix: the time delay parameter, the embedding dimension, and the recurrence threshold. With these parameters, during the detection stage, the detector evaluates the recurrence level through the recurrence rate and compares it to a predetermined threshold estimated in the absence of the signal of interest. Spectrum sensing based on RQA is very fast, free of frequency-resolution issues, and able to distinguish a communication signal from White Gaussian Noise. The results of our simulations demonstrate the robustness of the proposed RQA detector operating over a limited number of samples and under very low SNR conditions.\n
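The recurrence-rate statistic at the heart of the abstract can be sketched directly: delay-embed the signal and count pairs of embedded states closer than a threshold. In this toy version the delay, embedding dimension, and threshold are fixed by hand, whereas the paper estimates them from the data (e.g. via mutual information and false nearest neighbours).

```python
import numpy as np

def recurrence_rate(x, dim, tau, eps):
    """Recurrence rate: embed x with delay tau and dimension dim,
    then count the fraction of state pairs closer than eps."""
    n = len(x) - (dim - 1) * tau
    emb = np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return (d < eps).mean()

rng = np.random.default_rng(1)
t = np.arange(500)
tone = np.sin(2 * np.pi * 0.05 * t)      # highly recurrent "communication" signal
noise = rng.standard_normal(500)         # White Gaussian Noise
print(recurrence_rate(tone, 3, 5, 0.5),
      recurrence_rate(noise, 3, 5, 0.5))  # the tone recurs far more often
```

Thresholding this statistic against its noise-only baseline gives the binary occupied/unoccupied decision described in the abstract.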
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Feature Fusion via Tensor Network Summation.\n \n \n \n \n\n\n \n Calvi, G. G.; Kisil, I.; and Mandic, D. P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 2623-2627, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"FeaturePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553605,\n  author = {G. G. Calvi and I. Kisil and D. P. Mandic},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Feature Fusion via Tensor Network Summation},\n  year = {2018},\n  pages = {2623-2627},\n  abstract = {Tensor networks (TNs) have been earning considerable attention as multiway data analysis tools owing to their ability to tackle the curse of dimensionality through the representation of large-scale tensors via smaller-scale interconnections of their intrinsic features. However, despite the obvious benefits, the current treatment of TNs as stand-alone entities does not take full advantage of their underlying structure and the associated feature localization. To this end, we exploit the analogy with feature fusion to propose a rigorous framework for the combination of TNs, with a particular focus on their summation as a natural way of their combination. The proposed framework is shown to allow for feature combination of any number of tensors, as long as their TN representation topologies are isomorphic. Simulations involving multi-class classification of an image dataset show the benefits of the proposed framework.},\n  keywords = {data analysis;image classification;image fusion;image representation;tensors;multiway data analysis tools;large-scale tensors;multiclass classification;image dataset;tensor network summation;TN representation topologies;feature combination;feature fusion;associated feature localization;stand-alone entities;intrinsic features;Tensile stress;Topology;Feature extraction;Matrix decomposition;Signal processing;Europe;Tools;Sum of tensor networks;Tucker decomposition;classification;feature fusion;graphs},\n  doi = {10.23919/EUSIPCO.2018.8553605},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434763.pdf},\n}\n\n
\n
\n\n\n
\n Tensor networks (TNs) have been attracting considerable attention as multiway data analysis tools owing to their ability to tackle the curse of dimensionality through the representation of large-scale tensors via smaller-scale interconnections of their intrinsic features. However, despite the obvious benefits, the current treatment of TNs as stand-alone entities does not take full advantage of their underlying structure and the associated feature localization. To this end, we exploit the analogy with feature fusion to propose a rigorous framework for the combination of TNs, with a particular focus on their summation as a natural way of combining them. The proposed framework is shown to allow for feature combination of any number of tensors, as long as their TN representation topologies are isomorphic. Simulations involving multi-class classification of an image dataset show the benefits of the proposed framework.\n
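Summing two tensor networks with the same topology amounts to letting the ranks add and stacking the factors block-diagonally. A minimal sketch for the tensor-train (TT) topology, assuming 3-way tensors with hand-picked ranks (this is the generic TT sum, not the paper's full fusion framework):

```python
import numpy as np

def tt_full(cores):
    """Contract a tensor-train (TT) network back into a dense tensor."""
    out = cores[0]                                  # shape (1, n1, r1)
    for G in cores[1:]:
        out = np.tensordot(out, G, axes=([out.ndim - 1], [0]))
    return out.reshape(out.shape[1:-1])

def tt_sum(a_cores, b_cores):
    """Sum of two TT tensors with identical topology: ranks add, and the
    cores are stacked block-diagonally (boundary cores stack along their
    single rank index)."""
    cores = []
    for k, (A, B) in enumerate(zip(a_cores, b_cores)):
        if k == 0:
            cores.append(np.concatenate([A, B], axis=2))
        elif k == len(a_cores) - 1:
            cores.append(np.concatenate([A, B], axis=0))
        else:
            C = np.zeros((A.shape[0] + B.shape[0], A.shape[1],
                          A.shape[2] + B.shape[2]))
            C[:A.shape[0], :, :A.shape[2]] = A
            C[A.shape[0]:, :, A.shape[2]:] = B
            cores.append(C)
    return cores

rng = np.random.default_rng(0)
shapes = [(1, 4, 2), (2, 5, 3), (3, 6, 1)]          # assumed TT core shapes
a = [rng.standard_normal(s) for s in shapes]
b = [rng.standard_normal(s) for s in shapes]
s = tt_full(tt_sum(a, b))
print(np.allclose(s, tt_full(a) + tt_full(b)))      # True
```

The isomorphic-topology requirement in the abstract is visible here: the block-diagonal stacking only makes sense when the two networks have matching cores to pair up.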
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n On the detection of low rank matrices in the high-dimensional regime.\n \n \n \n \n\n\n \n Chevreuil, A.; and Loubaton, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1102-1106, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"OnPaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553606,\n  author = {A. Chevreuil and P. Loubaton},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {On the detection of low rank matrices in the high-dimensional regime},\n  year = {2018},\n  pages = {1102-1106},\n  abstract = {We address the detection of a low rank n×n matrix X0 from the noisy observation X0+Z when n → ∞, where Z is a complex Gaussian random matrix with independent identically distributed ℕc (0, [1/n]) entries. Thanks to large random matrix theory results, it is now well-known that if the largest singular value λ1(X0) of X0 verifies λ1(X0) > 1, then it is possible to exhibit consistent tests. In this contribution, we prove a contrario that under the condition λ1(X0) <; 1, there are no consistent tests. Our proof is inspired by previous works devoted to the case of rank 1 matrices X0.},\n  keywords = {matrix algebra;random processes;low rank matrices;high-dimensional regime;consistent tests;singular value;gaussian random matrix;noisy observation;Tensile stress;Upper bound;Europe;Signal processing;Random variables;Matrix decomposition;Noise measurement;statistical detection tests;large random matrices;large deviation principle},\n  doi = {10.23919/EUSIPCO.2018.8553606},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437336.pdf},\n}\n\n
\n
\n\n\n
\n We address the detection of a low rank n×n matrix X0 from the noisy observation X0+Z when n → ∞, where Z is a complex Gaussian random matrix with independent identically distributed Nc(0, 1/n) entries. Thanks to large random matrix theory results, it is now well-known that if the largest singular value λ1(X0) of X0 verifies λ1(X0) > 1, then it is possible to exhibit consistent tests. In this contribution, we prove a contrario that under the condition λ1(X0) < 1, there are no consistent tests. Our proof is inspired by previous works devoted to the case of rank 1 matrices X0.\n
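The detectable regime λ1(X0) > 1 can be illustrated numerically: for i.i.d. entries of variance 1/n, the noise singular values stay near the bulk, while a sufficiently strong rank-1 spike pushes the top singular value of X0+Z above it. The dimensions and spike strength below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
# Complex Gaussian noise with i.i.d. Nc(0, 1/n) entries, as in the abstract.
Z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) \
    / np.sqrt(2 * n)

u = rng.standard_normal(n); u /= np.linalg.norm(u)
v = rng.standard_normal(n); v /= np.linalg.norm(v)
theta = 2.0                        # λ1(X0) > 1: the detectable regime
X0 = theta * np.outer(u, v)        # rank-1 signal

lam_null = np.linalg.svd(Z, compute_uv=False)[0]
lam_spiked = np.linalg.svd(X0 + Z, compute_uv=False)[0]
# With λ1(X0) > 1, the spiked top singular value separates from the
# noise-only one, so a threshold test on λ1 succeeds.
print(lam_null, lam_spiked)
```

Conversely, for λ1(X0) < 1 the two statistics become indistinguishable as n grows, which is exactly the impossibility result the paper proves.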
\n\n\n
\n\n\n
\n \n\n \n \n \n \n \n \n Scalable Light Field Coding with Support for Region of Interest Enhancement.\n \n \n \n \n\n\n \n Conti, C.; Soares, L. D.; and Nunes, P.\n\n\n \n\n\n\n In 2018 26th European Signal Processing Conference (EUSIPCO), pages 1855-1859, Sep. 2018. \n \n\n\n\n
\n\n\n\n \n \n \"ScalablePaper\n  \n \n\n \n \n doi\n  \n \n\n \n link\n  \n \n\n bibtex\n \n\n \n  \n \n abstract \n \n\n \n\n \n \n \n \n \n \n \n\n  \n \n \n \n \n\n\n\n
\n
@InProceedings{8553608,\n  author = {C. Conti and L. D. Soares and P. Nunes},\n  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},\n  title = {Scalable Light Field Coding with Support for Region of Interest Enhancement},\n  year = {2018},\n  pages = {1855-1859},\n  abstract = {Light field imaging based on microlens arrays - a.k.a. holoscopic, plenoptic, and integral imaging - has currently risen up as a feasible and prospective technology for future image and video applications. However, deploying actual light field applications will require identifying more powerful representation and coding solutions that support emerging manipulation and interaction functionalities. In this context, this paper proposes a novel scalable coding approach that supports a new type of scalability, referred to as Field of View (FOV) scalability, in which enhancement layers can correspond to regions of interest (ROI). The proposed scalable coding approach comprises a base layer compliant with the High Efficiency Video Coding (HEVC) standard, complemented by one or more enhancement layers that progressively allow richer versions of the same light field content in terms of content manipulation and interaction possibilities, for the whole scene or just for a given ROI. 
Experimental results show the advantages of the proposed scalable coding approach with ROI support to cater for users with different preferences/requirements in terms of interaction functionalities.},\n  keywords = {data compression;image enhancement;microlenses;video coding;HEVC standard;field of view scalability;plenoptic imaging;holoscopic imaging;light field imaging;region of interest enhancement;microlens arrays;high efficiency video coding standard;actual light field applications;video applications;integral imaging;scalable light field coding;ROI support;interaction possibilities;light field content;base layer compliant;enhancement layers;Lenses;Scalability;Image coding;Microoptics;Encoding;Cameras;light field;field of view scalability;region of interest;image compression;HEVC},\n  doi = {10.23919/EUSIPCO.2018.8553608},\n  issn = {2076-1465},\n  month = {Sep.},\n  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439300.pdf},\n}\n\n
@InProceedings{8553610,
  author = {H. Kasai},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Accelerated stochastic multiplicative update with gradient averaging for nonnegative matrix factorizations},
  year = {2018},
  pages = {2593-2597},
  abstract = {Nonnegative matrix factorization (NMF) is a powerful tool in data analysis by discovering latent features and part-based patterns from high-dimensional data, and is a special case in which factor matrices have low-rank nonnegative constraints. Applying NMF into huge-size matrices, we specifically address stochastic multiplicative update (MU) rule, which is the most popular, but which has slow convergence property. This present paper introduces a gradient averaging technique of stochastic gradient on the stochastic MU rule, and proposes an accelerated stochastic multiplicative update rule: SAGMU. Extensive computational experiments using both synthetic and real-world datasets demonstrate the effectiveness of SAGMU.},
  keywords = {matrix decomposition;stochastic processes;nonnegative matrix factorization;NMF;data analysis;latent features;part-based patterns;high-dimensional data;factor matrices;low-rank nonnegative constraints;huge-size matrices;gradient averaging technique;stochastic gradient;stochastic MU rule;accelerated stochastic multiplicative update rule;Signal processing algorithms;Convergence;Acceleration;Europe;Signal processing;Optimization;Machine learning algorithms;nonnegative matrix factorization;multiplicative update;stochastic gradient;gradient averaging},
  doi = {10.23919/EUSIPCO.2018.8553610},
  issn = {2076-1465},
  month = {Sep.},
}
@InProceedings{8553611,
  author = {B. Kim and Z. Rafii},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Lossy Audio Compression Identification},
  year = {2018},
  pages = {2459-2463},
  abstract = {We propose a system which can estimate from an audio recording that has previously undergone lossy compression the parameters used for the encoding, and therefore identify the corresponding lossy coding format. The system analyzes the audio signal and searches for the compression parameters and framing conditions which match those used for the encoding. In particular, we propose a new metric for measuring traces of compression which is robust to variations in the audio content and a new method for combining the estimates from multiple audio blocks which can refine the results. We evaluated this system with audio excerpts from songs and movies, compressed into various coding formats, using different bit rates, and captured digitally as well as through analog transfer. Results showed that our system can identify the correct format in almost all cases, even at high bit rates and with distorted audio, with an overall accuracy of 0.96.},
  keywords = {audio coding;audio recording;audio signal processing;data compression;encoding;audio signal;framing conditions;audio content;multiple audio blocks;audio excerpts;coding formats;distorted audio;lossy audio compression identification;audio recording;lossy coding format;Microsoft Windows;Time-frequency analysis;Digital audio players;Bit rate;Audio compression;Audio coding;lossy compression;audio coding format},
  doi = {10.23919/EUSIPCO.2018.8553611},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436395.pdf},
}
@InProceedings{8553612,
  author = {Y. Chang and R. Mazzon and A. Cavallaro},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Real-Time Quality Assessment of Videos from Body-Worn Cameras},
  year = {2018},
  pages = {2160-2164},
  abstract = {Videos captured with body-worn cameras may be affected by distortions such as motion blur, overexposure and reduced contrast. Automated video quality assessment is therefore important prior to auto-tagging, event or object recognition, or automated editing. In this paper, we present M-BRISQUE, a spatial quality evaluator that combines, in realtime, the Michelson contrast with features from the Blind/Referenceless Image Spatial QUality Evaluator. To link the resulting quality score to human judgement, we train a Support Vector Regressor with Radial Basis Function kernel on the Computational and Subjective Image Quality database. We show an example of application of M-BRISQUE in automatic editing of multi-camera content using relative view quality, and validate its predictive performance with a subjective evaluation and two public datasets.},
  keywords = {cameras;image capture;image motion analysis;image restoration;interference (signal);object recognition;radial basis function networks;regression analysis;support vector machines;video signal processing;Radial Basis Function;multicamera content;event recognition;object recognition;auto-tagging;support vector regressor;computational image quality database;subjective image quality database;Blind/Referenceless Image Spatial QUality Evaluator;Michelson contrast;M-BRISQUE;automated editing;automated video quality assessment;body-worn cameras;Videos;Distortion;Cameras;Quality assessment;Databases;Real-time systems;Body-worn cameras;video quality;real-time processing},
  doi = {10.23919/EUSIPCO.2018.8553612},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570437924.pdf},
}
@InProceedings{8553613,
  author = {J. Lin and Q. Li and M. Ma},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Joint Long-Term Admission Control and Beamforming in Downlink MISO Networks},
  year = {2018},
  pages = {937-941},
  abstract = {Admission control has been widely utilized to alleviate network congestion. However, most current studies choose the admissible users based on instantaneous channel information. Due to the time-varying characteristics of wireless channels, the admissible user set changes quickly, which complicates network management and renders heavy operational costs. This motivates us to take the stability of the admissible user set into account in admission control, thus leading to a long-term admission control problem. In this paper, we consider a joint long-term admission control and beamforming problem in a downlink network consisting of one multi-antenna base station (BS) and multiple single-antenna users. To maintain a relatively stable admissible user set and minimize the power cost for the admissible users to achieve their quality-of-service levels, we jointly optimize the admissible users, the BS transmit beamformers, and the switching frequency of each user's admissible status in a given time period. To handle this challenging non-convex problem, we first design a sequential convex approximation (SCA) algorithm to iteratively compute a stationary solution. To facilitate algorithm's implementation, we further employ the alternating direction method of multipliers to come up with an efficient, semi-closed-form update of each SCA problem.},
  keywords = {antenna arrays;approximation theory;array signal processing;concave programming;convex programming;iterative methods;MISO communication;quality of service;sequential estimation;stability;telecommunication congestion control;joint long-term admission control;beamforming problem;downlink MISO networks;network congestion;instantaneous channel information;time-varying characteristics;wireless channels;network management;admissible user stability;multiantenna base station;multiantenna BS;multiple single-antenna users;quality-of-service;nonconvex problem;sequential convex approximation algorithm;SCA algorithm;alternating direction method of multipliers;Admission control;Signal processing algorithms;Approximation algorithms;Array signal processing;Europe;Multi-input single-output;long-term admission control;beamforming;sequential convex approximation;alternating direction method of multipliers},
  doi = {10.23919/EUSIPCO.2018.8553613},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570434305.pdf},
}
@InProceedings{8553615,
  author = {I. Ardi and H. Carfantan and S. Lacroix and A. Monmayrant},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Fast Hyperspectral Cube Reconstruction for a Double Disperser Imager},
  year = {2018},
  pages = {2225-2229},
  abstract = {We consider the problem of hyperspectral cube reconstruction with a new controllable imaging system. The reconstruction with a small number of images acquired with different configurations of the imager avoids a complete scanning of the hyperspectral cube. We focus here on a quadratic penalty reconstruction approach, which provides a fast resolution thanks to the high sparsity of the involved matrices. While such a regularization is known to smooth the restored images, we propose to exploit the system capability to acquire the panchromatic image of the scene, to introduce prior information on the sharp edges of the image, leading to a fast and edge-preserved reconstruction of the image.},
  keywords = {hyperspectral imaging;image reconstruction;image resolution;image restoration;image scanners;matrix algebra;image scanning;image restoration;image resolution;matrix algebra;panchromatic image acquisition;edge-preserved image reconstruction;quadratic penalty reconstruction approach;controllable imaging system;double disperser imager;fast hyperspectral cube reconstruction;Image reconstruction;Charge coupled devices;Image edge detection;Hyperspectral imaging;Optical imaging;Mathematical model},
  doi = {10.23919/EUSIPCO.2018.8553615},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436359.pdf},
}
@InProceedings{8553617,
  author = {E. Beck and C. Bockelmann and A. Dekorsy},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Compressed Edge Spectrum Sensing for Wideband Cognitive Radios},
  year = {2018},
  pages = {1705-1709},
  abstract = {Free licensed spectral bands have become rare due to the increasing number of wireless users and their demand for high data rates. Likewise, the static allocation of these bands results in an under-utilization of the spectrum. Cognitive Radio (CR) has emerged as a promising solution to the dilemma by allowing opportunistic users to transmit in the absence of licensed users. Spectrum sensing is therefore the key component of CR and coexistence management in general. In order to detect as much transmission opportunities as possible, a large bandwidth has to be monitored which according to Shannon-Nyquist necessitates high sampling rates. For fast and accurate spectrum estimation, we propose a novel approach called Compressed Edge Spectrum Sensing (CESS) which exploits the sparsity of power spectrum edges and allows for sampling down to 6% of Nyquist without losses in the detection accuracy of occupied and unoccupied spectrum regions.},
  keywords = {cognitive radio;Shannon-Nyquist necessitates high sampling rates;unoccupied spectrum regions;occupied spectrum regions;power spectrum edges;accurate spectrum estimation;licensed users;opportunistic users;Cognitive Radio;static allocation;high data rates;wireless users;spectral bands;wideband cognitive radios;Edge Spectrum Sensing;Sensors;Image edge detection;Bandwidth;White spaces;Europe;Signal processing;Cognitive radio},
  doi = {10.23919/EUSIPCO.2018.8553617},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570435771.pdf},
}
@InProceedings{8553618,
  author = {V. Matta and M. {Di Mauro} and M. Longo and A. Farina},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Multiple Cyber-Threats Containment Via Kendall's Birth-Death-Immigration Model},
  year = {2018},
  pages = {2554-2558},
  abstract = {This work examines the problem of modeling and containing multiple cyber-threats that propagate across multiple subnets of a data network. With regard to threat modeling we propose to employ the Birth-Death-Immigration (BDI) model pioneered by Kendall in his seminal work of 1948 [1]. With regard to threat containment assuming that a certain resource budget is available to mitigate the threats we illustrate how the notable properties of the BDI model can be exploited to provide the optimal resource allocation across the attacked subnets.},
  keywords = {demography;resource allocation;security of data;multiple subnets;data network;threat modeling;BDI model;birth-death-immigration model;multiple cyber-threats containment;optimal resource allocation;Mathematical model;Curing;Random variables;Stochastic processes;Probability distribution;Signal processing;Europe},
  doi = {10.23919/EUSIPCO.2018.8553618},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570436832.pdf},
}
@InProceedings{8553620,
  author = {L. Lazaridis and A. Dimou and P. Daras},
  booktitle = {2018 26th European Signal Processing Conference (EUSIPCO)},
  title = {Abnormal Behavior Detection in Crowded Scenes Using Density Heatmaps and Optical Flow},
  year = {2018},
  pages = {2060-2064},
  abstract = {Crowd behavior analysis is an arduous task due to scale, light and crowd density variations. This paper aims to develop a new method that can precisely detect and classify abnormal behavior in dense crowds. A two-stream network is proposed that uses crowd density heat-maps and optical flow information to classify abnormal events. Work on this network has highlighted the lack of large scale relevant datasets due to the fact that dealing and annotating such kind of data is a highly time consuming and demanding task. Therefore, a new synthetic dataset has been created using the Grand Theft Auto V engine which offers highly detailed simulated crowd abnormal behaviors.},
  keywords = {behavioural sciences computing;image motion analysis;image sequences;object detection;abnormal behavior detection;crowded scenes;density heatmaps;crowd behavior analysis;crowd density variations;dense crowds;two-stream network;density heat-maps;optical flow information;scale relevant datasets;crowd abnormal behaviors;Heating systems;Videos;Training;Convolution;Optical distortion;Optical imaging;Optical signal processing},
  doi = {10.23919/EUSIPCO.2018.8553620},
  issn = {2076-1465},
  month = {Sep.},
  url = {https://www.eurasip.org/proceedings/eusipco/eusipco2018/papers/1570439443.pdf},
}